|
NTSB BOARD MEETING |
This accident did not have to happen. The accident chain started with something as basic as inadequate tire inflation and ended in tragedy. As the Polish poet Stanislaw Lec wrote, "The weakest link in the chain is the strongest. It can break the chain." This accident illustrates, all too clearly, what can happen when one link in the chain breaks.
This entirely avoidable crash should reinforce to everyone in the aviation community that there are no small maintenance items because every time a plane takes off, lives are on the line. No one wants to be the weakest link - that is why we must have robust maintenance standards, rigorous design requirements, great training, and well-executed decisions in the cockpit.
The captain in this accident was required to make a decision - in an instant - about whether to continue with the takeoff, consistent with her training, or to abort the takeoff after she had reached a speed at which she could not stop the plane on the remaining runway. That decision, made in the span of about 2 seconds, took her life and the lives of 3 others, and left 2 others seriously injured. But she was not alone in rejecting a high-speed takeoff after passing V1. History has shown that others, experienced pilots among them, have made the same choice. The data shows us that over half of rejected takeoff accidents and incidents occur after the airplane's speed passes V1.
Pilots are trained to avoid a rejected takeoff at high speed unless the aircraft cannot fly, but unfortunately, in a split second, pilots do not - and perhaps are not prepared to - accurately evaluate whether their aircraft cannot fly. Data cited in the Safety Board's 1990 Special Investigation Report on Runway Overruns Following High Speed Rejected Takeoffs notes that, out of 69 Boeing RTO events, in over half of them the crew incorrectly interpreted the cause.
High-speed rejected takeoffs are high risk, and they can have catastrophic results. It is therefore not surprising that, to reduce the risks of a rejected takeoff, pilots are trained in the Go/No-Go decision-making process. V1 marks the end of the Go/No-Go decision. If you have not applied the brakes by V1, you have made the Go decision. Tragically, even though this is what we know, and what is taught, in this accident the high cost of the lesson was demonstrated once again.
Pilots must be prepared to make the difficult decision, consistent with their training, and not be fearful to commit, but they must have the training so that they can do what they are taught to do. They must train like they fly, and fly like they train. To quote Johann Wolfgang von Goethe, "Knowing is not enough; you must apply it."
The FAA had information about Learjet 60 design deficiencies prior to this accident, yet it did not fully assess and incorporate that data into its certification process for changed aeronautical products. As we discussed today, neither the FAA nor Learjet adequately reviewed the Learjet 60's design after an accident in Alabama in 2001 in which two people were seriously injured and the airplane was destroyed.
While the modifications put into place by the FAA after the 2001 accident provided additional protection against uncommanded forward thrust upon landing, no such protection was provided for a rejected takeoff. But whether the damage occurs on landing or takeoff, malfunctioning thrust reversers can be just as deadly. The data was there; it showed a problem with the thrust reversers when there was damage to a sensor on the landing gear and additional data existed to demonstrate that tire failures were a common factor in RTOs. Yet, the FAA and Learjet did not connect the dots.
This accident investigation confirms what the Safety Board concluded in its 2006 study on the aircraft certification process. That is, if we are serious about safety, we must have programs in place to ensure the continued airworthiness of aircraft and to assess the risks to safety-critical systems. An effective program can effect changes throughout the life of an airplane and can ensure that ongoing decisions about design, operations, maintenance, and continued airworthiness consider new data, service history, lessons learned, and new knowledge.
However, as this accident tragically demonstrates, FAA certification standards that allow grandfathering of derivative designs are in dire need of an overhaul. The Learjet 60 was certificated in 1993 based on some requirements that were in effect in 1965. That means that these standards were nearly 30 years old. A lot had been learned about airworthiness in those three decades, yet the FAA did not require the manufacturer to demonstrate that it had incorporated all of that new information into safety improvements. Had it done so, in 1993 or perhaps in 2001, this accident may not have happened.
This is particularly troubling because industry had accepted a process for conducting this ongoing assessment (SAE ARP5150), and in 2006, the Safety Board had issued a safety recommendation (A-06-38, which is classified in an unacceptable status) to require such a program. The report adopted today concludes that "had the Federal Aviation Administration adopted the procedures described in SAE International's SAE ARP5150, Safety Assessment of Transport Airplanes in Commercial Service, to require a program for the monitoring and ongoing assessment of safety-critical systems, the FAA may have recognized, based on problems reported after previous incidents and an accident, that the Learjet 60's thrust reverser system design was deficient and thus may have required appropriate modifications before this accident occurred."
If we are serious about safety, a dedicated program to continually monitor and assess risks is critical. This reminds me of a news article I read regarding the recent safety recall by Toyota in response to accounts of stuck gas pedals and unintended acceleration in Toyota automobiles. The article revealed that driver complaints about speed control problems were higher for Toyota than for other big automakers - and that the pattern had been almost unbroken since 2004. Toyota and NHTSA both had the data; they just didn't see the problem until it was too late.
Even if we have the data, we have to know what questions to ask. Unfortunately, unless we know what we are looking for, we may be looking for zebras when we should really be looking for horses. When the value of safety data lies in its predictive power, the problem will always be that we don't know what we don't know. The FAA and many others in the safety community believe that data programs are the way of the future. They might say that our findings from this one accident are not reflective of industry-wide problems. Yet we, in the business of forensic accident investigation, are compelled to ask: of what use is the data that is collected if it is not used to identify the right problems? Until we identify the problem, we cannot implement solutions.
There is an old saying, "What you don't know can't hurt you." When it comes to aviation safety, I suggest the opposite - what we don't know CAN hurt us. Whether through accident investigations or data programs, we must draw out the knowledge, and we must share what we know - though it may be up to others to implement our recommendations.
As you enter the doors of our Training Center in Ashburn, Virginia, you'll see the quote, "From tragedy we draw knowledge to improve the safety of us all." At the Safety Board, we focus on lessons learned, recognizing that we can achieve safety improvements as a consequence of adversity.
Sadly, there is one final thing that the data has borne out time and again - merely communicating is not enough. Communication has to be translated into action. If changes are not made as a result of our investigations, history is bound to repeat itself.