

Presentation to the ALPA Board of Directors Conference, Washington, DC
Christopher A. Hart
Washington, DC
10/18/2016

Thank you for that kind introduction, and thanks to the Air Line Pilots Association for inviting me to speak on behalf of the National Transportation Safety Board.

As most of you know, the NTSB investigates accidents in transportation, determines their causes, and makes safety recommendations to prevent recurrences. So my remarks today come from the context of our experience as accident investigators.

Today I would like to talk to you about automation. The good news is that there is more automation; and the bad news is that there is more automation.  Airline pilots have a long history of transitioning to more and more automated operations. In fact, according to a 2015 article in the New York Times, Boeing 777 pilots reported that they spent just seven minutes manually piloting their planes in a typical flight. Airbus pilots spent half that time. Experience has shown that automation can improve not only safety, but reliability, efficiency, and productivity.  The problem is that there is also a downside.

All of you know first-hand that automation is becoming more prevalent in flight operations.  Yet human pilots are still crucial to the process. To determine why this is the case, it is instructive to explore both the theory of full automation, and the present state of increasing, but not complete, automation. Completely manual operation is unambiguous, as is completely automated operation – in both instances, it is clear who is in charge.  When the automation is combined with human operators, however, it is not always clear who is in charge.

The theory of automation in the flight deck is that if there is no pilot, there will be no pilot error.  Removing the pilot would also address at least four human-based issues on the NTSB’s Most Wanted List of Transportation Safety Improvements – fatigue; distractions; impairment; and medical fitness for duty.  Unfortunately, the theory is deficient in several respects.

The first defect is that the theory assumes that the automation is working as designed.  But what if the automation fails?  Will it fail in a way that is safe?  If it cannot be guaranteed to fail safe, will it inform the pilot of the failure in a timely manner, and will the pilot then be able to take over successfully?  The fatal Metro accident that occurred here in Washington in 2009 resulted from an automation failure of which the operator was unaware until it was too late.

Another defect is that the theory fails to address what happens when the automation encounters unanticipated situations. The automated system’s knowledge is only as complete as its programming. If a situation has never been encountered before, and was not imagined by human programmers, the automation might be unable to respond.

Last but not least, removing the pilot does not remove other sources of human error. Humans are still involved in designing, manufacturing, and maintaining the aircraft; and human error in these steps is likely to be more systemic in its effect – possibly involving several aircraft – and more difficult to find and correct.  For example, we investigated a collision involving a driverless airport people mover that resulted in part from improper maintenance.

Beyond designing, manufacturing, and maintaining the aircraft, humans are involved in the system in other ways, e.g., as pilots of other airplanes, air traffic controllers, and airport personnel who are involved in airport functions such as wildlife hazard mitigation and clearing contaminated runways. Each of these points of human engagement presents additional opportunities for human error. 

The most fundamental lesson that we have learned from our accident investigation experience is that introducing automation into complex human-centric systems can be very challenging.  The problems that we have seen thus far from increasing automation include increasing complexity, degradation of skills, complacency, and the potential loss of professionalism.

For example, in the 2013 crash of Asiana flight 214 in San Francisco, the pilots had become confused by the behavior of the airplane’s automated systems when used in various modes. The pilot’s mode selection caused the auto-throttle to disengage, and the pilot incorrectly assumed that the auto-throttle would “wake up” and maintain the desired speed. As a result, Asiana 214 came in low and slow, and crashed into a seawall while landing.

This crash illustrated not only confusion attributable to the complexity of the automation in the airplane, but also the degradation of the pilot’s skills to the extent that he was unable to complete a manual approach and landing on an 11,000-foot, PAPI-equipped runway on a clear day with negligible wind. Asiana’s automation policy emphasized the full use of all automation and did not encourage manual flight during line operations.

In addition to automation confusion and de-skilling, our investigation experience has shown that the challenge of complacency can arise as pilots operate aircraft with more, and more reliable, automation.  Complacency is particularly insidious because it becomes more pronounced the safer an operation becomes.  The more pilots become accustomed to the automation safely operating the airplane, the more concerted effort it takes to keep them engaged.

In other words, there can be too much of a good thing.  In addition to complacency, can too much automation, when it is working properly, undermine professionalism? 

Many subway systems provide an example of this, because they are largely automated.  The automation takes the train from the station, maintains appropriate speeds, maintains adequate distance from other trains, stops at the right place in the next station, and then opens the doors.  The operator performs only one function – closing the doors.

When the operator’s only function is to close the doors because everything else is automatic, does that operator love his or her work and enjoy the pride of accomplishment, or will he or she just be there to get a paycheck?  If the paycheck is the primary objective, what does that do to professionalism?  Unlike the problems that occur when the automation fails, this problem occurs when the automation is performing correctly.

Our investigations of automation-related accidents have revealed two extremes.  On one hand, the human operator is the least predictable and most unreliable part of the system.  On the other hand, the human operator is potentially the most adaptable part of the system when failures occur or unanticipated situations are encountered.

One example of the human as the most adaptable part of the system was the crash landing of United Airlines flight 232 in Sioux City, Iowa, in 1989. As you probably remember, the tail-mounted engine failed catastrophically, sending fan rotor parts through all three of the airplane’s hydraulic systems.  We have all seen that crash, as filmed through a chain-link fence.  It is truly incredible that, without any hydraulics, the pilots were able to use the differential thrust of the two remaining engines to maneuver the aircraft to an airport – a feat that very few pilots were subsequently able to repeat in a simulator – and it is even more incredible that more than 60% of the passengers and crew survived.

Another example of the human as the most adaptable part of the system was Captain Chesley Sullenberger’s amazing landing in the Hudson River when his airplane suddenly became a glider because both of its engines were taken out by birds. That flight crew had never been trained to glide an airliner; they had never been trained to land in the water; and they had never been trained to land without power.  Despite that lack of training, Captain Sullenberger and his crew were able to save the day by quickly and calmly assessing the situation, determining that a ditching in the Hudson was the best course of action, and executing the ditching successfully.

Some of you may not be aware that there was an automation aspect to the landing in the Hudson.  In order to minimize the vertical impact velocity, Captain Sullenberger had planned to pull the nose up to the upper alpha limit during the flare.  Unbeknownst to him, however, the airplane’s phugoid damping software stopped his nose-up command 3 ½ degrees short of the upper alpha limit.  Consequently, the vertical impact speed was higher than it would otherwise have been, and the rear fuselage structure was breached to the extent that a flight attendant seated in the rear was injured and water entered the airplane.  As a result, automation that was intended to improve safety and comfort actually hindered the most adaptable part of the system, the human pilot.

On the other hand, the Colgan crash near Buffalo, New York, in 2009, was a case of the human pilot as the most unreliable and unpredictable part of the system.

Because of his inattention and lack of situational awareness, the pilot placed the airplane in a situation that caused the stick shaker and stick pusher to activate; he then responded inappropriately, causing an aerodynamic stall and the crash.  FAA records indicated that this pilot had previously received four certificate disapprovals, one of which he did not disclose to Colgan. Furthermore, Colgan’s training records indicated that while he was a first officer, he needed additional training after three separate checkrides.

In this instance, the pilot should never have been in the cockpit of a transport airplane, but the filters that are intended to weed out pilots such as him failed.

Another textbook example of the human as the most unreliable part of the system was Air France Flight 447 from Rio de Janeiro to Paris in 2009.  In that instance, however, those pilots were largely set up to fail.

After Air France 447 reached its cruise altitude at night over the Atlantic and began approaching distant thunderstorms, the captain left the cockpit for a scheduled rest break, giving control to two less experienced pilots. At the ambient temperature, and with abundant supercooled water from the nearby thunderstorms, the pitot tube heaters were overwhelmed, the pitot tubes became clogged, and the airplane lost its airspeed information.

The loss of airspeed information caused the autopilot and the auto-throttle to fail, and the pilots had to fly the airplane manually.  The loss of airspeed information also disabled the alpha protection.  The pilots responded inappropriately to the loss of these systems, stalled the airplane, and crashed into the ocean. 

Several factors played a role in this crash.  To begin with, the three pitot tubes did not effectively provide redundancy, because they were all taken out by the same cause.  In addition, the pilots had not experienced this type of failure before, even in training, and they were unable to figure out what happened. 

The error messages that the pilots received did not help them determine cause (blocked pitot tubes) and effect (failed systems that need airspeed information to work).  Had the pilots known that the cause of the error messages was the loss of airspeed information, they might have known to revert to pitch and power.  Crew resource management also failed: the pilot flying did not communicate that he had pulled the stick back, and the pilot monitoring did not ask; and because the airplane was equipped with side sticks rather than yokes, the pilot monitoring could not see that the pilot flying was pulling back on the stick.

Finally, autopilot use is mandatory at cruise altitudes, so the pilots had never flown the airplane manually at that altitude, even in the simulator, and they had never had any stall recognition or recovery training at that altitude.

As an aside, the pitot tubes had frozen before in that type of airplane, but the pilots in the previous encounters responded successfully.  Consequently, the fleet, including the accident airplane, was scheduled for the installation of more robust heaters, but given the previously successful encounters, an immediate emergency replacement was not considered to be necessary. 

Increasing automation reduces the workload of the human operator. When all is going well, automation brings unparalleled safety, reliability, productivity, and efficiency. However, when something goes wrong or something unexpected happens, a human pilot can save the day with much-needed adaptability.  The challenge is how to reap the benefits of automation while minimizing its potential downsides.  Let me suggest that one of the best ways is through collaboration.

For the past two decades, the aviation industry has been demonstrating the power of collaboration through the Commercial Aviation Safety Team, or CAST.

Aviation in the U.S. has become amazingly safe. In fact, the last fatal crash of a U.S. airliner was the 2009 Colgan crash that I just discussed. The last fatal crash of any airliner in the U.S. was the 2013 crash of Asiana flight 214.

Although automation has played an important role in the industry’s continuing safety improvement, much of the industry’s exemplary safety record is attributable to collaboration.  In the early 1990s, after declining rapidly for years, the industry’s accident rate began to flatten onto a plateau.  Meanwhile, the Federal Aviation Administration was predicting that the volume of flying would double in 15-20 years.

The industry became very concerned that if the volume doubled while the accident rate remained the same, the public would see twice as many airplane crashes on the news.  That caused the industry to do something that, to my knowledge, has never been done before or since at an industry-wide level in any other industry: it pursued a voluntary, collaborative, industry-wide approach to improving safety.  This occurred largely because David Hinson, who was then the Administrator of the FAA, realized that the way to get off the plateau was not more regulations or a bigger stick for the regulator, but figuring out a better way to improve safety in a complex aviation system.

The voluntary, collaborative CAST process brings all of the players – airlines, manufacturers, pilots, air traffic controllers, and the regulator – to the table to do four things: identify the potential safety issues; prioritize those issues, because more issues are identified than there are resources to address; develop interventions for the prioritized issues; and evaluate whether the interventions are working.

This process has been an amazing success.  In less than 10 years, it reduced the aviation fatality rate by more than 80% from the plateau on which it had been stuck.  This occurred despite the fact that the plateau was already considered to be exemplary, and many thought that the rate could not decline much further.  The process improved not only safety but also productivity, which flew in the face of conventional wisdom that improving safety generally decreases productivity.  In addition, a major challenge of making improvements in complex systems is the possibility of unintended consequences; yet this process generated very few unintended consequences.  Last but not least, the success occurred largely without generating new regulations.

The moral of this collaboration success story is that everyone who is involved in a problem should be involved in developing the solution.

How is this relevant to automation?  Collaboration means that the automation designers should bring in pilots during the design phase, long before the first human test pilot leaves the ground in a new airplane.

Collaboration means that the regulator should meet with pilots and controllers to obtain operational feedback. In turn, that feedback can be used by airlines, flight schools, and manufacturers to improve training; by manufacturers to improve design; and by air traffic control to improve operations as appropriate.

By embracing this collaborative process, pilots can help the industry anticipate and correct problems in the critical area of interaction between themselves and their automated systems.  The ideal human-automation interface allows the pilot to play his or her role as the most adaptable part of the system, while minimizing the pilot’s impact as its most unreliable part.

Will airliners ever be completely automated? Fully automated vehicles already exist in some modes, such as airport trams and some subway systems, and we’ve seen several news stories recently about increasing automation in cars, but accidents such as the landing in the Hudson are the reason that complete automation in airliners is very unlikely any time soon.

Meanwhile, if the industry hopes to continue improving safety, it must continue to enhance its understanding of the human-automation interface, through ever-better collaboration.