Remarks on driverless cars, National Press Club luncheon
Christopher A. Hart
Washington, DC
6/30/2016

Thank you for inviting me to speak on behalf of the National Transportation Safety Board.  It is a privilege and an honor to be here.
Today I would like to speak about driverless cars, and how the NTSB can help the process of bringing them onto our streets and highways.  I don’t mean to suggest that we’re looking for work – our plate is already very full – but I am suggesting that we could be a valuable resource.  To put my remarks in context, I would like to describe what the NTSB does.
The NTSB is an independent federal agency that investigates accidents in all modes of transportation to determine what caused them and to make recommendations to prevent them from happening again.  Our primary product is recommendations.  Our world-class investigators and analysts don’t like to give up until they have the answer, and the recommendations that they create are so compelling that the recipients respond favorably more than 80% of the time, even though they are not required to.  We like to think that the implementation of our recommendations has helped to make transportation safer for all of us.
So my remarks today come from the context of our experience as accident investigators.
Driverless cars are coming, and their potential for improvement is amazing.  First and foremost, driverless cars could save many, if not most, of the 32,000 lives that are lost every year on our streets and highways – a very tragic and unacceptable number that has been decreasing for several years but has recently taken a turn in the wrong direction.
Driverless cars could also increase the amount of traffic that our roads can safely carry because, instead of maintaining a car length separation for every 10 mph, as I’m sure we all do, driverless cars could reduce that separation.  Stay tuned for what other changes might be possible.
How might that happen?  Ideally, with automation. 
Most crashes on our roads are due to driver error.  The theory of driverless cars is that if there is no driver, there will be no driver error.  Ideally, removing the driver would address at least four issues on the NTSB’s Most Wanted List of Transportation Safety Improvements – fatigue; distractions; impairment; and fitness for duty.  The automation in driverless cars would presumably also address a fifth item on our list, namely, improved collision avoidance technologies. 
Decades of experience in a variety of contexts have demonstrated that automation can improve safety, reliability, productivity, and efficiency.  That experience has also demonstrated that there can be a downside.  As noted by Prof. James Reason, a world-renowned expert in complex human-centric systems:
In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid.
Our investigation experience provides three lessons learned that support Prof. Reason’s statement.  The first is that the theory of removing human error by removing the human assumes that the automation is working as designed, which raises the question: what if the automation fails?  Will it fail in a way that is safe?  If it cannot be guaranteed to fail safe, will the operator be aware of the failure in a timely manner, and will the operator then be able to take over to avoid a crash?
An example of the automation failing without the operator’s knowledge occurred here in Washington – you may remember the Metro crash near the Fort Totten Station in 2009 that tragically killed the train operator and 8 passengers.  In that accident, a train temporarily became electronically invisible, whereupon the symbol of the train disappeared from the display board in the dispatch center.  When a train became invisible on the board, an alarm sounded.  This alarm, however, sounded several hundred times a day, so it was largely ignored.
Unfortunately, when the train became electronically invisible, there was no alarm in the train behind it regarding the electronic disappearance of the preceding train.  Thus, the operator of the train behind was unaware of it.  Instead, based upon the electronically unoccupied track ahead, the automation in the train behind began accelerating to the maximum speed for the area.  By the time the operator saw the stopped train and applied the emergency brake after coming around a curve – which limited her sight distance – it was too late.
Another lesson learned in support of Prof. Reason’s statement is that even if the operator is removed from the loop, humans are still involved in designing, manufacturing, and maintaining the vehicles, as well as the streets and highways they use.  Each of these points of human engagement presents opportunities for human error.  Moreover, human error in these steps is likely to be more systemic in its effect – possibly involving several vehicles – and more difficult to find and correct.  An example of this lesson learned is the collision of an automated – driverless – people mover into a stopped people mover at Miami International Airport in 2008.  That collision was caused largely by improper maintenance.
The most fundamental lesson learned from our accident investigation experience in support of Prof. Reason’s statement is that introducing automation into complex human-centric systems can be very challenging.  Most of the systems we have investigated are becoming increasingly automated but are not fully automated.  As a result, we have seen that the challenges can be even greater in a system that is not completely automated and still has substantial human operator involvement.
Situations involving partial automation with substantial human operator involvement have demonstrated two extremes.  On one hand, the human can be the most unreliable part of the system.  On the other hand, if the system encounters unanticipated circumstances, a highly trained, proficient human operator can save the day by being the most adaptive part of the system.  An example of the human operator saving the day is Captain Sullenberger’s amazing landing in the Hudson River when his airplane suddenly became a glider because both of its engines were taken out by birds.  In stark contrast, a textbook example of the complexities of the human-automation interface, in which the human was the most vulnerable part of the system, is Air France Flight 447 from Rio de Janeiro to Paris in 2009.
After Air France 447 reached its cruise altitude of 37,000 feet at night over the Atlantic and began approaching distant thunderstorms, the captain left the cockpit for a scheduled rest break, giving control to two less experienced pilots.  The airplane had pitot tubes that project from the fuselage to provide information about how fast it was going.  Airspeed information is so important that there were three pitot tubes – for redundancy – and the pitot tubes were heated to ensure that they were not disabled by ice. At the ambient temperature of minus 50-60 degrees, and with abundant super-cooled water from the nearby thunderstorms, the pitot tube heaters were overwhelmed, and the pitot tubes became clogged with ice, so the airplane no longer knew how fast it was going.
The loss of airspeed information caused several systems to quit, including the automatic pilot that was flying the airplane and the automatic throttle that was maintaining the selected speed.  As a result, the pilots suddenly had to fly the airplane manually.  The loss of airspeed information also rendered inoperative the automatic protections that prevent the airplane from entering into an aerodynamic stall, in which the wings no longer produce lift.  The pilots responded inappropriately to the loss of these systems, and the result was a crash that was fatal to all 228 on board. 
As with most accidents that we investigate, several factors played a role.  To begin with, the redundancy of having three pitot tubes was not effective because all three were taken out by the same cause.  In addition, the pilots had not experienced this type of failure before, even in training, where the problem can be simulated in very realistic simulators, and they were unable to figure out what happened.  Finally, use of the automatic pilot is mandatory at cruise altitudes, so the pilots had not flown manually at that altitude before, even in training in the simulator.  This is important because the airplane behaves very differently at cruise than it does at low altitudes, such as during takeoff and landing.  Other operational and design issues compounded the problem and led to a tragic outcome. 
As an aside, the pitot tubes had frozen before in that type of airplane, but the pilots in those previous encounters responded successfully.  Consequently, the fleet, including the accident airplane, was scheduled for the installation of more robust heaters, but given the previously successful encounters, an immediate emergency replacement was not considered to be necessary. 
With that background on how automation can be both the good news and the bad news, let me turn to how the NTSB can help inform the process of moving toward driverless cars.  First, as I have just explained, we offer considerable experience regarding the introduction of automation into complex human-centric systems. 
Most of our investigations involve relatively structured systems with highly trained professional operators who have various requirements regarding proficiency, fatigue, impairment, distraction, and fitness for duty.  Given that human drivers will probably be in the loop for some time to come, I would suggest that as difficult as the transition to more automation has been in the structured and regulated environments we have investigated, it may be even more challenging in a public arena, in which drivers are usually not highly trained and may be fatigued, impaired, distracted, or not medically fit.  An open question is whether some human drivers will always be in the loop because they would rather not use the automation for various reasons, e.g., they don’t trust it or they simply enjoy driving.
The second way that the NTSB can help relates to collaboration.  The auto industry has already recognized the importance of collaboration, as most recently shown by their collaborative approach regarding autonomous emergency braking.   Our experience with collaboration, especially regarding commercial aviation, may help improve it further. 
The most recent fatal US commercial airliner crash occurred in 2009, and more than once in recent years, the commercial aviation industry has gone years in a row without a single passenger fatality.  Although automation has played an important role in the industry’s continuing safety improvement, much of the industry’s exemplary safety record is attributable to collaboration.  In the early 1990s, after declining rapidly for years, the industry’s accident rate began to flatten onto a plateau.  Meanwhile, the Federal Aviation Administration was predicting that the volume of flying would double in 15-20 years.
The industry became very concerned that if the volume doubled while the accident rate remained the same, the public would see twice as many airplane crashes on the news.  That caused the industry to do something that, to my knowledge, has never been done at an industry-wide level in any other industry – they pursued a voluntary collaborative industry-wide approach to improving safety.  This occurred largely because David Hinson, who was then the Administrator of the FAA, realized that the way to get off the plateau was not more regulations or a bigger stick for the regulator, but figuring out a better way to improve safety in a complex aviation system.
The voluntary collaborative process, known as CAST, the Commercial Aviation Safety Team, brings all of the players – airlines, manufacturers, pilots, air traffic controllers, and the regulator – to the table to do four things:  identify the potential safety issues; prioritize those issues – because they would be identifying more issues than they had resources to address; develop interventions for the prioritized issues; and evaluate whether the interventions are working.
This CAST process has been an amazing success.  It resulted in a reduction of the aviation fatality rate, from the plateau on which it was stuck, by more than 80% in less than 10 years.  This occurred despite the fact that the plateau was already considered to be exemplary, and many thought that the rate could not decline much further.  The process improved not only safety but also productivity, which flew in the face of the conventional wisdom that improving safety generally decreases productivity.  In addition, a major challenge of making improvements in complex systems is the possibility of unintended consequences; yet this process generated very few unintended consequences.  Last, but not least, the success occurred largely without generating new regulations.
As an observer in CAST, the NTSB can help the auto industry determine how much of this aviation industry success story is transferable to them.  One size may not fit all – for example, the airlines do not compete regarding safety but auto manufacturers do – but the 80% reduction in the fatality rate accomplished by CAST is a powerful example of how much can be accomplished relatively quickly through voluntary collaboration.  Another difference is that the aviation regulatory framework is largely federal, whereas collaboration regarding driverless cars would probably need to include participation by the states.
The third way that the NTSB can inform the process of introducing automation relates to on-board event recorders.  Our investigations are significantly enhanced when we have event recorders to tell us what happened.  Airliners have had “black boxes” for decades, to record both the aircraft parameters and the sounds in the cockpit.  Other transportation modes are increasingly introducing event recorders as well as audio and video recorders. 
Assuming that difficulties will be encountered as automation is being introduced, the more the industry knows from the event recorders about what went right and what went wrong, the more the industry will be able to fashion remedies that effectively address the problems.   Accordingly, consistent with another item on our Most Wanted List – Expand the Use of Recorders to Enhance Transportation Safety – we would encourage the use of robust on-board event recorders to help the process.
Event recorders in other modes of transportation have raised significant issues regarding both privacy and the appropriate uses of recorder data.  The NTSB’s sensitivity to those issues has already helped to inform the conversation in commercial trucking, and it can inform the process of improving passenger vehicle event recorders as well.
In closing, rather than waiting for accidents to happen with driverless cars, the NTSB has already engaged with the industry and regulatory agencies to help inform how driverless cars can be safely introduced into America’s transportation system.  Our experience in the introduction of automation into human-centric systems, our appreciation of the power of collaboration, and our understanding of the importance of on-board event recorders, all position the NTSB to provide valuable assistance to the process. 
Thanks again for inviting me to speak today.  I would be happy to take any questions.