Autonomous vehicles carry a promise, so we read, of freedom from the cost burden of individual ownership and the traffic problems caused by so many vehicles on the road.

But autonomous vehicles sometimes fail, resulting in death and injury. Sadly, we are aware of a recent incident in which a pedestrian wheeling a bicycle was struck and killed by an autonomous vehicle being tested by Uber. Not long ago, a Tesla driver died when his car ran under a turning semi-trailer.

Our human reaction to these events is to blame the technology that is supposed to be in control, and to declare autonomous vehicles an impossible dream. Authorities and those responsible for the vehicle investigate, but we remain unconvinced: autonomous vehicles are dangerous, even when the cause is proven to be human error, as was unequivocally the case with the Tesla crash!

Human-controlled vehicles are involved in a vast number of events that cause injury and loss of life too. We tend to accept these events more readily because we know that humans are imperfect and make mistakes. We can forgive human error, but not machine error, and certainly not a machine's failure to override and compensate for the human element!

Prior to autonomous vehicles, we made immense progress over decades in reducing the road toll through rigorous examination of crash events, using the knowledge gained to drive improvement in vehicle design, road construction and driver behaviour. But the feedback loops were slow, and improvement took years.

The aviation industry faced a similar problem in its early days. Early commercial aircraft were flimsy, unreliable, and difficult to control. They crashed with monotonous regularity. It became evident that aviation could fulfil its promise only if a robust approach to safety could greatly reduce, or perhaps eliminate, the likelihood of mishap.

Aircraft manufacturers, component suppliers, airlines and authorities cooperate in an ongoing, near-global system of rigorous assessment, not just of crashes, but of every event in which something goes wrong. Every incident is subject to a formal report, and lessons learned from the investigation are fed back into the system. Action is prompt and decisive. Aircraft fleets can be grounded. Manufacturers prescribe additional maintenance, modification and preventative measures. Authorities change rules and apply restrictions to the allowable use of particular designs. Pilots, crew, and ground personnel are retrained and retested.

For the most part, aviation has removed ego from its operating model. Failure analysis looks at all possible factors, and even the ubiquitous finding of pilot error is generally pursued to the deeper causes that trigger it, including cultural issues and excessive workload.

Transitioning to a world of autonomous vehicles requires rigour similar to, and perhaps exceeding, that of aviation. Knowledge gained through robust analysis of autonomous vehicle failures must not be regarded as the privileged property of the entities that developed the vehicle; it must be shared rapidly across all interested parties so that the likelihood of future failure can be inexorably reduced.

Google/Alphabet, Uber, General Motors, Volvo, Baidu, and all other entities developing autonomous vehicles must, in conjunction with national authorities, establish a global network that analyses all autonomous vehicle incidents and transparently reports, without bias, on the learnings from every event.

Only by doing this can they address key principles in the international standard for governance of technology use, which we at the Digital Leadership Institute regard as a powerful guide for our digital future. For example, the Performance Principle requires us to ensure that our use of technology delivers the required performance, and the required performance for autonomous vehicles is, unequivocally, zero deaths and zero injuries. The Conformance Principle requires not just conformance with formal rules, but also the development and amendment of rules to properly govern the emerging use of autonomous vehicles, including rules for accountability and liability.

Autonomous Expectations