A Slate post by Madeleine Clare Elish explores the role that human drivers and pilots play in semi-automated vehicles: vehicles with features that take over some, but not all, of the driving tasks.
Familiar examples are cruise control, now standard in cars, and the autopilot functions of modern airliners.
In airliners, autopilots do most of the flying, leaving the human pilots to spend most of their time monitoring the system and waiting for something to happen. In reality, Elish argues, their main function is to take the blame in case something bad does occur:
A modern aircraft spends most of its time in the air under the control of a set of technologies, including an autopilot, GPS, and flight management system, that govern almost everything it does. Yet even while the plane is being controlled by software, the pilots in the cockpit are legally responsible for its operation. U.S. Federal Aviation Administration regulations specify this directly, and courts have consistently upheld it. So when something goes wrong, pilots become “moral crumple zones”—largely totemic humans whose central role becomes soaking up fault, even if they had only partial control of the system. Just as the crumple zone in a car is designed to absorb the force of impact in a crash, the human in a highly complex and automated system may become simply a component—accidentally or intentionally—that bears the brunt of the moral and legal responsibilities when the overall system malfunctions.
It is an interesting analogy!
The situation seems rather unfair because, as Elish explains, people are not well suited to the role of merely monitoring semi-automated systems. They are not particularly good at making snap judgments in an emergency when the automation fails, and they are apt to become distracted (or fall asleep!) while the automation is working fine, leaving them in no position to take over when the automation suddenly hands control back to them.
Moreover, as automation increases in scope, its human overseers gain less experience: the more a plane flies itself, the less practice its pilots get at flying it. This automation paradox can be addressed through training, e.g., in simulators, which may mitigate the problem for professional pilots but is unlikely to be practical for the drivers of highly automated cars.
The point, Elish concludes, is that drivers are poised to take an unfair share of the blame for the negative consequences of partial automation. This outcome should be avoided, yet it is reinforced in many current approaches to partially self-driving cars. Tesla, for example, insists that drivers must always be prepared to take control of the car in the event that its Autopilot makes a mistake, and the Adrian Flux insurance policy for such cars does not cover drivers who fail to observe this condition.
In effect, this approach uses drivers to deflect the costs that arise as automated functions are introduced to cars and other vehicles. Developers like Tesla might argue that the approach is acceptable because it hastens the arrival of features that will make the roads safer overall as the technology improves: a few crashes today will prevent many more tomorrow.
Not every developer takes this view. In its self-driving car project, Google appears to have given up on handing off control to unprepared drivers in emergencies: its latest model has no pedals or steering wheel. It has, in effect, removed that invisible moral crumple zone. That would seem to resolve the issue of fairness raised by Elish, though it still leaves open the question of how to apportion blame among the other parties in the event of a crash.