Rapid technological development has made self-driving cars a reality. This advancement raises questions about how these cars should make ethical decisions in place of human drivers. While technology will undoubtedly supersede humans in actual driving ability, driving a car also involves moral decisions. These choices would have to be programmed in advance: for instance, whether to collide with another vehicle or swerve towards pedestrians in a crash. Such a circumstance would likely be exceedingly rare, but this and similar situations still need to be considered as we move towards a future without a human in the driving seat, both literally and metaphorically.
A study conducted by the Massachusetts Institute of Technology (MIT) was one of the first large-scale investigations of public attitudes towards ethical quandaries posed by self-driving cars. In the “Moral Machine” experiment, researchers created a game based on the famous “trolley problem.” Participants had to choose the most favourable result from a series of rather gloomy outcomes in a hypothetical car crash. The announcement of the experiment was a viral hit and led to millions of people from over 200 countries contributing. This made the experiment one of the largest ever studies conducted on moral preferences in populations.
The “Moral Machine” experiment investigated nine different criteria, including whether a self-driving car should prioritise passengers over pedestrians, people over pets, and youth over the elderly. Results were gathered by asking users questions such as: should the car continue forwards and hit a child, or swerve and hit an old lady? Here, people had to consider the ages of the potential victims along with the moral implications of changing the car’s trajectory.
After four years, key results of the study were published in the journal Nature, focusing on differences in moral views between countries. Countries in close geographical proximity and those with a similar culture and economy were likely to have closely aligned views. Indeed, three dominant clusters of “moral alignment” were seen: the West, the East, and the South.
One prominent example of this geographical divide was the spread of countries more likely to favour saving the young over the old. France was the country most skewed towards sparing youth, with many European countries such as Sweden and Germany, as well as the USA and Canada, sitting above the average global preference for sparing younger lives. On the other hand, Japan, China, and Taiwan had a greater preference for saving older lives. A similar split was seen in different countries’ propensities towards saving the maximum number of people: more “individualistic” cultures (typically seen in Western countries) were more likely to prefer saving as many lives as possible.
Interestingly, when looking at people’s preference for saving pedestrians over passengers whilst ignoring other factors such as age, these clusters seemed to break down. Japanese and Greek people had on average a much stronger tendency to avoid hurting pedestrians at the cost of the car’s passengers than the Chinese and French participants did.
These findings could have major implications for manufacturers of self-driving cars. The research also suggested that people in less economically developed countries were more tolerant of pedestrians crossing improperly (“jaywalking”). Economic differences such as these were perhaps the most notable, with participants in countries with higher levels of economic inequality showing greater gaps in their treatment of individuals based on socioeconomic status. This led Edmond Awad, a researcher involved in the study, to point out that policy should not necessarily just reflect public opinion: “It seems concerning that people found it okay to a significant degree to spare higher status over lower status. It’s important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’”
Critics of the study have pointed out that the posed hypothetical situations often seemed contrived. Having a binary choice to make in a car crash, as was the case in the experiment, is exceedingly unlikely, and indeed the number of crashes happening at all would be expected to drop with increased use of self-driving cars. Still, regardless of the direct usefulness of the data, it seems clear that the “Moral Machine” experiment has kick-started a dialogue surrounding ethics that is necessary not only with regard to self-driving cars but within the field of artificial intelligence in general. When technology meets philosophy and public policy, answers are unlikely to be obvious even though our lives and consciences will depend on them.
This article was first published in our Michaelmas Term 2019 Issue: Perspective
Artwork by Emma Brass