John McDermid believes that self-driving cars will eventually outperform humans at simple driving tasks. But the University of York professor also thinks that the notion of “moral” self-driving cars (cars that can make ethically correct decisions in a crisis) is off-base, certainly for the foreseeable future. To make moral decisions, a car would need to think like a human being (“general artificial intelligence”), and we are surely a long way from that.
But he notes other problems as well. For one, there is no general agreement worldwide as to the correct moral position in difficult cases. When Harvard and MIT researchers gathered some 40 million decisions from people in over 200 countries about whom the car should be allowed to kill when a choice must be made, significant cultural differences emerged:
The results revealed three cultural clusters where there were significant differences in what ethical preferences people had. For example, in the Southern cluster (which included most of Latin America and some former French colonies), there was a strong preference for sparing women over men. The Eastern cluster (which included many Islamic countries as well as China, Japan and Korea) had a lower preference for sparing younger people over older people. John McDermid, “Self driving cars can never be moral” at Fast Company
If the people who program the cars can’t agree on such values, how will the cars be programmed? In any event, the scenarios offered in the study are typical “ethics seminar” stuff that bears little resemblance to the deadly split second of a real-life crash, for example, “who the car should kill if its brakes failed: its three passengers (an adult man, an adult woman, and a boy) or three elderly pedestrians (two men and one woman).”
Any discussion of the morality of the self-driving car should touch on the fact that the industry as a whole thrives on hype that skirts honesty. As Jonathan Bartlett likes to say, “Guess what? You already own a self-driving car!” He means that, given the very loose criteria often used, many average cars we drive today would qualify. But if we apply meaningful criteria instead, self-driving vehicles are just around the corner… on the other side of a vast chasm.
Another question Bartlett raises is, who assumes moral responsibility for mishaps involving self-driving cars? The cars raise the same problem as other types of machine learning: the machine isn’t responsible, so who is? That gets tricky. Companies may say one thing about their smart new product in the showroom and another in the law courts after a mishap.
The European Parliament has proposed making robotic devices legal persons in order to hold them legally responsible. But industry experts have denounced the move as unlikely to address real-world problems. McDermid thinks we should forget trying to make cars moral and focus on safety instead: “Currently, the biggest ethical challenge that self-driving car designers face is determining when there’s enough evidence of safe behavior from simulations and controlled on-road testing to introduce self-driving cars to the road.”
See also: Self-driving cars hit an unnoticed pothole “Not having to intervene at all”? One is reminded of the fellow in C. S. Lewis’s anecdote who, when he heard that a more modern stove would cut his fuel bill in half, went out and bought two of them. He reckoned that he would then have no fuel bills at all. Alas, something in nature likes to approach zero without really arriving…
AI Winter is coming Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding. (Brendan Dixon)