In case no one knew this, humans are cruel, greedy, and deceptive. We even take advantage of self-driving cars. Our crimes are revealed in a recent study that scolds humans as “unwilling to cooperate and compromise with machines. They even exploit them.”
When you’ve stopped laughing, you might be interested to learn of some intriguing findings from studies of human behavior around self-driving cars (autonomous vehicles) and Prisoner’s Dilemma games. One team of researchers, in a test involving nine experiments and 2,000 participants, tried to determine whether humans would behave as co-operatively with AI systems as we do with fellow humans:
The study, which is published in the journal iScience, found that, upon first encounter, people have the same level of trust toward AI as toward humans: most expect to meet someone who is ready to cooperate.
The difference comes afterwards. People are much less ready to reciprocate with AI, and instead exploit its benevolence to their own benefit. Going back to the traffic example, a human driver would give way to another human but not to a self-driving car.
The study identifies this unwillingness to compromise with machines as a new challenge to the future of human-AI interactions. – Ludwig-Maximilians-Universität München, “Humans are ready to take advantage of benevolent AI” at ScienceDaily. The paper is open access.
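The dynamic the researchers describe is the classic one-shot Prisoner’s Dilemma: if you are certain the other party will cooperate, defecting yields the highest payoff. A minimal sketch, using illustrative payoff values that are assumptions for this example and not the study’s actual stakes:

```python
# One-shot Prisoner's Dilemma payoff table (values are illustrative
# assumptions, not taken from the iScience study).
# Each entry: (my_payoff, other_payoff) for (my_move, other_move).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I am exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit the other player
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def best_reply(other_move):
    """Return my payoff-maximizing move given the other player's move."""
    return max(("cooperate", "defect"),
               key=lambda my: PAYOFFS[(my, other_move)][0])

# If a human is confident the AI is programmed to cooperate,
# pure payoff maximization says: defect.
print(best_reply("cooperate"))  # -> defect
```

The point of the sketch is that exploiting a guaranteed cooperator is not irrational in payoff terms; what normally restrains it with human partners, the study suggests, is reciprocity and guilt, which people apparently do not extend to machines.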
The researchers have a theory:
“Cooperation is sustained by a mutual bet: I trust you will be kind to me, and you trust I will be kind to you. The biggest worry in our field is that people will not trust machines. But we show that they do!” notes Prof. Bahador Bahrami, a social neuroscientist at the LMU and one of the senior researchers in the study. “They are fine with letting the machine down, though, and that is the big difference. People even do not report much guilt when they do,” he adds. – Ludwig-Maximilians-Universität München, “Humans are ready to take advantage of benevolent AI” at ScienceDaily
And they worry:
If people think that AI is programmed to be benevolent towards them, they will be less tempted to co-operate. Some of the accidents involving self-driving cars may already show real-life examples: drivers recognize an autonomous vehicle on the road, and expect it to give way. The self-driving vehicle, meanwhile, expects the normal compromises between drivers to hold. – Ludwig-Maximilians-Universität München, “Humans are ready to take advantage of benevolent AI” at ScienceDaily
Just a minute. The self-driving vehicle doesn’t “expect” anything. It is a vast chain of algorithms, let loose in an environment of conscious humans who are constantly generating new approaches to whatever is happening, mindful of their fellows.
So, of course, the self-driving vehicle gets shoved aside. Why should we care more about it than about a giant can opener? To get any response, researchers would need to let other drivers know that there are people on board the self-driving vehicle. (Whether it’s wise to be on board or not is another matter.)
We are told that the study’s senior author, Professor Ophelia Deroy, “also works with Norway’s Peace Research Institute Oslo on the ethical implications of integrating autonomous robot soldiers along with human soldiers.”
Well, is it realistic to expect the human soldiers to care much what happens to the robot soldiers either, apart from safety and recycling issues afterward?
The researchers warn, “For society as a whole, it could have much bigger repercussions. If no one lets autonomous cars join the traffic, they will create their own traffic jams on the side, and not make transport easier.”
One solution, then, is to use fewer autonomous cars. Almost any solution to traffic problems will be easier than changing human nature.
You may also wish to read:
Artificial intelligence slams on the brakes. The problem of autonomous cars suddenly slamming the brakes is becoming well known and it has no known fix. The government–industry consensus produces imperfect systems that may endanger the public, which usually has little input into the policies. (Richard W. Stevens)