
Can AI make us better human beings?

Helping us believe that is a promising new business area for some

Recently, we’ve been hearing claims that robots are learning to care about things and promises that “Technology will make emotional intelligence your superpower.”

Some are investing in the idea. Cogito offers “real-time conversational guidance” via an AI platform that helps call center employees sound sympathetic by providing constant feedback about emotional signals. Well, as comic actor George Burns once said, “To be a fine actor, when you’re playing a role you’ve got to be honest. And if you can fake that, you’ve got it made.” There is also a virtual team-building coach called Coachbot.

And it gets personal too. One company, Muse, promises to automate your meditation via a brain-sensor headband: “When I switch it on, a calm Canadian voice tells me: ‘Muse is now listening to your brain signals.’” (The Guardian, April 1, 2019). If the wearable Canadian doesn’t completely relax you, perhaps the information that the company can save your brain data for further use will do the trick.

But what about the underlying claim that constant AI monitoring can help us become more compassionate or mentally healthy? A number of questions are worth considering:

● To the extent that compassion is a moral choice (that is, not just manipulation), a key issue is that there is no universal moral machine. As Brendan Dixon found, the “Moral Machine” project, aimed at producing righteous self-driving cars, revealed stark differences in global values. The 2.3 million participants in Moral Machine worldwide were all trying to be ethical; they simply differed in their specific choices as to whom to kill and whom to spare. Automated compassion will likely meet the same fate.

● Silicon Valley might not have as much to teach the rest of us about peace and compassion as it hopes. Recently, a high-tech firm had to abandon a peaceful code of conduct (the Rule of St. Benedict, used by monks for many centuries) because it sparked rage.

● Then there’s the ever-fresh wave of privacy scandals around AI. Never mind that your phone knows everything now and is selling your secrets. It has also come to light recently that Facebook has been paying contract workers in India and elsewhere to comb through private posts going back to 2014, to see how trends have changed and to develop new business areas:

The Wipro workers said they gain a window into lives as they view a vacation photo or a post memorializing a deceased family member. Facebook acknowledged that some posts, including screenshots and those with comments, may include user names. Munsif Vengattil, Paresh Dave, “Facebook ‘labels’ posts by hand, posing privacy questions” at Reuters

You thought all that grief was private, did you?

Part of the problem, as George Gilder puts it in Life after Google, is that when we are not paying for the service, it’s because we are the product.

● Lastly, we haven’t even gotten into issues like killer robots and slaughterbots. Do we trust our fearless high-tech leaders to make the decisions about deliberate destruction?

A law professor who co-directs the Berkman Klein Center for Internet & Society at Harvard warns against letting the AI industry make ethical decisions for us, urging us to “fight back”:

Inside an algorithmic black box, societal biases are rendered invisible and unaccountable. When designed for profit-making alone, algorithms necessarily diverge from the public interest — information asymmetries, bargaining power and externalities pervade these markets. For example, Facebook and YouTube profit from people staying on their sites and by offering advertisers technology to deliver precisely targeted messages. That could turn out to be illegal or dangerous. The US Department of Housing and Urban Development has charged Facebook with enabling discrimination in housing adverts (correlates of race and religion could be used to affect who sees a listing). Yochai Benkler, “Don’t let industry write the rules for AI” at Nature

One alternative is that someone will develop AI software that keeps an eye on the AI software that keeps an eye on the people who make the decisions. Given the history, outsourcing the business of becoming a better human to AI does not sound promising.

See also: Can we program morality into a self-driving car?


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
