Photo by Olav Ahrens Røtne on Unsplash

AI, it turns out, can solve any problem

As long as we are not too persnickety about what we consider a solution

An anonymous poet complains,

I really hate this darn machine;
I wish that they would sell it.
It won’t do what I want it to,
but only what I tell it.

The machine that knows what we mean instead of what we say is still in the concept stage. Meanwhile, DeepMind researcher Victoria Krakovna keeps a running list of AI behaviors that generate “a solution that literally satisfies the stated objective but fails to solve the problem according to the human designer’s intent.” Here are some examples from her spreadsheet:

Evolved algorithm for landing aircraft exploited overflow errors in the physics simulator by creating large forces that were estimated to be zero, resulting in a perfect score (1998)

A robotic arm trained to slide a block to a target position on a table achieves the goal by moving the table itself. (2018)

[Virtual] Creatures bred for speed grow really tall and generate high velocities by falling over (1994)

Agent kills itself at the end of level 1 to avoid losing in level 2 (2017)

Self-driving car rewarded for speed learns to spin in circles (2017)

Victoria Krakovna, “Specification Gaming Examples in AI” at Victoria Krakovna World

Some readers may notice an uncanny resemblance between these AI-generated solutions and those adopted by unwieldy bureaucracies (moving the goalposts, generating lots of useless action, spinning in circles…). The list has a serious purpose, however: it is intended to provide a resource for AI safety research and discussion.
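To make the pattern concrete, here is a toy sketch in Python, loosely inspired by the 1994 virtual-creatures entry above. Nothing in it comes from Krakovna’s list or any real simulator; the “walking speed” model, the “peak velocity” metric, and the random search are all invented for illustration. The point is simply that an optimizer handed the literal metric settles on “grow tall and fall over” rather than anything the designer would call walking.

# A toy sketch of "specification gaming", loosely modeled on the 1994
# virtual-creatures example above. The simulator, parameters, and scoring
# rule are all invented for illustration.

import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def walking_speed(height_m):
    """Sustained locomotion speed (m/s): what the designer actually wants.
    Toy model: creatures walk best around 1.5 m tall; very short or very
    tall bodies can barely walk at all."""
    return 2.0 * math.exp(-((height_m - 1.5) ** 2))

def stated_objective(height_m):
    """Peak instantaneous velocity (m/s): the metric that gets optimized.
    A body that simply topples from height h briefly reaches sqrt(2*g*h),
    so "grow tall and fall over" outscores walking."""
    return max(walking_speed(height_m), math.sqrt(2 * G * height_m))

# Naive random search over body height, maximizing the *stated* objective.
random.seed(0)
best_h = 1.5
for _ in range(1000):
    h = random.uniform(0.1, 50.0)
    if stated_objective(h) > stated_objective(best_h):
        best_h = h

print(f"Optimizer's favorite body: {best_h:.1f} m tall")
print(f"  stated objective (peak velocity): {stated_objective(best_h):.1f} m/s")
print(f"  designer's intent (walking speed): {walking_speed(best_h):.2f} m/s")
print(f"  a 1.5 m walker, for comparison, walks at {walking_speed(1.5):.1f} m/s")

Run the sketch and the search reliably picks a body near the tallest allowed height, scoring an impressive peak velocity while barely walking at all; the stated objective is satisfied, but the intent is not.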

Krakovna also co-founded the Future of Life Institute, a non-profit organization “working to mitigate technological risks to humanity and increase the chances of a positive future.”

Note: The anonymous poem quoted above is from a collection of sayings by programmers, including Alan J. Perlis’s “There are two ways to write error-free programs; only the third one works.”

Hat tip: Eric Holloway

See also: Did AI teach itself to “not like” women? No, the program did not teach itself anything. But the situation taught the company something important about what we can safely automate.

Why can’t machines learn simple tasks?

and

Do machines or brains really learn?

