Autonomous AI in War: Trial by Ordeal

The more complicated a system becomes, the more difficult it is to analyze all of its actions.

A key problem of total AI autonomy is testability. Will the AI perform well in all possible contingencies? To answer the question, the AI must be examined and tested. But that is more complicated than it sounds at first.

This is not a problem with simple autonomous defense systems. Consider the land mine. The typical land mine is buried in the dirt. It explodes when a simple pressure sensor is activated. It can be described as “autonomous.” There is no human in the loop to decide whether or not it explodes. The effects are devastating but the action is well known to those who planted it. Little testing is needed because almost all contingencies of its operation are understood. Not so with more complex autonomous systems. The more complicated a system becomes, the more difficult it is to analyze all of its actions.

Autonomous (self-driving) cars can be tested over long distances, in established environments, on regular schedules, in a peaceful theater of operation, with humans on board who can intervene if needed. Except in the unlikely event of terrorism, no one is trying to game the system and make the car crash.

Similar testing is needed for autonomous military weapons. But the situation is more difficult. There are plenty of roads on which to test and tune the self-driving cars, but there are not a lot of wars available in which to test and tune autonomous AI weapons. War games and simulations must suffice. Constructing contingencies is the job of military tacticians. If we seek military superiority to deter aggression, imaginative and creative minds are needed to assess all possibilities.

It would be nice if a super AI could analyze its own shortcomings to the degree needed, but computer programs famously lack the ability to analyze themselves. Testing remains the responsibility of humans.

So here is the war game: An intelligent enemy will attempt to anticipate what your AI can do and disable it. Throughout history, the enemy has always tried to anticipate a contingency you have not yet considered and use it to make your technology ineffective. Once self-driving car software is established, the design is basically done and no one is trying to kill you. Not so for military applications.

Once you discover enemy measures to disrupt your AI, you begin to develop countermeasures to make the AI effective again. The back-and-forth will continue if unchecked.1 In the Cold War (1947-1991) between the U.S. and the Soviet Union, the measure/countermeasure game was called the Arms Race. Before it ended, as some readers will remember, it featured missiles, antimissiles, antimissile missiles, and even anti-antimissile missiles. The high cost ultimately won the Cold War for the wealthier United States. The totalitarian Soviet Union collapsed into less powerful, independent states in the aftermath.

Banning Technology and Unspilling Milk

Unchecked, an AI arms race could see the development of chilling weapons. As with thermonuclear bombs, there could be no defense against them. This prospect of mutually assured destruction (MAD) has actually prevented the use of such weapons. In the Cold War, the Soviet Union knew that a nuclear strike against America would result in the total destruction of most major cities in the Soviet Union. This reciprocal fear motivated treaties and negotiations that minimized the threat. It’s an uneasy fact, but the horror of nuclear weapons has put a stop to all-out massive land wars like WWI and WWII. After the atomic bomb ended WWII, wars involving the superpowers have been fought with pulled punches. The development of advanced AI weapons might impose a similar check. AI could join the nuclear weapon in the grim gallery of potential horrors that are just too dangerous to use.

So should the development of killer AI be stopped? Those who think so have their heads in the sand. The world may ban poison gas, but there will always be those who don’t play by the rules, as happened last year in Syria (“more than 40 people were killed on 7 April in a suspected chemical attack on Douma,” BBC). Nuclear weapons can be banned, but egomaniacs in North Korea and Iran will still try to develop them and threaten to use them on America.

While some people today believe that human beings are inherently good and just need more education, the Judaeo-Christian faith teaches that humans are fallen and have the inherent capacity to be monstrously evil. History has shown again and again that the traditional view is correct. Pollyannas who propose a ban on the development of autonomous killing weapons are looking at their toes rather than at the landscape of reality. Why does totalitarian North Korea engage only in an endless war of words and never nuke America? Because, in the ensuing chain of events, its own country could end up flat and glowing in the dark.

A similar fear of future flatness will apply to AI weapons. Developers should focus on their use as countermeasures to AI attacks from the evil jerks in the world. Hopefully, policymakers and international politicians will work to prevent or limit deployment. Given that no atomic bomb has been used in war since WWII, this strategy of developing AI weapons, both offensive and defensive, with deterrence as the goal, seems to work as well as any alternative on offer.

1 Military Technology and AI: Past, Present, and Future with Daniel M. Ogden, J.D. (podcast)

Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence and holds the position of Distinguished Professor of Electrical and Computer Engineering at Baylor University.

Also by Robert Marks:

Why we can’t just ban killer robots Should we develop them for military use? The answer isn’t pretty. It is yes. Autonomous AI weapons are potentially within the reach of terrorists, madmen, and hostile regimes like Iran and North Korea. As with nuclear warheads, we need autonomous AI to counteract possible enemy deployment while avoiding its use ourselves.

AI ethics and the value of human life Unanticipated consequences will always be a problem for totally autonomous AI.

Killing People and Breaking Things Modern history suggests that military superiority driven by technology can be a key factor in deterring aggression and preventing mass fatalities.

Top Ten AI hypes of 2018
