
Marks: We Can’t Do Without Autonomous Killer Robots in Combat

As an expert in swarm intelligence, he thinks drone swarms offer specific advantages

Over a year ago, in The Case for Killer Robots, Robert J. Marks argued for developing autonomous military weapons. As an expert in swarm intelligence, he thinks drone swarms should be given priority:

Two battling drone swarms can have numerous agents who, in order to be effective in combat, individually require reaction times in the milliseconds. Humans cannot react quickly enough for one, let alone hundreds, of interacting swarm agents. Autonomous operation can be appropriate.
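
To put rough numbers on that point, here is a back-of-envelope sketch in Python. The reaction-time, engagement-window, and swarm-size figures are illustrative assumptions, not figures from Marks's book or the Wired article:

```python
# Back-of-envelope sketch: why millisecond engagement windows rule out
# human-in-the-loop control of a swarm. All figures are illustrative
# assumptions, not measurements.

HUMAN_REACTION_S = 0.25      # assumed human visual reaction time (~250 ms)
ENGAGEMENT_WINDOW_S = 0.005  # assumed 5 ms window to counter a swarm agent
SWARM_SIZE = 100             # assumed agents one operator would have to track

shortfall_one = HUMAN_REACTION_S / ENGAGEMENT_WINDOW_S
shortfall_swarm = shortfall_one * SWARM_SIZE

print(f"A human is ~{shortfall_one:.0f}x too slow for a single agent,")
print(f"and ~{shortfall_swarm:.0f}x too slow for a {SWARM_SIZE}-agent swarm.")
```

Under these assumed numbers, a human operator is roughly fifty times too slow for even one agent's engagement window, and the shortfall multiplies with every additional agent.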

Marks asks us to picture two gunslingers in the Old West, facing each other on Main Street. The faster draw wins. The second-fastest draw is usually dead.

Military strategists call the cycle of responding to a threat the OODA loop: observe–orient–decide–act.

Swarm conflict, in Marks’s view, is like two teams of gunfighters facing each other. Each side is trying to shoot all the members of the other team. The team with the best OODA loop wins.
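
As a concrete illustration, the OODA loop can be thought of as a tight control cycle. Below is a minimal sketch in Python; the sensor and actuator interfaces and all function names are hypothetical placeholders, not any real system's API:

```python
import time

def observe(sensors):
    """Observe: gather raw readings from every sensor."""
    return [s.read() for s in sensors]

def orient(readings):
    """Orient: fuse raw readings into a picture of the situation."""
    return [r for r in readings if r.get("hostile")]

def decide(threats):
    """Decide: pick a target, here simply the nearest hostile contact."""
    return min(threats, key=lambda t: t["range"]) if threats else None

def act(target, actuator):
    """Act: carry out the decision."""
    if target is not None:
        actuator.engage(target)

def ooda_loop(sensors, actuator, cycle_s=0.005):
    """Run the cycle on a fixed time budget. The side with the shorter
    cycle_s completes more loops and, in Marks's analogy, outdraws the
    other; an assumed 5 ms budget is far below human reaction time."""
    while True:
        start = time.monotonic()
        act(decide(orient(observe(sensors))), actuator)
        # Sleep off whatever remains of this cycle's time budget.
        time.sleep(max(0.0, cycle_s - (time.monotonic() - start)))
```

The point of the sketch is the cycle budget: whichever side finishes observe–orient–decide–act first gets to act first.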

The U.S. military seems to agree, according to a recent article in Wired:

Last August, several dozen military drones and tanklike robots took to the skies and roads 40 miles south of Seattle. Their mission: Find terrorists suspected of hiding among several buildings.

So many robots were involved in the operation that no human operator could keep a close eye on all of them. So they were given instructions to find—and eliminate—enemy combatants when necessary.

Will Knight, “The Pentagon Inches Toward Letting AI Control Weapons” at Wired (May 10, 2021)

Four-star General John Murray, who leads the U.S. Army Futures Command, supports autonomous AI weapons. He asked an audience at the U.S. Military Academy a pointed question last month:

Murray asked: “Is it within a human’s ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added.

Will Knight, “The Pentagon Inches Toward Letting AI Control Weapons” at Wired (May 10, 2021)

Marks told Mind Matters News, “I was impressed when General Murray visited our lab at Baylor. He was not interested in the number of papers a professor published, but in what they did in terms of practical importance and reduction to practice.”

Murray has good reason for thinking ahead. In a demonstration last August, an AI decisively beat a fighter pilot in a simulated dogfight. If that’s where things are headed, he wouldn’t want to see a situation where only enemy forces have up-to-date AI.

Marks acknowledges the danger in autonomous AI weapons. But, he says, “There is danger in every weapon. The danger is not in the autonomy, but rather whether the weapon will do what it was designed to do and no more. This requires careful design, extensive testing and end user expertise. These are the hallmarks of any design ethics.”

When U.S. drone swarms are involved, he says, they had better be the faster draw.

Note: You can buy the book in various formats at Amazon, but it is free to download here.


You may also wish to read: After Thursday’s dogfight, it’s clear: DARPA gets AI right. In the dogfight Thursday between AI and a pilot, AI won. But what does that mean? By posing relevant questions, DARPA’s overall AI strategy accurately embraces both the capabilities and limitations of AI. (Robert J. Marks)

