In an op-ed at CNS this morning, Walter Bradley Center director Robert J. Marks summarizes his case, as an artificial intelligence expert, that the United States must remain competitive in military AI or, as it is called, “killer robots”:
There is loud opposition to development of killer robots. Chicken Little headlines scream “We’re running out of time to stop killer robot weapons,” and, in all caps, “KILLER ROBOTS WILL START SLAUGHTERING PEOPLE IF THEY’RE NOT BANNED SOON.” United Nations Secretary General António Guterres warns “machines that have … discretion to take human lives are politically unacceptable. [They] are morally repugnant and should be banned by international law.”

Robert J. Marks, “Iran Conflict Shows Why the US Needs Autonomous Lethal AI Weapons” at CNS
The difficulty is that the opposition mainly affects the decisions of the United States. It has no impact on non-democratic nations that permit no opposition and are developing such weapons now:
There are no international laws against killer robots yet. China is investing billions into development of killer robots. Besides, does anyone believe the Islamic Republic of Iran will feel bound by international law?
History shows that technical superiority shortens and wins wars. More importantly, it forestalls conflicts by giving pause to potential adversaries. No one wants to get into a fight they know they will lose.

Robert J. Marks, “Iran Conflict Shows Why the US Needs Autonomous Lethal AI Weapons” at CNS
For more resources and media discussions, see:
Book at a Glance: Robert J. Marks’s Killer Robots asks the question most commentators want to avoid: What if ambitious nations such as China and Iran develop lethal AI military technology but the United States does not? Many voices — 30 countries, more than 110 NGOs, 4,500 AI experts, the UN Secretary General, the EU, and 26 Nobel Laureates — have called for lethal AI weapons to be banned. Dr. Marks disagrees: Deterrence reduces violence, he argues.
Killer Robots on the Radio: The issues around AI in warfare seem fairly simple until we look at them more closely. Can we afford to let hostile powers develop AI warfare and not do so ourselves? Artificial intelligence expert Robert J. Marks has been discussing the issue in podcasts of varying lengths, if you want to listen in.