
Can a Robot be Arrested and Prosecuted?

An Uber driver is held liable if he runs over someone. But what if a driverless taxi runs over someone?

The title, “Can a Robot Be Arrested? Hold a Patent? Pay Income Taxes?”, is bound to attract clicks and attention. Posted on the IEEE Spectrum site, a podcast transcript by that name presents Steven Cherry’s interview with Ryan Abbott about artificial intelligence and the law. Abbott, a physician, lawyer, and professor, wrote the aptly titled book, The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press in 2020.

To the point: Can a robot be arrested? Technically, an arrest occurs when a person is forcibly but lawfully detained. Of course, one can forcibly detain a robot – we’ve seen that done in many science fiction movies. Abbott was talking specifically about how criminal law should apply to actions taken by artificial intelligence (AI) powered machines and systems. The better question is: Does it make sense to apply criminal laws to the actions of AI systems?

Robot Drivers

Abbott presents a simple situation to consider the criminal liability of AI systems: the driverless taxi. First, he suggests: “If an Uber runs me over, that may be a [civil case], but it’s a criminal law [case] if the person driving the Uber was trying to run me over.” The law treats the situation as civil “negligence” when the harm results from the driver’s failing to act reasonably and take reasonable care. The same situation is deemed “criminal” – a serious felony – when the driver either acted recklessly or actually intended to cause that kind of harm. 

What if a driverless taxi runs over a human? We might simply assume the AI system either malfunctioned or wasn’t programmed adequately. That assumption doesn’t cover all possibilities, however. It is possible to program a pilotless drone missile to target and destroy an occupied building. Indeed, it is standard practice to program unmanned military devices to kill people and destroy things.

Therefore, a taxi could conceivably be programmed to run down people on occasion. Or a driverless taxi could be directed, by electronic messages from humans, to run down people or crash into targets. We must think about the possibilities of criminal robots.

Blame Follows Mental State

Abbott points out that criminal law cares not only about what was done, but why it was done. Most crimes contain a mental state element. If you hit a person with a bat, the question is: what was your mental state? 

  • Did you know what you were doing? (Knowledge)
  • Did you intend your action? (General intent)
  • Did you intend to cause the harm with your action? (Specific intent)
  • Did you know what you were doing, intend to do it, know that it might hurt someone, but not care a bit about the harm your action causes? (Recklessness)

Criminal law looks to identify the actor, decide whether the actor is blameworthy, punish the wrongful actor, and deter that actor and anyone else from committing the same wrong. When a driverless AI car hits a person, for example, who is the wrongful actor? 

Abbott points out in The Reasonable Robot that there may be no identifiable person(s) who can be directly blamed for AI-caused harm. Potentially hundreds of people worked on aspects of the computer hardware and software in the AI vehicle, with many others involved in maintenance and repair after the vehicle begins its service. An analogy to corporate criminal liability arises: prosecuting a corporation for illegally dumping toxic waste makes sense when no one person can be charged individually for the crime.

Not emphasized in Abbott’s thinking is a category difference between a civil harm and a criminal act. Traditionally in secular governmental systems, a crime is misconduct committed against society as a whole. Murder is basically the act of intentionally killing another human being without justification. That act is treated as an attack on the public. A person willing to kill a fellow human without justification presents a constant mortal danger to everyone. Acts of robbery, theft, rape, kidnapping, arson, assault, battery, and more, are also felony crimes because their seriousness makes them intolerable in society. Prosecuting crimes aims not to recover money damages for victims – it aims to protect society by stopping the criminal, teaching him or her a lesson, and deterring others from committing the same crime. 

So, the question must always be asked when a robot or AI system physically harms a person or property, or steals money or identity, or commits some other intolerable act: Was that act done intentionally? Programmers sometimes refer to software “bugs” as “undocumented features.” All smirks aside, software code designed to steal, cause damage, or even kill can lurk in AI systems because a programmer placed that undocumented feature there. Society cannot unquestioningly accept “the computer malfunctioned” whenever an AI system causes harm.  


Forensic Investigation and Affixing Responsibility

Abbott argues for a Principle of Legal Neutrality. Under that principle, behavior by an AI system is evaluated in roughly the same way as the same behavior committed by a human. That means treating a “crime” committed by an AI system the same as a crime committed by a human. Any human death resulting from AI conduct needs investigating to see if it was truly an “accident,” or whether it resulted from intentional or criminally reckless conduct.

Easy to say, not easy to do. Forensic analysis of complex software, especially when it ties into a network of other AI systems and data sources, is painfully difficult. The recently reported challenges of trying to detect computer-aided vote fraud in the 2020 election make this reality clear. Software can act improperly because of its internal design or because of external data entering the system. Sometimes intermittent, non-reproducible hardware malfunctions scramble software operation. And if external data came in through a “back door” into the AI system, investigators may never find who sent the message that caused the AI system to malfunction.

In The Reasonable Robot, Abbott points out that criminals can commit serious crimes and never be identified, let alone prosecuted and punished:

“[T]here may be times where it is not possible to reduce AI crime to an individual due to AI autonomy, complexity, or limited explainability. Such a case could involve several individuals contributing to the development of an AI over a long period of time, such as with open-source software, where thousands of people can collaborate informally to create an AI.”

If cybercriminals cannot be identified or effectively stopped and punished by the legal system, however, then AI systems can be devised to blunt cybercrime or even counterattack the attackers. Self-defense is the fastest response to street crime and home invasions; likewise, cybercrime defense at the local computer-system level would be the quickest “first responder.”

Whom to Penalize and How?

Criminal penalties imposed on guilty persons typically include monetary fines or loss of liberty (prison or detention). We can’t fine or imprison an offending driverless car or a defrauding scam bot. Killing a machine might have little effect on the humans behind those machines – or it might cost a person or corporation millions of dollars. Either way, society urgently needs to prevent or deter the crimes, or penalize the people who commit them. What can work against AI crimes?

The Reasonable Robot devotes several pages to advocating the “punishment of AI” but ultimately describes no practical way to do that. More effective penalty approaches would hold liable anyone who contributed in any substantial way to the AI crime. If the designers and implementers of the software can be found, they would be defendants. The people who hosted the AI system on their computers and servers would also be defendants.

Because proving that these people had “intent” for the AI system to commit the crime would be difficult or impossible, the standard for liability could be reduced to “negligence.” There are already felony crimes for which showing mere negligence, without intent, is enough for a conviction; negligent homicide is one. Prosecuting and punishing the people involved with an AI system whose conduct is so harmful that it amounts to a violent felony or a serious financial or property crime would create huge incentives to have nothing to do with potentially criminal AI systems.

Many writers weigh in on the dangers of AI gaining enormous power and wreaking havoc with humanity. That risk is uncertain, but there is no reason why an AI system’s killing of a human being or destroying people’s livelihoods should be blithely chalked up to “computer malfunction.” Developing a legal culture that broadly holds people accountable for AI system crimes – now, while AI is still containable – can help guard against even a hypothetical future AI apocalypse.


Richard Stevens

Fellow, Walter Bradley Center on Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
