
Asilomar AI Principles: Ethics to Guide a Top-Down Control Regime

Experts agree on a humanistic AI ethics program! Before we break out the champagne, let's ask some serious questions about their assumptions.

Get 1,200 artificial intelligence (AI) researchers and 2,500 other businesspeople and academics, such as Elon Musk, Stephen Hawking, Ray Kurzweil, and David Chalmers, to all endorse one document about AI ethics. Voila! You have the Asilomar AI Principles with serious sound bite power: Experts agree on a humanistic AI ethics program!

Do the Principles advance a worthy cause? To a certain extent, perhaps, in theory. Reading the text of the Asilomar Principles, however, you get a few vague ethical aspirations offered to guide a top-down control regime.

Survey the Principles’ 23 points and a few stand out as smooth, velvet-glove power-grabbers. They do it subtly, so, as the holographic Dr. Lanning advised in I, Robot (2004), “you have to ask the right questions.”

At least one useful thing a lawyer learns in 30 years of litigation: people can make a “rational” argument in favor of darn near anything. That isn’t to say advocates always lie, only that they focus on words, phrases, cultural assumptions, and pleasing rhetoric to advance a position. Litigators learn to anticipate how the opposition will interpret words in statutes and documents.

Looking at the Asilomar Principles through a lawyer’s eye, one sees a nascent trend toward rule by experts combined with centralized government power. Regardless of which side you take, recognizing what is actually going on is paramount. Let’s consider several key points from the Principles.

Point No. 1 states:

The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

Word choices matter. The word “undirected” is not defined. What is the difference between an undirected intelligence and, say, a directed intelligence? The first clause here explains nothing. Even the term “intelligence” is unmoored.

The second clause implies that “beneficial intelligence” is the opposite of “undirected intelligence.” Yet “beneficial” and “undirected” are not opposites in English. The semantic trick here nudges the reader to accept an inference: if the intelligence is directed, then it is beneficial. But of course, who will decide whether an intelligence is directed, who should direct it, and what exactly makes the intelligence “beneficial”? The answer: the educated experts somewhere in government and academia.

Point No. 3 states:

There should be constructive and healthy exchange between AI researchers and policy-makers.

This point says little and seems harmless enough, even obvious. Beneath the surface, however, it provides a tool to resist and dismiss criticism. If Expert A advocates one viewpoint and Expert B disagrees with it, Expert A can invoke Point No. 3 to call the opposing view “not constructive” and “not fostering a healthy exchange.” A challenge from outsider non-Expert C will be deemed even less “constructive.” Censorship and the canceling of opposition viewpoints follow from such judgments.

Point No. 8 states:

Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

Point 8 implicitly envisions an AI systems-Experts-Government Complex as the future of justice in society. An “autonomous system” is an AI computer that operates independently of humans, but it is designed and built by experts whom we will never know. The AI system interacts with its designers and operators, giving outputs to inform judicial decision-makers.

When the AI system finds you “guilty” of an alleged crime, its decision flows from factors and computations far beyond your knowledge and comprehension. Even your lawyer may scarcely grasp the technology.

Point 8 demands that such systems provide “a satisfactory explanation” for the legal judgment. Dig deep: what makes an explanation “satisfactory”? In Anglo-American jurisprudence, there is a tradition, sometimes a legal mandate, that human judges provide oral or written rationales for their decisions. Most such decisions are available to the public for understanding and scrutiny. In many forums, an initial decision can be appealed to a separate “higher” authority that examines its legal and/or factual bases for truth and accuracy. Public review and routes of appeal provide substantial assurance that an “explanation” is “satisfactory.”

Under Point 8, however, determining whether an AI system decision is “satisfactory” falls to “a competent human authority.” Who is “competent”? What is the standard of review? And will the AI system’s decision be “presumed correct,” the way federal courts defer to federal agency decisions?

These questions are not merely rhetorical. They need concrete answers before anyone celebrates the Asilomar Principles as guiding and advancing AI justice systems. On its face, Point No. 8 assumes the AI system is an expert designed by experts, with more experts later reviewing the results. That approach always means concentration of power in the hands of the “educated expert” class.

Point No. 10 states:

Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

Point No. 10 opens the barn door to ethical randomness and uncertainty using the sweetest language possible. Skip to the focus: “alignment with human values.” At least three problems arise immediately: (1) Which human values? (2) How are the values selected, and by whom? (3) Are there any values that transcend personal opinions, i.e., that can be considered objective and applicable at all times?

Certainly, these three questions need concrete answers before declaring victory over abusive AI. But Point 10 assumes that the designers of AI systems will have the answers, and that they can “assure” appropriate AI behavior. Ethical decisions and the power to implement them – all in the hands of AI designers, the anointed class.

Point No. 11 states: 

Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

Point 11 follows upon Point 10, stating some “ideals” the AI designers must foster. It sounds good, until one recalls the extensive set of rights “guaranteed” by the constitution of the Soviet Union, a place known for coercion, oppression, prison camps, and democide. If people want liberty, with rights that meaningfully limit centralized power, then they need to spell those rights out. The U.S. Bill of Rights, for example, declares rights expressly so that there is less ambiguity about the limits on central power. Nothing less than absolute limits to power can possibly work.

Point No. 16 states:

Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

Point No. 16 assures us that “human control” of AI systems will mean that AI will “accomplish human-chosen objectives.” This provision seems redundant, like saying humans will operate motor vehicles to reach human-chosen destinations. 

Reading the literature, however, reveals that Point No. 16 aims to prevent AI systems from deciding for themselves what to do and what goals to pursue. Confining AI system action to human directives seems to address the dystopia-phobic fear of AI machines enslaving humans and ruling the world.

Hold the champagne and put away the funny hats. It’s too soon to party. The Asilomar Principles envision experts designing and programming AI systems, experts choosing the human values to favor, and experts using AI outputs to make decisions about the direction of human affairs. The same AI systems-Experts-Government Complex that comprises the designers, lawgivers, and policymakers will be the very people who believe the results of AI systems’ calculations and projections.

The AI systems-Experts-Government circularity can scarcely be avoided. The anointed folks who believe in AI as superior to individual humans can scarcely argue against an AI system decision in principle. And if AI systems are presumed to gather and analyze vastly more data more quickly than any humans ever could, then who among the anointed will be positioned to “know better” than the AI? Paraphrasing Darth Vader, “The circle will then be complete.”


Richard Stevens

Fellow, Walter Bradley Center for Natural and Artificial Intelligence
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center for Natural and Artificial Intelligence. He has written extensively on how code and software systems evidence intelligent design in biological systems. He holds a J.D. with high honors from the University of San Diego Law School and a computer science degree from UC San Diego. Richard has practiced civil and administrative law litigation in California and Washington D.C., taught legal research and writing at George Washington University and George Mason University law schools, and now specializes in writing dispositive motion and appellate briefs. He has authored or co-authored four books, and has written numerous articles and spoken on subjects including legal writing, economics, the Bill of Rights, and Christian apologetics. His fifth book, Investigation Defense, is forthcoming.
