Mind Matters: Where Natural and Artificial Intelligence Meet

AI is indeed a threat to democracy

But not in quite the way historian Yuval Noah Harari thinks

Historian Yuval Noah Harari argues at The Atlantic that artificial intelligence may subvert democracy by concentrating power in a small elite:

The emergence of liberal democracies is associated with ideals of liberty and equality that may seem self-evident and irreversible. But these ideals are far more fragile than we believe. Their success in the 20th century depended on unique technological conditions that may prove ephemeral.

There is indeed an association between democracy and liberty, but the arrow of causation may travel in a different direction than Harari assumes. While democracy has coincided with political liberty in the West over the past century or two, it is indisputable that, historically, tyranny has also emerged from democratic institutions.

Vladimir Ilyich Lenin (1870-1924) was leader of the first communist state, the Soviet Union

Totalitarianism seems to need the soil of democracy to germinate. Bolshevism arose from the democratic Kerensky government, and Nazism arose from the democratic Weimar Republic. Mao’s tyranny emerged from Sun Yat-sen’s republican China. It seems unlikely that Lenin, Hitler, or Mao would have arisen had the Czar, the Kaiser, or the Emperor remained in power. Under traditional autocrats, the fate of cunning totalitarian demons was a rope, not a podium. The Czar, after all, hanged Lenin’s brother. But Kerensky simply folded before Lenin.

Thus Harari seems to misunderstand the relationship between democracy, liberty, and tyranny. Tyranny—at least, the totalitarian form of it—is not the opposite of democracy. Tyranny is the scion of democracy—democracy’s offspring, not its antithesis. The important question, yet to be satisfactorily answered, is this: Is tyranny democracy’s bastard or its true heir? To what extent is tyranny the inevitable spawn of democracy? Plato may have understood better: He proposed that democracy led quite naturally and inevitably to tyranny. We moderns seem blissfully unaware of the natural, and seemingly inevitable, fate of republics.

It is with Plato’s warning in mind that we should approach Harari’s main point: that technology in general, and artificial intelligence in particular, lets loose the hounds of tyranny. That may be not because technology subverts democracy but because technology empowers democracy.

Yuval Noah Harari

Technology, and particularly AI, makes democratic government immediate and pervasive. It invites the electorate—the mob—into your living room. Any genius can capture the thoughts of billions of people with mere keystrokes. But then any fool can also write an incendiary blog post that is available worldwide instantly. An aspiring totalitarian need not rent a stadium or evade the Kaiser’s police. He merely needs a Google account and some spare moments, and he can reach you wherever you live.

Technology endangers liberty because it advances liberty. Libertarian democracy is unstable. Technology leverages this instability because it leverages man’s intellect and will. Modern technology empowers man in ways that are utterly unprecedented. The Arab Spring, for example, was largely a consequence of the internet, which enables the instantaneous dissemination of ideology. It ultimately empowered Islamism, not humanism. But it empowered Islamism not despite democracy, but via democracy. The mob demanded change, and the imams served it up.

What is it about technology, and specifically about artificial intelligence, that potentiates tyranny? Harari understands some of it. He raises economic issues such as displaced workers and rising elites, which are genuine consequences of modern technology. But they are not the core issue. Harari glimpses the root of AI’s transformative power in the advent of self-driving cars:

Two particularly important nonhuman abilities that AI possesses are connectivity and updatability… For example, many drivers are unfamiliar with all the changing traffic regulations on the roads they drive, and they often violate them. In addition, since every driver is a singular entity, when two vehicles approach the same intersection, the drivers sometimes miscommunicate their intentions and collide. Self-driving cars, by contrast, will know all the traffic regulations and never disobey them on purpose, and they could all be connected to one another. When two such vehicles approach the same junction, they won’t really be two separate entities, but part of a single algorithm. The chances that they might miscommunicate and collide will therefore be far smaller.

Yuval Noah Harari, “Why Technology Favors Tyranny” at The Atlantic

Harari is right about the connectivity and updatability of AI. But we can also discern a more profound and insidious consequence in his example. Imagine two men walking in a crowd. They avoid colliding by deliberately watching and gauging each other’s movements—they make eye contact and assess body motion and the direction of steps. If the men are driving cars, they obey traffic laws and use turn signals to avoid a collision. If they are in self-driving cars, the algorithm avoids the collision—the men do nothing at all but sit in the car. At each stage of advancing technology, the men are less and less aware of each other and of the measures needed to avoid a collision. In the self-driving car, they are completely unaware of the anti-collision measures. They are blind to the algorithm.

Each technological advance makes collision less likely—the men are less likely to collide while driving cars than while walking in a crowd, and less likely still to collide in self-driving cars. But with each technological advance, the collision-avoidance mechanisms are less apparent. In fact, it is likely that no one person fully understands in detail all the factors entailed in avoiding collisions in self-driving cars. Collision avoidance is the combined work of the car’s engineers and manufacturers, the hardware designers and software developers of its processors, and its lasers, radar, cameras, and sonar. AI increases efficiency and effectiveness at the cost of obscurity.
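The point can be made concrete with a toy sketch. No real autonomous-driving system is remotely this simple—the names and the rule below are invented for illustration—but it captures the structural shift Harari describes: two vehicles at a junction are no longer two negotiating agents, but inputs to one shared function whose decision rule is invisible from inside either car.

```python
# Toy illustration (not any real system): two "vehicles" approaching
# one junction are ordered by a single shared algorithm. The occupants
# experience only the outcome; the rule that ordered them is hidden.

def arbitrate(vehicles):
    """Order vehicles through the junction by estimated arrival time,
    breaking ties by vehicle id. One algorithm, not two drivers."""
    return sorted(vehicles, key=lambda v: (v["eta"], v["id"]))

approaching = [
    {"id": "car-B", "eta": 4.1},  # seconds to the junction
    {"id": "car-A", "eta": 3.9},
]

for rank, v in enumerate(arbitrate(approaching), start=1):
    print(f"{rank}. {v['id']} proceeds (eta {v['eta']}s)")
```

Neither occupant sees `arbitrate`; each simply finds that his car yields or proceeds. That invisibility, scaled up to real systems of vastly greater complexity, is the obscurity at issue.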

It is the obscurity of AI that most impairs liberty. We do not know what is being done to us, or even what is being done by us. What algorithms does Google use when we search on political topics? We don’t know. Such searches are inevitably biased—perhaps deliberately, perhaps not—but the bias is unknown to us, perhaps unknown even to Google, and the obscurity grows by the year. Google searches may (and likely do) tend to favor certain political views.

It is not far-fetched to imagine self-driving cars “choosing” routes that pass merchants who “advertise” surreptitiously through the autonomous vehicles. How much would McDonald’s pay to route the cars past the Golden Arches and slow them down as they pass? How much would a political party pay to skew a Google search on its candidates? Searches are likely skewed in unintentional ways as well. The unfathomable layers of complexity in contemporary AI make objectivity and balance nearly impossible to ensure or enforce.
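A hypothetical route scorer shows how easily such a skew could hide inside an otherwise reasonable objective function. Everything here is invented for illustration—the sponsorship weight is attributed to no real vehicle or mapping system—but it shows the shape of the problem: the rider sees only the chosen route, never the weighting that produced it.

```python
# Hypothetical sketch: a route scorer that mostly minimizes travel
# time but quietly credits routes passing "sponsored" locations.
# The bias lives in one invented constant buried in the objective.

SPONSOR_BONUS = 2.0  # invented: minutes "forgiven" per sponsor passed

def score(route):
    """Lower is better: travel time minus a hidden sponsorship credit."""
    return route["minutes"] - SPONSOR_BONUS * route["sponsors_passed"]

def choose(routes):
    """Pick the route with the lowest score."""
    return min(routes, key=score)

routes = [
    {"name": "direct", "minutes": 10, "sponsors_passed": 0},
    {"name": "scenic", "minutes": 13, "sponsors_passed": 2},
]

# The slower, sponsor-heavy route wins: 13 - 2*2.0 = 9 beats 10.
print(choose(routes)["name"])
```

From the rider’s seat, the car simply “took the scenic route.” Nothing in the experience reveals that three minutes of the rider’s time were traded away inside the objective function.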

The most dangerous aspect of AI to our liberty is the obscurity inherent to it. AI blinds us to motives and processes.

The second danger of AI, which follows from the first and enormously magnifies it, is contagion. AI provides boundless instantaneity and dissemination of ideas. I can type a sentence and (in principle) have it read on every computer on earth in less than a second. I can praise my heroes and denounce my enemies instantly and without geographical bounds. Flash mobs are inherent to AI, and there is no practical limit to their immediacy, size, or fervor. Social media provides us with virtual mobs—and not infrequently physical ones—within seconds or minutes.

The economic and social impacts of technology, as Harari points out, are profound and of great interest. But the primal danger that AI poses to humanity is deeper and more insidious than economics. AI changes our psyche, individually and collectively. AI obscures the forces acting on us and the internet makes our reactions to them go viral.

Obscure contagion—rapid, wide dissemination of ideas we don’t understand—is the prime threat AI poses to humanity. It is an existential threat to human dignity and flourishing. And this threat is made graver, not less grave, by our democracy, which, as Plato understood, is the necessary soil of tyranny.

It is not clear that we can avoid our fate, which seems to be rushing at us faster and faster, as any perceptive observer of the contemporary world and national affairs can attest. If we are to retain our humanity and our liberty, we must understand what is happening to us. We must understand the unprecedented force kindled by artificial intelligence. That force is the obscurity and fierce contagion of ideas.

Note: The essay “Why Technology Favors Tyranny” is adapted from Yuval Noah Harari’s most recent book, 21 Lessons for the 21st Century.

Michael Egnor is a neurosurgeon, professor of Neurological Surgery and Pediatrics, and Director of Pediatric Neurosurgery, Neurological Surgery, Stony Brook School of Medicine.