Mind Matters Natural and Artificial Intelligence News and Analysis
Ring fire in black

AI profs: Beware “Black Ball” Tech That Could Destroy the Planet

Oxford Future of Humanity researchers contemplate a technology with immense destructive powers that is easy to access and use

Nick Bostrom and Matthew van der Merwe of Oxford’s Future of Humanity Institute pose a sticky question: What if we invented a “black ball” technology, one that would destroy human civilization?

In the wake of Hiroshima, many people predicted that nuclear technologies would destroy the world. Albert Einstein is purported to have said, “I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.”

However, say Bostrom and van der Merwe, to make nuclear technology work for you, you need to be a nuclear physicist. One might add that radioactive materials also “send messages”: they are detectable and traceable, which makes clandestine weapons programs hard to hide.

Figuring out what untrusted actors are doing with nukes did not prove to be as hard as feared. For example, Israel destroyed Iraq’s nuclear aspirations in 1981 and Syria’s in 2007 via conventional attacks. Factors like these have prevented nuclear weapons from becoming a simple “black ball” technology — although they continue to represent a serious danger in principle.

Bostrom and van der Merwe ask us to consider the principle behind their so-far theoretical black ball: a technology that any aggrieved crank, let alone terrorist, could easily acquire and use to wreak huge havoc. What if, for example, an aggrieved individual could make a nuclear weapon, undetected, in a basement workshop, a scenario they call “easy nukes”?

Given the diversity of human character and circumstance, for any imprudent, immoral or self-defeating action, there will always be some fraction of humans (‘the apocalyptic residual’) who would choose to take that action – whether motivated by ideological hatred, nihilistic destructiveness or revenge for perceived injustices, as part of some extortion plot, or because of delusions. The existence of this apocalyptic residual means that any sufficiently easy tool of mass destruction is virtually certain to lead to the devastation of civilisation.

Nick Bostrom and Matthew van der Merwe, “How vulnerable is the world?” at Aeon (February 12, 2021)
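The logic of the “apocalyptic residual” can be made concrete with a toy probability calculation. If each person with access to the technology independently has even a tiny probability of using it destructively, the chance that nobody does shrinks exponentially as access widens. The fraction and population figures below are illustrative assumptions, not numbers from Bostrom and van der Merwe:

```python
def prob_at_least_one_misuse(p: float, n: int) -> float:
    """Probability that at least one of n independent actors,
    each willing to misuse the technology with probability p,
    actually does so: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# Suppose only one person in a million would use it (p = 1e-6),
# but 100 million people can access it. Misuse is then
# effectively certain:
print(prob_at_least_one_misuse(1e-6, 100_000_000))

# With access restricted to 1,000 people, the risk stays small:
print(prob_at_least_one_misuse(1e-6, 1_000))
```

Under these made-up numbers, the first probability is indistinguishable from 1, while the second is about 0.1%, which is the intuition behind the authors’ claim that any “sufficiently easy” tool of mass destruction virtually guarantees devastation.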

They call this idea the “vulnerable world” hypothesis. How, they ask, do we strategize around the eventual emergence of a simple but deadly technology? They contemplate more surveillance and global government as a possible solution:

Yet as difficult as many of us find them to stomach, stronger surveillance and global governance could also have various good consequences, aside from stabilising civilisational vulnerabilities. More effective methods of social control could reduce crime and alleviate the need for harsh criminal penalties. They might foster a climate of trust that enables beneficial new forms of social interaction to flourish. Global governance could prevent all kinds of interstate wars, solve many environmental and other commons problems, and over time perhaps foster an enlarged sense of cosmopolitan solidarity. Clearly, there are weighty arguments for and against moving in either direction, and we offer no judgment here about the balance of these arguments.

Nick Bostrom and Matthew van der Merwe, “How vulnerable is the world?” at Aeon (February 12, 2021)

Actually, nothing would be more likely to trigger attempts to destroy vast swathes of one’s surroundings than full-on Big Government backed by constant surveillance. That environment would foster a nothing-to-lose mentality in far more people than experience that state of mind now.


The authors seem to believe nonetheless that effective enough surveillance could prevent such rebellions. But, apart from any other considerations, such a proposal assumes that a human being can invent a system that another human being can’t game. The evidence for that proposition is, at best, mixed.

There is, in any event, some chance that such a black ball technology can’t really exist. All technologies are limited by one factor or another, which is why technology can so often integrate into the life of the planet without simply destroying it. We may not always be so lucky, but Bostrom and van der Merwe’s thesis doesn’t attempt to show that there must be a black ball in our future, only that there might be.

The paper on which their vulnerable world hypothesis is based is open access.


Some sci-fi hypotheses as to why we do not meet intelligent aliens riff off the black ball idea, for example:

The Berserker Hypothesis: Did the smart machines destroy the aliens who invented them? A smart deadly weapon could well decide to do without its inventor and, lacking moral guidance, destroy everything in sight.

and

The Kardashev scale: If it was easy for life in the universe to evolve from mud to advanced space exploration, we should indeed be seeing lots of aliens, Star Trek–style. So, if we don’t, perhaps a catastrophe lies between where we are now and where they would need to be to reach us.

Also: You can read a number of other hypotheses as to why we do not see intelligent aliens here.

