Just recently, a British popular science magazine warned us to beware of the day, coming soon, when Big Data will predict our future. Multi-agent artificial intelligence (MAAI), we are told, “allows predictions to be made with extraordinary accuracy” by creating “entire artificial societies” on which to test ideas:
Want to know what will happen if 20,000 Syrian refugees arrive in a city in western Europe? Build an artificial society and watch. Want to know how to make the integration of those immigrants peaceful? Build an artificial society, try things out and see what works. Want to stoke anti-immigrant hostility or design a disinformation campaign to win an election…?

– “Predicting the future is now possible with powerful new AI simulations” at New Scientist
The confidence seems boundless:
Unsurprisingly, it isn’t a trivial undertaking, taking about a year. But once validated, you are ready to play God. That might just mean setting initial conditions and watching how things pan out. It might mean testing an intervention that you think might help – say, pumping resources into a deradicalisation programme. Or it might mean asking the simulation to find a pathway to a desirable future state.

– “Predicting the future is now possible with powerful new AI simulations” at New Scientist
But surely bad actors could use the program just as easily as New Scientist’s virtuecrats? Even fans admit that:
Given the current political climate around the planet, however, MAAI will most certainly be put to insidious means. With in-depth knowledge comes plenty of opportunities for exploitation and manipulation, no deepfake required. The intelligence might be artificial, but the target audience most certainly is not.

– Derek Beres, “Can AI simulations predict the future?” at Big Think
But how sure are we that, in an uncertain, constantly shifting world, elaborate simulations would work as well as claimed? What happened, for example, when IBM’s AI Jeopardy king Watson was tried out in medicine and the stock market?
- Why was Watson a flop in medicine? According to Pomona College professor Gary N. Smith, the basic problem was that Watson could sort through reams of data very quickly but couldn’t understand “which medical articles are reasonable and which are bull.” It might be awkward to write a program for that.
- And the stock market? “Investor, AI isn’t your big fix.” Again, Prof. Smith found that, in investing and elsewhere, an AI label is often more effective for marketing than for performance: “Watson’s Jeopardy win was stunning but the ability to search a database for facts and use a lightning-fast electronic finger to push a button don’t have much to do with predicting whether the price of Apple stock is about to go up or down.”
- And then there’s ADA, the Democratic Party’s political computer system that measured everything but enthusiasm in 2016. Oops. Smith: “Some people say, if you can’t measure it, it doesn’t count but sometimes the things that count can’t be measured.”
A safe prediction: some will try MAAI, and many stories will follow.
For more things in life that are hard to script, courtesy of Gary Smith:
“A BABY, A GEEK, and a COW” all walk into a bar…
We see the pattern! But is it real? It’s natural to imagine that a deep significance underlies coincidences. Unfortunately, patterns are not always a source of information. Often, they are a meaningless coincidence, like the 7-11 babies this summer.