The term “Big Data” has dominated nearly every conversation about technology and business. Many people seem to believe that we can replace thinking about our problems with the simple application of statistics to large samples of data or with digital models of people interacting on a digital stage.
New Scientist recently touted the abilities of “multi-agent artificial intelligence” (MAAI) to predict the future of interactions within a society. Essentially, proponents claim that if we want to know what happens in a given social context, all we need to do is build an “artificial society” using standard simulation tools and see what happens. Voila! Future predicted.
Of course, as is usual in these cases, nothing in the simulation models what will happen if the participants whose behavior is being modeled acquire such a tool themselves and use it against the researchers!
A possible duel of tools is certainly not the only problem with relying on Big Data to predict behavior. The biggest problem is that human behavior is not as predictable as the models imply. Additionally, many models are ridiculously simplistic, making the results worse than worthless. They become a way of solidifying biases.
At the end of last year (2018), for example, we got a clear demonstration of this problem with the publication of a paper claiming to use artificial intelligence to solve problems of religious violence. The actual content of the paper was so simplistic as to be laughable, but the hype around “artificial intelligence” allowed it to pass peer review and be published as if it were science.
As I said in a Mind Matters News post on the subject at the time, “nearly every claim about the paper seems to misunderstand how computer models work generally and how they worked in this paper in particular.” Worse, the misleading science media reports weren’t pure imagination on the part of science writers; they seemed to derive from materials supplied through a university, in this case, Oxford.
With hype so pervasive, one thing we can safely predict is lots of possible entries for another Top Ten countdown at the end of 2020.
2019’s #1 hype is to be announced shortly. Watch for it!
2019 AI Hype Countdown #3: Quantum Supremacy? Less supreme than it sounded. It’s possible that Google’s quantum result can be generalized to more useful scenarios than the test case though it isn’t immediately obvious how. What Google really achieved was increased stability in its quantum computing platform. Keeping qubits stable has been a hard problem in quantum computing for a long time. This event was certainly a step forward, but advertising it as “quantum supremacy” was a classic exercise in hype.
2019 AI Hype Countdown #4: Investment: AI beats the hot stock tip… barely. At the end of the day, AI-based investing actually performed like a bad index fund. Artificial intelligence may do well summarizing data, but the new insights that will lead the economy forward cannot be gleaned that way. What we need is not old data but new truths.
2019 AI Hype Countdown #5: Transhumanism never grows old. The idea that we can upload our brains to computers to avoid death shows a fundamental misunderstanding of the differences between types of thinking. Computers are very effective but they operate with a very limited set of causal abilities. Humans work from an entirely different set of causal abilities. Uploading your brain to a computer is not a question of technology. It can’t work in principle.
2019 AI Hype Countdown #6: In May of this year, The Scientist ran a series of pieces suggesting that we could automate the process of acquiring scientific knowledge. In reality, without appropriate human supervision, AI is just as likely to find false or unimportant patterns as real ones. Additionally, the overuse of AI in science is actually leading to a reproducibility crisis.
2019 AI Hype Countdown #7: “Robot rights” grabs the mike. If we could make intelligent and sentient AIs, wouldn’t that mean we would have to stop programming them? AI programs are just that: programs. Nothing in such a program could make it conscious. We may as well think that, if we make sci-fi lifelike enough, we should start worrying about Darth Vader really taking over the galaxy.
2019 AI Hype Countdown #8: Media started doing their job! Yes, this year, there has been a reassuring trend: Media are offering more critical assessment of off-the-wall AI hype. One factor in the growing sobriety may be that, as AI technology transitions from dreams to reality, the future belongs to leaders who are pragmatic about its abilities and limitations.
2019 AI Hype Countdown #9: Hype fought the law and… Autonomy had real software, but the hype around Big Data had discouraged Hewlett Packard from taking a closer look. Autonomy CFO Sushovan Hussain was sentenced this year to a five-year prison term and a ten-million-dollar fine because he was held “ultimately responsible for Autonomy’s revenues having been overinflated by $193m between 2009 and the first half of fiscal 2011.”
2019 AI Hype Countdown #10: Sophia the robot still gives “interviews.” In other news, few popular media ask critical questions. As a humanoid robot, Sophia certainly represents some impressive engineering. It is sad that the engineering fronts ridiculous claims about the state of AI, using partially scripted interactions as if they were real communication.