A 2018 book by political scientist Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, argues that it will: “automated systems entrench social and economic inequality by design and undermine private and public welfare.”
In his just-published book, The Human Advantage: The Future of American Work in an Age of Smart Machines, Jay Richards takes a different tack: historically, innovation has created opportunity but has required huge changes in how people view work:
Most of the work of the future doesn’t exist yet, so you can’t specialize for it. Sure, tech skills are valuable, and many of them can be learned on the cheap (more on that in a bit). A degree in engineering, science, or business still promises a good rate of return. You won’t go hungry if you avoid too much student debt, get good grades at a good school, graduate with a BS in a high-demand field such as computer engineering, and move to where the jobs are.
But don’t imagine that a high-tech economy requires us all to become coding wizards, any more than being a NASCAR driver requires you to be a mechanical engineer. Instead, you should develop a suite of skills that allows you to adapt quickly. (pp. 84–85)
So he sees the main problem as learning to adapt to a fast-moving AI economy.
As a result, he thinks that we should fear not the robots so much as the robot philosophers. He cites one:
Take Don Howard. He’s a philosophy professor at Notre Dame, a leading Catholic university. In a “Think” piece at NBC.com, he asks whether robots deserve human rights.
I read it expecting a Notre Dame philosopher to take on the bad arguments for “strong AI” (the idea that computers will become conscious persons). Instead, he takes strong AI for granted.
So far as I can tell, he accepts strong AI hokum hook, line and sinker, despite well-known objections to it. In truth, there’s no more reason to think computers will become conscious than to think that strong tractors will become oxen. The whole argument rests on the assumption that we and computers are basically the same kinds of things.
I don’t blame the man on the street for worrying about killer robots. His ideas are formed by thousands of hours watching sci-fi movies like Star Trek and Terminator. But a philosopher should know better. A philosopher at Notre Dame should really know better.
One would think that Stephen Hawking (1942–2018) and Elon Musk would know better too. Yet they have played to the panic, possibly in part because they knew instinctively that panic is the story the media understand and want. These days, it’s harder to sell a story whose main message is that we should acquire a broad skill set and avoid chasing away the science prof…
It may also be worth asking whether, for some, there is an upside to panic. Many of us think that government and corporate surveillance and invasion of privacy are pressing concerns. But others may see those outcomes as a promising opportunity to remake society the way they think it should be. We’ll be discussing these questions in future posts.