
Can a Machine Really Write for the New Yorker?

If AI wins at chess and Go, why not? Then someone decided to test that…
George Gilder talking with Peter Robinson at the Hoover Institution

Tech philosopher and futurist George Gilder (pictured) has a new book out, Gaming AI.

Short and sweet, it explains how artificial intelligence (AI) will—and won’t—revolutionize the economy and human life. Get your free digital copy here.

And now, below is a short piece he wrote, unpacking one of the book’s themes—the claim that AI can do anything that humans can do. Find out why he says no:


Ilya Sutskever (pictured) may be the smartest man in the world you have never heard of. No sweat, I hadn’t heard of him either.

Still under 40, he’s part of the all-male Google mindfest around “Google Brain.” Educated at the Open University of Israel and mentored by artificial intelligence (AI) pioneer Geoffrey Hinton at the University of Toronto’s machine learning center, Sutskever is a nerd of many mind-blowing marvels.

A coinventor of both Google’s multiple-world-champion AlphaGo automated game player and its machine-learning toolset TensorFlow, he is also a seminal figure in the “convolutional” neural nets at the heart of the multi-trillion-dollar post-COVID big data economy. For “convolutional,” think of the folds, feedback loops, and levels of your brain as it detects, identifies, and understands images.

Sutskever has now gone on from Google to become chief scientist at OpenAI, a non-profit venture funded in part by Peter Thiel, who brings philosophical depth and practical sense to the definition of smart. Thus Thiel’s big data and security company Palantir, born of 9-11 and biased toward defense of human minds rather than their replacement, is now going public, while Sutskever goes non-profit at OpenAI.

OpenAI wants to save the world from the alleged threat of runaway machines. Like COVID, AI is deemed an “existential threat” by intellectuals with more imagination than practical profundity, such as Elon Musk and the late Stephen Hawking. I am not sure why Thiel is involved in this pursuit, but he also wants to reorient AI toward the human race. I’m betting on Thiel’s business insights over his non-profit fancies.

At OpenAI, Sutskever has launched GPT-3, the third-generation jargon blaster “Generative Pretrained Transformer.” Wearing a tee-shirt inscribed “The future will be unsupervised,” Sutskever aims to produce a machine that writes stories, understands them, and can explain them without human “supervision.” It’s the epitome of the utopian dream of replacing human brains with self-learning, self-improving machines.

This epoch’s prime battleground in both technology and philosophy, AI at its worst aims at a new demotion of the human race and a new triumph of Communism. Deeming the human brain a suboptimal product of random evolution—a mere “meat machine”—the new computer science sees no limit to the ongoing ascent of computers and the corresponding descent of humans.

With China leading the world in AI expenditures and AI deployments in so-called “smart cities,” the leaders of the Chinese Communist Party (CCP) may even believe that AI can make socialism work.

Pivotal to this advance was a Google machine-learning triumph in the ancient game of Go. Invented in China some four thousand years ago, Go is a game of territorial control and maneuver, providing 361 points of intersection for patterns of black and white stones successively positioned in elaborate geometries across the board. The player who surrounds and captures the most territory wins.

In Seoul in 2016, Lee Sedol, a 33-year-old Korean and an 18-time international champion of Go, played against AlphaGo, a machine-learning program created by Google’s DeepMind division, with Sutskever among the coauthors of the AlphaGo research. And Sedol lost four out of five engagements.

More portentous still, in October 2017 Google’s DeepMind launched AlphaGo Zero. This version was based solely on reinforcement learning, without direct human input beyond the rules of the game. In a form of adversarial self-play, AlphaGo Zero vied against itself millions of times. It became its own teacher.

“Starting tabula rasa,” a paper by the developers concludes, “our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.”
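
To make the self-teaching idea concrete, here is a minimal sketch in Python of the same principle at toy scale: a tabular agent that learns tic-tac-toe purely by playing against itself. The game choice, the value table, and the hyperparameters are my illustrative assumptions; AlphaGo Zero’s actual method couples a deep neural network to Monte Carlo tree search at vastly greater scale.

```python
# Minimal self-play sketch (illustrative, NOT AlphaGo Zero's actual method):
# a tabular agent learns tic-tac-toe by being its own opponent and teacher.
import random
from collections import defaultdict

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

# values[state] = estimated outcome for the player who just moved into `state`
values = defaultdict(float)
ALPHA, EPSILON = 0.2, 0.1  # learning rate and exploration rate (assumed)

def play_one_game():
    board, player, history = ["."] * 9, "X", []
    while True:
        moves = [i for i, s in enumerate(board) if s == "."]
        if random.random() < EPSILON:           # explore occasionally
            move = random.choice(moves)
        else:                                   # otherwise pick the best-rated move
            def score(m):
                nxt = board[:]; nxt[m] = player
                return values["".join(nxt)]
            move = max(moves, key=score)
        board[move] = player
        history.append("".join(board))
        w = winner(board)
        if w or "." not in board:
            # feed the final result back through the game, flipping
            # perspective at each ply: the learning signal is the game's
            # own outcome, with no human examples involved
            result = 1.0 if w else 0.0
            for state in reversed(history):
                values[state] += ALPHA * (result - values[state])
                result = -result
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):  # the agent is its own teacher
    play_one_game()
```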

The program employed two key machine-learning techniques. One, popularized by venerable Google guru Hinton, is “backpropagation.”

Feeding back the errors, this method corrects the system by adjusting all the neural weights of its “neuron” filters. The entire network adapts until the outputs conform to a pattern of targets—such as a winning position in Go. The second breakthrough is evolutionary or “genetic” programming, pioneered by the University of Michigan’s John Holland (1929–2015), which “evolves” new techniques by competitive survival of the fittest. Using similar techniques, AI has mastered such previously intractable fields as protein folding (DeepMind’s AlphaFold) and stock-market trading (Renaissance Technologies, 1995–2020).
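
For readers who want to see “feeding back the errors” in code, here is a minimal backpropagation sketch, assuming nothing beyond Python and numpy. The tiny two-layer network, the XOR task, and the learning rate are illustrative choices, not anyone’s production system.

```python
# Minimal backpropagation sketch: output errors flow backward to
# adjust every weight, until outputs conform to the targets.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden-layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output-layer weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5  # learning rate (assumed)

for step in range(5000):
    # forward pass: the network's current guesses
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error toward the input,
    # computing each layer's gradient of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # nudge every weight against its error gradient
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]
```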

In retrospect, AI skeptics like me disparage such feats as mere rote computer processing. The program can make millions of moves or “investments” or logical steps a second while its human adversary mulls fecklessly over one. This advantage should be enough to prevail in any logical competition that doesn’t have intrinsic information entropy or surprise.

But none of us AI critics actually predicted such a machine success as the Go championship. In a game of logic and strategy, a machine learned how to defeat a world champion human by dint of computer pattern-recognition and feedback loops alone.

No question, that’s an awesome achievement, and Sutskever was at the center of it. Hence his plausible claim to lead the human race in intelligence. Sutskever now believes, as he told New Yorker scribe John Seabrook, that even people like himself may soon be eclipsed in creativity, intelligence, and writing ability by a machine.

“If you train a system which predicts the next word well enough then it ought to understand,” he said. “Researchers can’t disallow the possibility that we will reach understanding when the neural net gets as big as the brain.”
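
To see how bare “predicting the next word” can be, here is a deliberately crude sketch: a bigram counter that predicts each word from the one before it. The training sentence is invented for illustration; GPT-3 replaces the lookup table with a deep network trained on hundreds of billions of words, but the objective, guessing what comes next, is the same in kind.

```python
# Crude next-word predictor (a bigram counter), for illustration only.
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)  # word -> counts of the words seen after it
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # return the most frequent successor observed in training
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # whichever successor was counted most often
```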

As I have explained in Life after Google and in my new short book from Discovery Institute, Gaming AI, I regard this Sutskever faith as a stupid materialist religion.

As philosopher Charles Sanders Peirce (1839–1914, pictured) showed early in the 20th century, logical systems such as mathematics or computational Boolean algebra consist of symbols and objects. Like maps and territories, they are not self-evidently linked. They require a human “interpretant” to make the connections across the inevitable epistemic gap.

A game like Go is entirely a map or symbol system. No territory is involved, so it can be “won” without “understanding” or interpreting anything. Black and white stone symbols are all there is.

Now Sutskever is using the same essential technology to create GPT-3, which seeks to “understand” words and stories. GPT-3 itself must act as the Peircean interpretant between its own symbols, which are words, and its objects, which are the fabric of mind, narrative, story, and meaning.

In its effort to achieve an author’s creativity or imagination, entropy or surprisal, the GPT-3 writer-interpreter makes the blunder of using randomness, or “stochastic” techniques. But randomness does not add information. It subtracts information. Randomness conveys entropy but not meaning. It resembles creativity mathematically but is actually just noise. Confusing the two is the fundamental error of prevailing computer fashions.
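
The “stochastic” step at issue looks roughly like the sketch below, which samples a next word from a model’s probability distribution rather than always taking the likeliest word. The vocabulary and scores are invented for illustration.

```python
# Temperature sampling sketch: randomness injects Shannon entropy
# (statistical surprise) into the output, not new meaning.
import numpy as np

rng = np.random.default_rng()
vocab = ["dog", "cat", "runs", "sleeps"]  # hypothetical vocabulary
logits = np.array([2.0, 1.5, 0.5, 0.1])  # hypothetical model scores

def sample(temperature=1.0):
    # higher temperature flattens the distribution: more surprise,
    # no added information about the world
    p = np.exp(logits / temperature)
    p /= p.sum()
    return rng.choice(vocab, p=p)

print(sample(0.1))  # nearly always "dog" (close to greedy decoding)
print(sample(2.0))  # scattered far more widely across the vocabulary
```

Raising the temperature raises measured entropy, which is precisely the confusion flagged above: the added surprise is statistical, not semantic.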

Seabrook’s New Yorker story tells of GPT-2’s failure, given the entire New Yorker archive, to write a “New Yorker” story that made any sense at all. Treating words like musical notes, or Go positions—symbols without objects—GPT-2 produced an accurate simulation of language without its meaning. It thus generated a tintinnabulation of New Yorker sounds without deeper significance. That’s called gibberish.

My Prophecy: If GPT-3 were actually a general-purpose machine learner and writer that could outperform humans, there would be little or no market for any of our other companies or prophecies. We could all retire to the beach.

But Sutskever’s OpenAI is Silicon Valley religion and will probably dwindle into an enthusiasm of AI dilettantes.

Note: Get your free digital copy of Gaming AI here.


Also: At Mind Matters News, we tried testing some of our own copy last year, to see if an autobabble detector could distinguish it from machine output. It did, but some of the other results were quite interesting too.

