
Can a Game Prove That Computers Could Really Think?

Philosopher Daniel Dennett thinks so. Let's apply Occam's Razor and see

In his paper “Real Patterns,” Tufts University philosopher Daniel Dennett writes the following:

In my opinion, every philosophy student should be held responsible for an intimate acquaintance with the Game of Life. It should be considered an essential tool in every thought-experimenter’s kit, a prodigiously versatile generator of philosophically important examples and thought experiments of admirable clarity and vividness.

One of the reasons Dennett likes the Game of Life is that he thinks it can help us understand how computers could be genuinely intelligent. Now, I do think the Game of Life provides us with some interesting thought experiments, but for precisely the opposite reason: the Game of Life simulation makes it manifestly obvious that conventional computers could never be intelligent in the strong sense of the word.

Conway’s Game of Life is not a game in the conventional sense. Rather, it is a computer simulation that takes place on a two-dimensional square grid. Each cell in the grid is in one of two states: alive or dead. The precise rules of the game need not concern us here. The main point of the Game of Life is that, from its very simple rules, a rich diversity of complex behavior emerges.
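Although the rules themselves do not matter to the argument, a concrete sketch may help fix ideas. The following Python snippet is a minimal, illustrative implementation (not any canonical one), representing the grid as a set of live-cell coordinates on an unbounded plane:

```python
from collections import Counter

# A minimal sketch of Conway's update rule, assuming an unbounded grid
# represented as a set of live-cell coordinates (dead cells are implicit).
def step(live: set) -> set:
    """Apply one generation of the Game of Life."""
    # Count how many live neighbors every relevant cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation iff it has exactly three
    # live neighbors, or it is currently alive and has exactly two.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic "glider" repeats its shape every four generations,
# shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape as before, translated by (1, 1)
```

The glider is one example of the rich behavior the rules generate: a five-cell pattern that perpetually propagates itself across the grid.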

Red glider on the square lattice with periodic boundary conditions/Lev Kalmykov (CC BY-SA 4.0)

Now one of the reasons why some AI proponents are interested in the Game of Life is that a Universal Turing Machine can be implemented in it. A Universal Turing Machine can emulate any kind of computer software system. Because of this, AI proponents like to make the following kind of argument:

1. A Universal Turing Machine can do anything a conventional computer can do.
2. The Game of Life can do anything a Universal Turing Machine can do.
3. Anything that can pass the Turing Test has cognitive states.
4. One day a conventional computer will pass the Turing Test.
5. Therefore, one day the Game of Life could have cognitive states.

However, valid as this argument may be, the third and fourth premises are far from obvious. So if you are not an AI proponent, you can simply apply modus tollens and argue as follows (both inferences are sketched formally below):

1. A Universal Turing Machine can do anything a conventional computer can do.
2. The Game of Life can do anything a Universal Turing Machine can do.
3′. The Game of Life could never have cognitive states.
4′. Therefore, a conventional computer could never have cognitive states.
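The logical core of both arguments can be made explicit. Here is a minimal sketch in Lean; the names CC, GC, and em are illustrative assumptions introduced here, with em standing for the conditional that premises (1) and (2) are meant to secure, namely “if a conventional computer could have cognitive states, then so could the Game of Life”:

```lean
-- CC : a conventional computer could have cognitive states
-- GC : the Game of Life could have cognitive states
-- em : the emulation conditional, CC → GC (an assumption here)

-- The AI proponent affirms the antecedent (modus ponens):
theorem proponent (CC GC : Prop) (em : CC → GC) (h : CC) : GC :=
  em h

-- The critic denies the consequent (modus tollens):
theorem critic (CC GC : Prop) (em : CC → GC) (h : ¬GC) : ¬CC :=
  fun hc => h (em hc)
```

Both sides accept the same conditional; the disagreement is entirely over which end of it to affirm or deny.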

Figure 1: A universal Turing machine in Conway’s Game of Life / Paul W. Rendell, published 2011 in 2011 International Conference on High Performance… DOI: 10.1109/HPCSim.2011.5999906

To explain why I think premise (3′) is true, I will not question an important assumption Dennett makes: that something must be able to change its internal state in response to its environment if it is to possess cognitive states that mentally represent its environment. I’m inclined to think there is a lot more to an entity’s having cognitive states than its ability to change its internal state in response to its environment. Nevertheless, granting Dennett this assumption lets us argue that the Game of Life does not possess cognitive states. For if Dennett is correct in this thesis and there exist entities in the Game of Life possessing cognitive states, then we should be able to identify these entities.

The first place to look for such entities would be at the single-cell level. If one watches a single cell as the Game of Life runs its course, at various times one would see it come alive, and at other times one would see it die. We could, therefore, ask: in what ways can a single cell respond to its environment? Well, there are 256 possible combinations of live and dead cells immediately surrounding a single cell, but the update rule lets the cell differentiate between only three types of combination: exactly three live neighbors (the cell will be alive in the next generation), exactly two live neighbors (the cell keeps its current state), and everything else (the cell will be dead). Now, one would be hard-pressed to defend the claim that even something that could distinguish between all 256 possible types of environmental condition has cognitive states. Therefore, a fortiori, something that can distinguish between only three possible environmental conditions is not going to have cognitive states.
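That the 256 neighborhoods collapse into just three effective classes is easy to check by brute force. Here is a minimal sketch in Python, assuming the standard Conway rules (a live cell survives with two or three live neighbors; a dead cell comes alive with exactly three):

```python
from itertools import product

def next_state(alive: bool, neighbors) -> bool:
    """Conway's update for a single cell given its eight neighbors."""
    n = sum(neighbors)
    return n == 3 or (alive and n == 2)

# Group all 2^8 = 256 neighborhoods by the response they induce in the
# cell, i.e. by the pair (next state if dead, next state if alive).
classes = {}
for nb in product((0, 1), repeat=8):
    response = (next_state(False, nb), next_state(True, nb))
    classes.setdefault(response, []).append(nb)

for response, members in sorted(classes.items()):
    print(response, len(members))
# (False, False) 172  -- neither 2 nor 3 live neighbors: cell dies
# (False, True)   28  -- exactly 2 live neighbors: cell keeps its state
# (True, True)    56  -- exactly 3 live neighbors: cell lives
```

However finely the environment varies, the cell can respond to it in only three distinguishable ways.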

If the single cells in the Game of Life do not have cognitive states, what else might have them? Maybe a particular system of cells has a cognitive state. However, for such a claim to be intelligible, we need to make sense of what it means for something to be a system possessing internal states.

The trouble is, Dennett is rather vague about what he means by a system. What determines when something belongs to a system and when it doesn’t? Are we to think that every subset of things in the universe is a system? And if so, does this simply reflect our own subjective thoughts about the universe, or does it reflect our recognition of a higher-level ontology possessed by the systems themselves? Dennett adopts the latter view.

Dennett’s ontological claim raises the obvious question: why shouldn’t we apply Occam’s Razor and refrain from positing the existence of more entities than are required to account for a given state of affairs? With the Game of Life, all you need to do is specify the states of all the individual cells; nothing further is required to explain what’s going on.

Dennett’s response is that Occam’s Razor doesn’t apply to the Game of Life. While I agree with Dennett that Occam’s Razor shouldn’t be used overzealously, we shouldn’t be too reluctant to use it either. The reason why Dennett rejects Occam’s Razor in the Game of Life is that if he didn’t, then nothing in the Game of Life would be capable of possessing cognitive states. But that surely has the order of explanation the wrong way round. If an ontologically parsimonious account of the Game of Life cannot account for cognitive states, then perhaps the Game of Life is the wrong way to go about understanding cognitive states: The Game of Life is simply a bad model of cognitive reality. In other words, there is no obvious way in which the Game of Life possesses cognitive states, and hence no way in which a conventional computer could have cognitive states either.

Here is a longer version of this essay.

Fr. Robert Verrill is an English Dominican Friar who was ordained to the priesthood in 2012. Having completed a master’s in Philosophy at the Dominican School of Philosophy and Theology in Berkeley, California, he has been working on his doctorate in philosophy at Baylor University in Waco, Texas, since 2016. Before joining the Dominican Order, Robert studied mathematics at Cambridge in England and completed a doctorate in operator algebras and conformal field theory. He then worked in the sonar industry for several years as a systems/software engineer. He joined the Dominicans in 2006 and completed a Sacred Theology Baccalaureate at Blackfriars, Oxford in 2012. Fr. Robert loves the philosophy and theology of St. Thomas Aquinas and is particularly interested in trying to understand modern physics from a Thomistic perspective.

See also: AI That Can Read Minds? Deconstructing AI Hype. The source for the claims seems to be a 2018 journal paper, “Real-time classification of auditory sentences using evoked cortical activity in humans.” The carefully described results are indeed significant, but what the Daily Mail article didn’t tell you sheds a rather different light on the AI mind reader. (Robert J. Marks)

