Recently, researchers discovered that fruit flies use a filter similar to a computer algorithm to assess the odors that help them find fruit, only the flies’ tools are more sophisticated:
“When a fly smells an odor, the fly needs to quickly figure out if it has smelled the odor before, to determine if the odor is new and something it should pay attention to,” says Saket Navlakha, an assistant professor in Salk’s Integrative Biology Laboratory. “In computer science, this is an important task called novelty detection.”
Computers use a Bloom filter for that task, Navlakha explains:
When a search engine such as Google crawls the Web, it needs to know whether a website it comes across has previously been indexed, so that it doesn’t waste time indexing the same site again. The problem is there are trillions of websites on the Web, and storing all of them in memory is computationally expensive. In 1970, Burton Howard Bloom devised a data structure that can store a large database of items compactly. Instead of storing each item in the database in its entirety, a Bloom filter stores a small “fingerprint” of each item using only a few bits of space per item. By checking whether the same fingerprint appears twice in the database, a system can quickly determine whether the item is a duplicate or something novel.
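The core trick is easy to sketch. Below is a toy Bloom filter in Python — a minimal illustration only, not how a production crawler stores fingerprints; the bit-array size and the use of salted SHA-256 hashes are choices invented for this example:

```python
import hashlib


class BloomFilter:
    """Compact set membership with a small chance of false positives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive several bit positions from one item by salting the hash.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        # Record the item's "fingerprint": a few set bits.
        for pos in self._positions(item):
            self.bits[pos] = True

    def __contains__(self, item):
        # False means definitely novel; True means probably seen before.
        return all(self.bits[pos] for pos in self._positions(item))


bf = BloomFilter()
bf.add("example.com")
print("example.com" in bf)   # True: its fingerprint was recorded
print("salk.edu" in bf)      # almost certainly False: novel
```

Note the asymmetry: a Bloom filter can return a rare false positive (an unseen item whose bits happen to collide), but never a false negative, so “novel” answers can be trusted.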
In the fly brain, neurons called “Kenyon cells” broadcast a “novelty alert” when a new odor is encountered. However, the fly introduces a couple of twists:
The first twist involves not just determining whether you’ve smelled the exact same odor before, but rather whether you’ve smelled that odor or something pretty similar to it. This matters in the brain because chances are that you’ll never smell the exact same odor twice. The second twist involves determining how long ago you smelled the odor. If it’s been a long time, then the odor’s novelty should be higher than if you smelled it quite recently. – “To Detect New Odors, Fruit Fly Brains Improve on a Well-known Computer Algorithm” at Salk News
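A rough way to picture the two twists in code: replace the Bloom filter’s exact-match bits with a random projection that gives similar odors overlapping “tags” (twist one), and let stored values fade over time (twist two). This is only a hypothetical sketch loosely inspired by the Salk description; the dimensions, the top-k sparsification, and the decay rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, invented for this sketch.
NUM_INPUTS, NUM_CELLS, TOP_K, DECAY = 10, 200, 10, 0.9

projection = rng.standard_normal((NUM_CELLS, NUM_INPUTS))  # random "wiring"
memory = np.zeros(NUM_CELLS)


def novelty(odor):
    """Score an odor's novelty in [0, 1], then record it in memory."""
    global memory
    activity = projection @ odor
    tag = np.argsort(activity)[-TOP_K:]   # sparse "tag" of most-active cells;
                                          # similar odors get overlapping tags
    score = 1.0 - memory[tag].mean()      # high when the tag cells are unused
    memory *= DECAY                       # twist 2: memories fade, so odors
                                          # smelled long ago regain novelty
    memory[tag] = 1.0                     # remember this odor's tag
    return score


odor_a = rng.standard_normal(NUM_INPUTS)
odor_b = odor_a + 0.01 * rng.standard_normal(NUM_INPUTS)  # nearly the same odor
odor_c = rng.standard_normal(NUM_INPUTS)                  # unrelated odor

first = novelty(odor_a)      # fully novel: score of 1.0
similar = novelty(odor_b)    # low score: its tag overlaps odor_a's
unrelated = novelty(odor_c)  # high again: mostly fresh cells
```

Because similar odors activate overlapping tags, a near-duplicate smell reads as familiar even though it is not bit-for-bit identical — the fuzziness an exact-match Bloom filter lacks.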
Building novelty detection into a computer is a bit of a challenge, Robert J. Marks acknowledges: “Novelty detection is an old problem in real and artificial neural networks. The dilemma in learning is a tradeoff between strengthening one of your old categories (stability) and learning something new (novelty). I’ve heard it described as the tradeoff between being skeptical and being gullible, or between stability and plasticity”:
The Stability-Plasticity Problem
The term is a bit of a misnomer, in that “stability-plasticity” merely highlights a problem (or dilemma) with conventional artificial neural network learning models. The general behavior of achieving stability and plasticity simultaneously in an adaptive system is not really a dilemma at all. The human brain is a perfect example of a system that quite handily achieves that goal. For that matter, so is the mouse brain. Since it involves asking the question, “How is simultaneous stability and plasticity facilitated within biological learning systems?”, perhaps a better label might be “The Stability-Plasticity Question.”
One risk is “catastrophic forgetting,” a common problem with artificial neural networks that results in “the catastrophic loss of previously learned responses, whenever an attempt is made to train the network with a single new (additional) response.”
The problem can be overcome by training the network only partially, in stages. In any event, the humble fruit fly must certainly avoid any such catastrophic loss of learned behavior just to feed itself in a world where it must constantly detect changing odors.
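Catastrophic forgetting is easy to demonstrate even in a one-layer toy model. The sketch below is hypothetical — the patterns, dimensions, and learning rate are arbitrary — but it shows the effect: a linear associator trained on response A, then on response B alone, loses its response to A, while interleaving the two during training preserves both:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "associator": weights w learn to map input patterns to targets.
# Patterns, dimensions, and learning rate are invented for this illustration.
dim = 20
x_a, y_a = rng.standard_normal(dim), 1.0   # old, learned response
x_b, y_b = rng.standard_normal(dim), -1.0  # new response to be added


def train(w, pairs, epochs=200, lr=0.02):
    """Plain delta-rule (LMS) training over the given pattern/target pairs."""
    for _ in range(epochs):
        for x, y in pairs:
            w = w + lr * (y - w @ x) * x
    return w


w = train(np.zeros(dim), [(x_a, y_a)])   # learn response A thoroughly
err_a_before = abs(y_a - w @ x_a)        # should be essentially zero

w = train(w, [(x_b, y_b)])               # now train on B alone...
err_a_after = abs(y_a - w @ x_a)         # ...and A's response is disturbed

w2 = train(np.zeros(dim), [(x_a, y_a), (x_b, y_b)])  # interleaved training
err_a_interleaved = abs(y_a - w2 @ x_a)  # both responses retained
```

Interleaving old and new material during training is one form of the staged, partial retraining mentioned above; in larger networks the same idea appears as “rehearsal.”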
Marks was part of a group that used artificial neural networks to assess novelty in a paper published in 2002:
Twenty years ago we applied novelty detection to multi-ton rotors for Southern California Edison. We collected data and trained a neural network to assess whether data from the rotor was not normal; that is, was the rotor operation “novel”? We encountered the stability–plasticity dilemma with respect to where we should put the threshold beyond which novelty was identified. If the operation was novel, it signified probable danger to come. The rotors could vibrate, come unseated, bounce across the floor, and crush a car or two. Novelty detection raised an early red flag.*
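A common recipe in this line of work is reconstruction-based novelty assessment: fit a model to “normal” operating data and flag readings the model reconstructs poorly. The cited papers used autoencoder neural networks; in the hypothetical sketch below a one-component PCA model stands in for the autoencoder, and the data, dimensions, and threshold are invented for illustration. The threshold line is exactly where the stability–plasticity tradeoff bites:

```python
import numpy as np

rng = np.random.default_rng(2)

# "Normal" sensor readings vary along one hidden direction, plus small noise.
# All data, dimensions, and the threshold here are invented for illustration.
direction = rng.standard_normal(8)
direction /= np.linalg.norm(direction)
normal = (np.outer(rng.standard_normal(200), direction)
          + 0.05 * rng.standard_normal((200, 8)))

# Fit the "model of normal": the mean plus the top principal component.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
component = vt[0]


def novelty_score(x):
    """Reconstruction error: distance of x from the learned normal subspace."""
    centered = x - mean
    reconstruction = (centered @ component) * component
    return float(np.linalg.norm(centered - reconstruction))


# Placing this threshold is where the stability-plasticity dilemma shows up:
# too low and the detector cries wolf; too high and it misses real danger.
THRESHOLD = 0.5

typical = mean + 1.5 * direction            # looks like the training data
odd = mean + 1.5 * rng.standard_normal(8)   # off-pattern reading

print(novelty_score(typical) < THRESHOLD)   # expected: normal
print(novelty_score(odd) > THRESHOLD)       # expected: novel, raise the flag
```

The appeal of the approach is that it needs no examples of failure: only normal operation is modeled, and anything the model cannot explain is treated as novel.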
Could a neural network just create new information to overcome some of these problems? Robert J. Marks and Eric Holloway recently presented a paper showing that unbounded creation of novelty by a computer program is not possible. This seems to be implicitly recognized by others. For example, in “Life as Evolving Software” (2012), computer scientist Gregory Chaitin writes, regarding a proposed theory of evolution: “The Busy Beaver function BB(N) grows faster than any computable function. That evolution is able to ‘compute’ the incomputable function BB(N) is evidence of creativity that cannot be achieved mechanically. This is possible only because our model of evolution/creativity utilizes an uncomputable Turing oracle.”
Chaitin’s theory of evolution need not detain us here but it requires the creation of novelties that are beyond the reach of a computer (“incomputable”). Essentially, a computer can only do mathematical operations. It can work with numbers, which can be subjected to mathematical operations, but not with abstractions. The trouble is, new ideas (novelties) are not typically produced by adding things up alone. That is, we don’t get new ideas simply by adding up a larger number of older ideas. New insights tend to be abstractions. They can be described but cannot be reduced to numbers or at any rate, not in the form in which they first appear. Suppose our idea is, “What if bats find their way around at night by bouncing sound waves off objects?” (echolocation). We can generate and compute many numbers while testing the idea but we cannot assign a number to the idea itself. If the question is not the result of a computation, a computer will not come up with it.
Computer pioneer Alan Turing called such a process (where the genesis of the idea can be described but not computed) an “oracle machine.” Eric Holloway offers a simple example: How do we know that the natural numbers (1, 2, 3…) go on forever, so that we will never come to the last number? One way is this: We can imagine the largest possible number (n), much larger than the number of all things that have ever existed. It’s pretty large, but anyone can come along and say “n plus 1” and name a larger number. Thus “the largest possible number” cannot itself be a number, and no computation can produce it. The natural numbers have a beginning but no end.
The computer, however, does not grasp this fact, Holloway says: “The machine will just think n+1 is the largest number, and repeat the cycle. It would get stuck on that one step, n+1, and keep doing the same step without end. It cannot realize that the process will never end, and avoid getting stuck in an endless loop.” We humans can infer this after one or two repeats, but a computer cannot. The incomputable process used by humans to infer the axiom of infinity is called a “halting oracle.”
The computer can be prevented from going into a loop by adding the axiom of infinity to the program; what the computer can’t do is independently abstract the fact that the natural numbers form a series with a beginning but no end. There is no way to compute that; it must be understood via insight into the nature of the problem, because the axiom of infinity cannot be derived; it can only be assumed as a starting premise.
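Holloway’s point can be made concrete. A naive program hunting for “the largest natural number” never halts, because a successor always exists; the only way to stop it is to build in a cap — a crude stand-in for the axiom of infinity, supplied from outside the search itself. (The function name and the cap are invented for this sketch.)

```python
def find_largest_number(max_steps=1000):
    """Naively search for a natural number with no greater successor."""
    n = 1
    for _ in range(max_steps):
        candidate = n + 1        # "n plus 1" always exists...
        if candidate <= n:       # ...so this check can never succeed
            return n
        n = candidate
    # The cap ran out. Giving up here encodes, from outside the search,
    # the insight the loop itself can never reach: there is no end.
    return None


print(find_largest_number())  # prints None: no largest number was found
```

The loop mechanically repeats the n+1 step; the recognition that the repetition is futile comes from the programmer who wrote the cap, not from the computation.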
How does this concern the tiny fly buzzing around the fruit bowl? Somehow, the fly has the ability to deal with comparatively fuzzy problems like “Is this Granny Smith apple enough like a Gala to be treated the same?” and “Was this pear here last week?”, which, the researchers say, are difficult to program into artificial neural networks. However the fly acquires these skills, to the extent that they involve novelty detection, they offer researchers a fascinating challenge.
* – Benjamin B. Thompson, Robert J. Marks II, Jai J. Choi, and Mohamed A. El-Sharkawi, “Implicit Learning in Autoencoder Novelty Assessment,” Proceedings of the 2002 International Joint Conference on Neural Networks, 2002 IEEE World Congress on Computational Intelligence, May 12–17, 2002, Honolulu, pp. 2878–2883; R. J. Streifel, R. J. Marks II, M. A. El-Sharkawi, and I. Kerszenbaum, “Detection of Shorted-Turns in the Field Winding of Turbine-Generator Rotors Using Novelty Detectors: Development and Field Test,” IEEE Transactions on Energy Conversion, vol. 11, no. 2, June 1996, pp. 312–317.
See also: Human intelligence as a halting oracle (Eric Holloway)
Has neuroscience disproved thinking? (Eric Holloway)