We are told by Purdue University researchers that “many eye-popping findings that were based on this dataset and received high-profile recognition are false after all.” In this case, there was a simple error:
The Purdue team performed extensive tests over more than one year on the dataset, which recorded the brain activity of individuals viewing a series of images as part of a study. Each individual wore a cap with dozens of electrodes while viewing the images.
The Purdue team’s work is published in IEEE Transactions on Pattern Analysis and Machine Intelligence. The team received funding from the National Science Foundation.
“This measurement technique, known as electroencephalography or EEG, can provide information about brain activity that could, in principle, be used to read minds,” said Jeffrey Mark Siskind, professor of electrical and computer engineering in Purdue’s College of Engineering. “The problem is that they used EEG in a way that the dataset itself was contaminated. The study was conducted without randomizing the order of images, so the researchers were able to tell what image was being seen just by reading the timing and order information contained in EEG, instead of solving the real problem of decoding visual perception from the brain waves.” …
The Purdue researchers originally began questioning the dataset when they could not obtain similar outcomes from their own tests. That’s when they started analyzing the previous results and determined that a lack of randomization contaminated the dataset.

Purdue University, “Researchers uncover blind spots at the intersection of AI and neuroscience” at ScienceDaily (December 15, 2020)
The study requires a subscription.
What doesn’t require a subscription is the years of hype: “Mind Control: How EEG Devices Will Read Your Brain Waves And Change Your World” (2012), “Device that can literally read your mind invented by scientists” (2017), “Mind-reading A.I. analyzes your brain waves to guess what video you’re watching” (2019) … among many others.
For now: Read it and sleep.
Note: The problem with not randomizing the images is something like this: If you are the researcher, you may know that Image 10 is a picture of a dog. (Maybe you chose it yourself.) So everything you see in the volunteer’s brain is going to look to you like someone seeing a picture of a dog. Is that true? The only way to be sure there is a specific signal for seeing a dog is if you don’t actually know what the volunteer is seeing; you are only recording signals from the brain. Only later are the signals compared with the pictures. That way, neither you nor the volunteer knew what to expect.
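A minimal sketch (in Python, with made-up numbers, not the Purdue team’s data) of how this kind of contamination fools a classifier: the simulated “EEG” below contains no image-specific signal at all, only a slow drift over the session plus noise. When each image’s trials are grouped together in a block, a simple nearest-centroid classifier still “identifies” the image, because the block order lets it read the clock instead of the brain. Randomizing the presentation order removes that shortcut, and accuracy falls to chance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, trials_per_class, n_features = 5, 40, 8
n_trials = n_classes * trials_per_class

def simulate(randomize):
    # One "image" label per trial. Blocked order groups each image's
    # trials together; randomized order shuffles them across the session.
    labels = np.repeat(np.arange(n_classes), trials_per_class)
    if randomize:
        rng.shuffle(labels)
    # Features contain NO image-specific signal -- only a slow drift
    # over the session (electrode impedance, fatigue, etc.) plus noise.
    drift = np.linspace(0, 1, n_trials)[:, None] * np.ones(n_features)
    noise = rng.normal(scale=0.02, size=(n_trials, n_features))
    return drift + noise, labels

def nearest_centroid_accuracy(X, y):
    # Alternate trials go to train/test -- a split that does not
    # protect against temporal leakage.
    train, test = slice(0, None, 2), slice(1, None, 2)
    centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                          for c in range(n_classes)])
    dists = np.linalg.norm(X[test][:, None] - centroids[None], axis=2)
    return (dists.argmin(axis=1) == y[test]).mean()

X_blocked, y_blocked = simulate(randomize=False)
X_random, y_random = simulate(randomize=True)
print("blocked order accuracy:   ", nearest_centroid_accuracy(X_blocked, y_blocked))
print("randomized order accuracy:", nearest_centroid_accuracy(X_random, y_random))
```

In the blocked run the classifier scores far above chance (1 in 5) despite the features carrying no information about the images; that apparent “mind reading” is exactly the artifact the Purdue team describes.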
You may also enjoy: How far has AI mind reading come? Further than we may think. And some trends are troubling. On the one hand, high tech can help the blind see and amputees feel. It’s not a miracle; the human nervous system can work with electronic information. That said, one AI ethics analyst notes, we are heading complacently into a world of brain transparency with no protections in place. (But maybe a lot of it isn’t true anyway.)