
Is GMO Detection an Application of Dembski’s Explanatory Filter?

If so, it would be an instance of the use of the filter in biology

Have you ever heard people say that intelligent design (ID) theory has never been applied to biology? They are wrong! In fact, it is applied frequently in the very important field of detecting genetically modified organisms (GMOs). “A genetically modified organism contains DNA that has been altered using genetic engineering.” (National Geographic) Detection can trace the use of GMOs, now frequent in our food supply, so that products can be recalled if there is a problem or if people just don’t want to use GMO products.

GMOs are intelligently designed biological organisms, and scientists use design theorist William Dembski’s explanatory filter to detect GMOs.

My claim is a bit daring, perhaps alarming for some people. Maybe I’m stretching the definition of Dembski’s filter merely to make a point. Dembski published his explanatory filter in the 1990s, and GMO detection techniques appeared around the same time, so it is unlikely these techniques were directly or indirectly inspired by his filter. But researchers could in fact be using the filter without knowing it.

So let’s break my claim down and see whether GMO detection is an honest-to-goodness application of the explanatory filter.

To begin, let’s cite Dembski’s filter from his book The Design Inference (Cambridge University Press, 2006), Section 2.1.

The first step in using the design filter is to eliminate events of high probability. These are events that are expected to almost certainly occur. If the Road Runner pushes an anvil onto Wile E. Coyote’s head, it is almost certain to raise a large bump and cause small stars to orbit Wile E’s head.

Dembski: “To say that [event] E is highly probable is to say that given the relevant antecedent circumstances, E will for all practical purposes always happen.”

The second step is to eliminate events of intermediate probability. These events are somewhat rare but still probable enough that they can happen from time to time through the luck of the draw. An example would be two people meeting in a small crowd and sharing the same birthday. With 365 days to choose from, this seems too unlikely to occur by chance. But due to the Birthday Paradox, you only need a gathering of 23 people to have a greater than 50% chance that two of them share a birth date.
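The 23-person figure is easy to verify by computing the chance that all the birthdays are distinct. Here is a minimal Python sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
from math import prod

def birthday_collision_prob(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    # Chance that all n birthdays are distinct: (365/365) * (364/365) * ...
    p_all_distinct = prod((days - k) / days for k in range(n))
    return 1.0 - p_all_distinct

print(round(birthday_collision_prob(23), 3))  # 0.507 -- just past the 50% mark
```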

Dembski: “Events of intermediate probability, or what I’m calling IP events, are the events we reasonably expect to occur by chance in the ordinary circumstances of life.”

But now what if we have an event of small probability, one that is unlikely to ever occur by chance, even if we wait until the heat death of the universe? The surprising thing is such events are actually very common.

For example, any specific sequence of a thousand fair coin flips is so improbable that we should expect it to occur only once in the universe’s lifetime. The fact that we can generate many such sequences of impossibly small probability very quickly is the key to modern cryptography, privacy, and communications technology. So, very small probability alone is not enough to demonstrate design. What sets an event apart as intelligently designed is the third step in the explanatory filter: specification.
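To see how cheap such “impossible” events are to produce, here is a small Python sketch. It uses the standard `secrets` module, Python’s built-in source of cryptographic-grade randomness:

```python
import secrets
from math import log10

# One fresh outcome of 1000 fair coin flips (1 = heads, 0 = tails).
flips = f"{secrets.randbits(1000):01000b}"
print(flips[:40] + "...")

# The probability of hitting this exact sequence by chance is 2**-1000,
# roughly 10**-301 -- yet producing it took microseconds.
print(f"P(exact sequence) ~ 10^{1000 * log10(0.5):.0f}")
```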

Dembski: “Specifications are common in statistics, where they are known as rejection regions. The basic problem of statistics is the testing of probabilistic hypotheses. Statisticians are continually confronted with situations in which some probability distribution is assumed to be operating, and then given the task of determining whether this probability distribution actually is operating. To do this statisticians set up a rejection region and then take a sample. If the observed sample falls within the rejection region, the presumption is that the probability distribution in question was not operating to produce the sample. On the other hand, if the observed sample falls outside the rejection region, the probability distribution in question is taken, at least provisionally, as adequate to account for the sample.”

The specification step has one very important qualification. In order for an event to be specified, it must be concisely described by an external source. It is essential that this external source be “detachable.” What does detachability mean? It means that the source must be derived independently of the event being examined.

Here’s an example of a specification that is not detachable. A student fills out a multiple-choice exam using the answer key and is then graded against that same answer key. The very process used to generate the test solution is the process used to grade it, so the key is not independent of the solution. On the other hand, if the student fills out the test using the material learned in the course, the answer key is detachable from the test. It then becomes a valid specification for detecting intelligently designed answers, as opposed to answers the student filled in at random.
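A small sketch makes the point. When the answers are copied from the key, a perfect score is guaranteed and tells us nothing; when the key is independent of how the answers were produced, it genuinely discriminates knowledge from chance (the key values here are invented for illustration):

```python
import random

ANSWER_KEY = ["B", "C", "A", "D", "B", "A", "C", "D", "A", "B"]

def grade(answers, key=ANSWER_KEY):
    """Fraction of answers that match the key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

copied = list(ANSWER_KEY)  # filled out *from* the key: not detachable
guesses = [random.choice("ABCD") for _ in ANSWER_KEY]

print(grade(copied))   # 1.0 -- guaranteed, so it carries no information
print(grade(guesses))  # ~0.25 on average -- the detached key exposes chance
```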

As you see, each of these steps in the filter is cashed out in rigorous mathematical terms. There is no room for fudging when applying the filter. Therefore, if we identify a process that follows these steps, we have identified an application of Dembski’s explanatory filter.
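To make the three steps concrete, here is a toy rendering of the filter as a single function. The numeric cutoffs are my own illustrative placeholders, not bounds Dembski himself proposes:

```python
def explanatory_filter(prob: float, has_detachable_spec: bool) -> str:
    """A toy, three-step rendering of the explanatory filter.
    HIGH and SMALL are illustrative cutoffs, not Dembski's own bounds."""
    HIGH = 0.1     # step 1: events at least this probable are expected
    SMALL = 1e-50  # step 3 threshold: beyond the practical reach of chance
    if prob >= HIGH:
        return "regularity (high probability event)"
    if prob > SMALL:
        return "chance (intermediate probability event)"
    if has_detachable_spec:
        return "design (small probability + detachable specification)"
    return "chance (small probability, but no specification)"

print(explanatory_filter(0.99, False))     # regularity
print(explanatory_filter(1e-3, False))     # chance
print(explanatory_filter(2**-1000, True))  # design
```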

So let us look again at the process for identifying GMOs. At a high level, the process is to look for matches of known GMO DNA sequences in the DNA sequences extracted from an organism. This process does not look quite the same as the filter at first glance. It appears to consist of only a single step, compared to the filter’s three steps. However, if we look at the single step carefully, we can see it is indeed an application of the filter. The relationship becomes clear when we note that not just any GMO sequence will do.
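Sketched in Python, the high-level matching step looks something like this. The marker names echo sequences commonly screened for (the CaMV 35S promoter and NOS terminator), but the letter strings are invented stand-ins, not the true fragments:

```python
# Hypothetical marker database: the names are real screening targets, but
# the sequences below are made up for illustration.
GMO_MARKERS = {
    "CaMV-35S-promoter-frag": "GATGACGCACAATCCCACTATCCTTC",
    "NOS-terminator-frag":    "GAATCCTGTTGCCGGTCTTGCGATGA",
}

def find_gmo_markers(genome: str) -> list[str]:
    """Return the name of every known marker found in the extracted DNA."""
    return [name for name, marker in GMO_MARKERS.items() if marker in genome]

sample = "TTACG" + GMO_MARKERS["NOS-terminator-frag"] + "GGCAT"
print(find_gmo_markers(sample))  # ['NOS-terminator-frag']
```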

First, if the sequence is common to all organisms, then it is a near certainty the sequence will be found. Thus it will fail to discriminate between GMOs and non-GMOs. Restricting the search to uncommon sequences is an application of the first step of the explanatory filter: elimination of events of high probability.

Second, if the sequence is too short, then it will be found by random chance merely by looking at a large enough collection of DNA samples. Trivially, if the sequence is a single base pair (either A, T, G, or C), then we will find it in any DNA sample. If the sequence is 10 base pairs long, then by checking a million positions of DNA we may expect to find at least one occurrence by luck alone. Such sequences can occur purely through chance, so they are events of intermediate probability. To avoid the possibility of a match by chance, the GMO sequences must be fairly long. That is an application of the second step of the explanatory filter: eliminating events of intermediate probability.
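Under the simplifying assumption that each base of non-GMO DNA is an independent, uniform draw from {A, T, G, C} (real genomes are not this random, but the assumption makes the scaling clear), the expected number of chance matches is easy to compute:

```python
def expected_chance_matches(positions: int, length: int) -> float:
    """Expected chance occurrences of one specific length-L sequence when
    scanning `positions` starting points of uniform random DNA."""
    return positions * 0.25 ** length

print(expected_chance_matches(1_000_000, 1))   # 250000.0 -- a single base is everywhere
print(expected_chance_matches(1_000_000, 10))  # ~0.95 -- short markers match by luck
print(expected_chance_matches(1_000_000, 30))  # ~8.7e-13 -- effectively never
```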

What of the third and final step: specification? This is achieved by the match against the DNA sequence database derived from GMOs. In this case, the specification is the identification number of the matching sequence. If the database is large, then the identification number will be large, and will consequently not be concise enough to signify intelligent design. On the other hand, if the database is small, then the number will also be small and will be a concise specification.
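One way to render this conciseness point is description length: naming entry k in a database of N sequences costs about log2(N) bits, while a length-L DNA match represents about 2L bits of improbability, since each base is one of four. The comparison below is my illustrative formalization, not a formula from the GMO literature:

```python
from math import log2

def spec_is_concise(db_size: int, match_length: int) -> bool:
    """True when the specification (a database ID) is much shorter, in bits,
    than the improbability of the matched sequence."""
    spec_bits = log2(db_size)      # bits needed to name the matching entry
    event_bits = 2 * match_length  # bits of improbability in the match
    return spec_bits < event_bits

print(spec_is_concise(db_size=100_000, match_length=30))  # True: ~16.6 bits vs 60
```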

But what of the detachability criterion for the specification data source? This is the crucial point where intelligent design comes into the picture. We know that the GMO DNA sequences have been intelligently designed by humans and are consequently generated by a very different process from the one that created natural organism DNA sequences.

Thus, when we detect a match, we know the match is not due to the process that created natural organisms, and the specification data source is detachable.

With that, we see there is a direct one-to-one mapping between Dembski’s explanatory filter and the process of detecting GMOs. As such, GMO detection counts as a real-world, and very important, application of Dembski’s explanatory filter.


You may also enjoy this article by Eric Holloway: Does information theory support design in nature? William Dembski makes a convincing case, using accepted information theory principles relevant to computer science.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
