Can Probability Theory Be Used to Refute Evolution? (Part Two)
September 19, 2005
Arguments based on probability theory are a mainstay of creationist literature. There you can find elaborate calculations purporting to measure the probability that a given complex biological structure (an eye, say, or a hemoglobin molecule) could have evolved by natural processes. Such calculations invariably culminate in a tiny number, and from this number we are meant to conclude that evolution has been refuted.
We saw last time that all such arguments fail. There are two reasons for this. First, the probability of evolving a given biological structure over long periods of time is affected by so many immeasurable variables that there is no way of carrying out a meaningful calculation. Second, learning after the fact that something terribly improbable occurred provides no reason for inferring design.
But we also considered the possibility of enhancing our argument from improbability in the following way: While it is true that improbability by itself provides no reason for suspicion, it is possible that the combination of improbability with a clearly recognizable pattern does provide such a reason. Tossing one hundred heads in a row would make us suspicious in a way that tossing a random jumble of one hundred heads and tails would not, though the two sequences are equally improbable.
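To make the point concrete, here is a quick sanity check in Python (my own illustration, using the sequence length from the example above): any one particular sequence of one hundred fair tosses, whether all heads or a random jumble, has probability (1/2)^100.

```python
# Any *specific* sequence of 100 fair-coin tosses has probability (1/2)^100,
# so "all heads" and one fixed random-looking jumble are equally improbable.
from fractions import Fraction

p_all_heads = Fraction(1, 2) ** 100     # probability of HHH...H (100 heads)
p_random_jumble = Fraction(1, 2) ** 100  # probability of any one fixed H/T jumble

print(p_all_heads == p_random_jumble)   # True: the two are equally improbable
print(float(p_all_heads))               # roughly 8e-31
```

The suspicion aroused by all-heads therefore cannot come from its improbability alone, since the jumble is exactly as improbable.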
Consider another example. When we look at the face of a mountain we see a complex network of cracks and grooves cut into the rock. The particular network we see is no doubt terribly improbable. It has the shape it does only because of the long-term action of countless natural forces, and if any one of those forces had been slightly different we would have found ourselves confronted with a different network. Still, this by itself does not suggest intelligent design.
But now suppose the mountain we are examining is Mt. Rushmore. In this case the network of cracks and grooves is improbable in a very special way. For now we have an easily specifiable pattern; the cracks and grooves trace out an uncanny likeness of four United States Presidents. There is no way the mindless action of erosion and weathering could produce so detailed a likeness. Thus, the combination of improbability and specification seems to suggest intelligent design.
Improbability by itself does not suggest design, since any end result of a complex sequence of natural forces is surely very improbable. Specification by itself would likewise not indicate design, since very simple patterns result from natural forces all the time. But both together? That requires an explanation.
So far this is all very reasonable. After all, it really is obvious that Mt. Rushmore was the product of intelligent design. The tricky part comes when we attempt to apply this logic to evolution.
Let us return to the eye. As one endpoint of a four-billion-year evolutionary process, it was surely very unlikely that our particular eye would evolve. By itself that is not significant; improbable, yes, but something had to happen. But now consider that the eye is not some random conglomeration of molecules, but a finely honed machine that performs a definite function. Can we argue that, consequently, the eye must have been designed?
This argument figures prominently in the writings of ID proponents. Indeed, it is effectively their only scientific argument in favor of design (as opposed to against evolution). Their preferred example is the flagellum of E. coli. They argue that the flagellum is plainly improbable, and specified by its striking resemblance to the outboard motors humans use to power their boats. Consequently, the flagellum must have been designed.
But is it really so simple? There are several difficulties with this approach, but the most serious comes in distinguishing those patterns that indicate design from those that do not. You see, any event that occurs can, retroactively, be matched up with some arbitrarily determined pattern. Every person who has ever looked at a fluffy, cumulus cloud and fancied seeing a dragon is familiar with this phenomenon. In describing the flagellum as an outboard motor are we emulating the Mt. Rushmore example, or the cumulus cloud example? To make this sort of argument work, we must have some mechanism through which the genuine patterns can be distinguished from the phony ones.
To resolve this dilemma let us revisit the case of Mt. Rushmore. Previously we argued that it was the recognizability of the faces that indicated design, but that was not really correct. What suggests design is our prior experience with what mountains normally look like. There are mountains all over the world, and they all look more or less the same. It is this background experience that tells us that Mt. Rushmore is something strikingly different from the norm. We are skeptical that weathering and erosion can account for Mt. Rushmore because we have seen the effects of those forces on countless other mountains. Similarly, tossing one hundred heads in a row suggests a loaded coin not because it fits some easily describable pattern, but because we have a lot of experience with coins and know what to expect when we toss them.
That experience is precisely what is lacking in biological evolution. We have only one example of evolution to consider. We have no background experience that will allow us to say, “Usually, in the course of four billion years of evolution, we end up with nothing like the bacterial flagellum. Consequently, obtaining a flagellum in this case suggests design.” Absent this experience there is no way of distinguishing the design-suggesting patterns from the “something had to happen” patterns.
Proponents of ID have no basis for their claim that complex biological systems comprise patterns that suggest design. Can evolutionists do better from their end? Are there lines of evidence to suggest that the patterns we find in nature are precisely those we would expect from prolonged evolution by natural selection? Indeed there are, quite a few in fact. Allow me to mention just two.
The first comes from the structures of the systems themselves. Scientists have studied a great many complex biological systems, and in every case they find that, from an engineering standpoint, they make little sense. They appear to be cobbled together from many small modifications of simpler precursors, just as would be expected from a process of random variation sifted by natural selection. In this they differ markedly from the sorts of machines human designers build. In light of this, the vague analogy between, say, a flagellum and an outboard motor looks entirely too simplistic.
Most of the really important aspects of evolution by natural selection can be simulated on computers, and this leads to the second line of evidence. In recent years it has become routine for engineers to use genetic algorithms to solve practical design problems. The idea is to mimic the action of natural selection to find solutions that would evade even the most creative human engineers. The solutions found by such algorithms have much in common with their biological counterparts: they are frequently quite complex and functional, but also inefficient and vaguely Rube Goldberg-like. A closely related process occurs in artificial life experiments. Here, mutating computer programs play the role of biological organisms. They are selected for their ability to perform various tasks efficiently. It is routine in such experiments to observe the evolution of complex functionalities undreamed of by the human programmers who started the ball rolling. The patterns observed in such experiments are strikingly similar to those found in nature.
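For readers who want to see the idea in miniature, here is a toy genetic algorithm in Python (my own sketch, with an arbitrary target and arbitrary parameter settings; it is not one of the engineering applications described above). A population of random bit strings evolves toward a target string through nothing but random mutation and selection of the fitter half:

```python
# A minimal genetic-algorithm sketch: mutation plus selection, no foresight.
import random

random.seed(0)

TARGET = [1] * 20                   # the "task": evolve an all-ones bit string
POP_SIZE, MUTATION_RATE = 30, 0.02  # small, arbitrary settings for this sketch

def fitness(genome):
    """Number of bits that match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit independently with small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a completely random population.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(500):
    best = max(population, key=fitness)
    if fitness(best) == len(TARGET):
        break
    # Selection: the fitter half survives and reproduces (with mutation) to
    # refill the population; the current best is kept unchanged ("elitism").
    survivors = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [best] + [mutate(random.choice(survivors))
                           for _ in range(POP_SIZE - 1)]

print(generation, fitness(max(population, key=fitness)))
```

No individual step in this loop "knows" the target; the appearance of a fit solution emerges from blind variation filtered by selection, which is the point of the analogy.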
The conclusion is that anti-evolutionists have yet to devise an argument based on probability theory that has any merit at all. It is nearly certain they never will. Upon hearing a creationist mention probability in his argument, you can, in good conscience, ignore him.
The irony here is that the mathematical theory of probability is an indispensable tool for studying many aspects of evolution. Indeed, there are certain fundamental concepts in evolution that can only be understood via the language of probability theory.
Take “fitness” for example. In evolutionary terms organism A has higher fitness than organism B if A is likely to leave more offspring than B in the course of their lifetimes. We cannot say for certain that A will leave more offspring than B; after all, no matter how swift, strong, intelligent, and sexually attractive A is, it is still possible that he will be struck by lightning before reproducing. We can only say that A is likely to leave more offspring than B. Describing such likelihoods in a rigorous way is the job of probability theory.
Furthermore, evolution ultimately comes down to changes in gene frequencies, and genetics is a subject shot through with probability. The reason for this is not hard to spot. Your genes are a random sample of those possessed by your mother and father. Were you to catalog your parents' genes and attempt to predict which of them would end up in their children, you would find the effort futile. You would guess right about half the time and wrong about half the time, just as various principles of probability theory suggest.
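A quick simulation makes the point (my own illustration): a child receives one randomly chosen gene copy from each parent, so guessing which of a parent's two copies will be passed on succeeds about half the time.

```python
# Guessing which of a parent's two gene copies a child inherits: since the
# copy is chosen at random, a fixed guess is right about half the time.
import random

random.seed(1)

parent = ("A", "a")        # one parent carrying two distinct copies of a gene
trials = 100_000
correct = sum(random.choice(parent) == "A" for _ in range(trials))  # always guess "A"

print(correct / trials)    # close to 0.5
```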
Let us push this a little farther. Suppose we zero in on a particular gene in a large population of organisms, one that comes in two forms that we will label A and a. Every organism in the population will possess two copies of this gene (one from each parent). Consequently, we can say that every individual in the population will be of type AA, Aa or aa. In principle we could count the total number of organisms in the population, and likewise count the total number of occurrences of each form, or allele, of the gene. In this manner we could compute the probability that a randomly chosen gene will be of a particular form (A or a). Let us denote by P the probability that a randomly chosen gene will be A, and let Q denote the corresponding probability of choosing an a.
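In code, the bookkeeping looks like this (the genotype counts below are made up purely for illustration): each of the N organisms carries two copies of the gene, so there are 2N copies in all, and P and Q are simply the proportions of A's and a's among them.

```python
# Computing the allele probabilities P and Q from hypothetical genotype counts.
counts = {"AA": 360, "Aa": 480, "aa": 160}   # made-up counts for illustration

n_individuals = sum(counts.values())
total_copies = 2 * n_individuals             # every individual carries two copies
n_A = 2 * counts["AA"] + counts["Aa"]        # each AA holds two A's, each Aa one
n_a = 2 * counts["aa"] + counts["Aa"]

P = n_A / total_copies                       # probability a random copy is A
Q = n_a / total_copies                       # probability a random copy is a
print(P, Q)                                  # 0.6 and 0.4 for these counts
```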
Can we predict the values of P and Q in the next generation? Will the A allele appear with higher probability among the offspring of this generation, or with lower probability? Will the probability remain unchanged? And what about the a allele? Answering these questions would enable us to predict the short-term evolution of this population, at least with respect to this particular allele.
To find an answer, we begin by asking what sorts of environmental factors might affect these probabilities. Well, one possibility is that organisms of type AA prefer mating with their own type. This phenomenon is known as assortative mating, and to keep our model simple we will assume that it does not occur. More generally, we will assume that mating in our population is random with respect to this particular allele.
Another complicating factor is natural selection. If organisms possessing an A allele are more fit than those lacking it, then the frequency of the A allele will go up in the next generation. Again, to keep our model simple, we will disregard this possibility. That is, we will assume that neither allele has a selective advantage over the other.
Having made these assumptions, we conclude that the alleles present in the offspring will be a random sample of those present in the current generation. Basic probability theory is, therefore, sufficient for answering our question. It is easily shown that genotype AA will occur in the next generation with frequency P², genotype Aa will appear with frequency 2PQ, and genotype aa will appear with frequency Q². (This means that the probability that a randomly chosen individual from the second generation is of type AA will be P², and so on). This result is known as the Hardy-Weinberg law. The technical details of how this conclusion is obtained need not detain us here.
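Though the derivation is omitted here, the result is easy to check by simulation (my own sketch, with an arbitrary choice of P): draw each offspring's two alleles independently at random from the parental gene pool and tally the genotypes.

```python
# Checking the Hardy-Weinberg proportions by simulation: with random mating
# and no selection, genotypes should appear with frequencies P^2, 2PQ, Q^2.
import random

random.seed(2)

P = 0.7                 # frequency of allele A in the gene pool (Q = 1 - P)
offspring = 200_000

def random_allele():
    return "A" if random.random() < P else "a"

counts = {"AA": 0, "Aa": 0, "aa": 0}
for _ in range(offspring):
    # Each offspring draws two alleles independently; sorting maps "aA" to "Aa".
    genotype = "".join(sorted(random_allele() + random_allele()))
    counts[genotype] += 1

freqs = {g: c / offspring for g, c in counts.items()}
print(freqs)            # close to P^2 = 0.49, 2PQ = 0.42, Q^2 = 0.09
```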
Though this model is very simple, it finds a surprising number of applications. There are many real-world situations in which the frequencies of the three genotypes can actually be measured. If the frequencies match those predicted by the Hardy-Weinberg law then biologists can conclude that the assumptions they made (specifically, random mating and no selection) are valid in that case. By contrast, if the frequencies differ markedly from the expected values, then there is strong evidence that one of the assumptions does not hold. That could be a revealing fact about a population, and one that would doubtless inspire further research.
Models of the sort I just described belong to a branch of biology known as population genetics. Scientists working in this area have devised far more sophisticated models than this for short-term gene flow in populations, and these models are studied using techniques from statistics, probability, and other branches of mathematics.
For our purposes, however, what is relevant is the relative modesty of what we have done. We began with reasonable assumptions that are known to be true in a great many situations. We then focused on a single gene that comes in a mere two forms. By so restricting ourselves we ensured that our model dealt solely with quantities that could actually be measured. Our goal in applying the model was not to make sweeping generalizations about what is possible and what is not during four billion years of evolution. Instead, we simply wanted to determine if certain plausible assumptions held in one given situation. For real scientists, mathematics is just one tool among many used to solve mundane problems that arise in quotidian scientific work. It is not a device for drawing grand metaphysical conclusions.
There is an important lesson in that. A mathematical model is only as reliable as the assumptions upon which it is based. If a mountain of biological evidence says that evolution happened, but a back-of-the-envelope probability calculation says that evolution is impossible, then what you have is evidence that your calculation was based on faulty assumptions. But since creationists use mathematics primarily to create an illusion of scientific legitimacy, they find it easy to ignore such details.