Complex Specified Information Without an Intelligent Source

Meyer claims that specified complex information can only arise from an intelligent source, justifying that claim by citing a series of examples. One of those examples is computer code. In my previous post, I suggested that this was not an adequate example because of fundamental differences between computer code and DNA information. An obvious question is whether there is an example of specified complex information that is not derived from an intelligent source but solely from physical or chemical functionality. In this post I would like to offer just such an example.

The magnificent example of antibodies was presented by Dr. Craig Story in the December 2009 issue of Perspectives on Science and Christian Faith, Vol. 61, No. 4, p. 221. (If you aren’t a member or don’t have a subscription, copies are available from the ASA office for $10 plus shipping and handling.) In his article, Craig explains how the immune system works, focusing on the importance of the inherent randomness in the process. In this post, I would like to offer a physicist’s interpretation of his paper, with a focus on the information content. Craig has graciously reviewed these comments and corrected my errors in biology.

Stem cells in our bone marrow continuously produce a population of pre-B cells, so called because they are precursors to B cells, which manufacture antibodies when mature. These pre-B cells are all identical and have the same antibody gene DNA. This population therefore has a relatively low information content. All the complexity is within the cell and there is no diversity in the population of cells. As the pre-B cell population prepares to move into the body, the cells undergo a transition into B cells. In the process, key segments of DNA in each cell are rearranged randomly to form a unique and novel DNA sequence. The process is described in detail in Craig’s paper. It is a constrained process, so the resulting antibody protein always has a particular folded configuration that may have affinity to an antigen, but the gene segments are randomly rearranged and joined to alter the magnitude of the affinity. The result is a population of B cells, each one of which is different in terms of its antibody DNA. This means that we have a transformation of a low-information population of pre-B cells to a high-information population of B cells, with reference to their antigen-binding abilities. The complexity has increased dramatically, but we do not yet have specificity.
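
The jump from a homogeneous to a diverse population can be made concrete with a toy model. This is a minimal sketch, not real immunology: the segment pools are tiny stand-ins for the actual V, D, and J gene segments, and the entropy calculation simply measures the diversity of genotypes in the population.

```python
import math
import random

# Hypothetical gene-segment pools; the real V, D, and J pools are far larger.
V_SEGMENTS = [f"V{i}" for i in range(10)]
D_SEGMENTS = [f"D{i}" for i in range(5)]
J_SEGMENTS = [f"J{i}" for i in range(4)]

def make_pre_b_population(n):
    """Pre-B cells are identical: every cell carries the same unrearranged DNA."""
    return ["germline"] * n

def mature_to_b_cells(pre_b):
    """Each cell independently and randomly joins one V, one D, and one J segment."""
    return [
        "-".join([random.choice(V_SEGMENTS),
                  random.choice(D_SEGMENTS),
                  random.choice(J_SEGMENTS)])
        for _ in pre_b
    ]

def population_entropy(cells):
    """Shannon entropy (bits) of the distribution of genotypes in the population."""
    counts = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    n = len(cells)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

random.seed(0)
pre_b = make_pre_b_population(1000)
b_cells = mature_to_b_cells(pre_b)

print(population_entropy(pre_b))    # 0.0 bits: a homogeneous population
print(population_entropy(b_cells))  # several bits: a diverse population
```

The point of the sketch is only that a fixed, simple random rearrangement rule transforms a zero-entropy population into a high-entropy one; no information about any particular antigen is used.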

As a B cell moves through the body, it may or may not encounter an antigen with which it has affinity. If it does not, the B cell dies and that particular configuration no longer exists in the body. However, if an antigen appears with which a B cell has some degree of affinity, the B cell will attach to the antigen. In this case, that B cell will reproduce through cell division to create clones of itself. This process occurs throughout the population of B cells with the result that only B cells with some affinity to the environment of antigens survive. This is a basic level of specificity.

There is another level of specificity that Craig describes. A first-responder B cell usually will have a relatively small degree of affinity to an antigen. As this cell reproduces itself, an enzyme enhances the mutation rate of only the portion of the antibody genes that determines the affinity. In some cases, mutation rates can reach as much as one nucleotide per cell division. This means that the subpopulation of this particular B cell grows with a dynamic diversity of various degrees of affinity to that antigen. The cells with the strongest degree of affinity will preferentially attach to the antigens, leaving those with weaker affinity without antigens and therefore a death sentence. Over time, this subpopulation will be predominantly one with strong affinity to this particular antigen. This, in a nutshell, is why vaccines work.
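
The selection-plus-hypermutation loop described above can be sketched as a simple simulation. The affinity scale, mutation size, and selection rule here are illustrative assumptions, not measured biology; the sketch only shows that repeated random mutation plus survival of the strongest binders drives mean affinity upward.

```python
import random

random.seed(1)

# One low-affinity first-responder clone; affinity is an arbitrary score.
population = [0.1] * 20

def divide_with_hypermutation(affinity):
    """Daughter cell: somatic hypermutation nudges affinity up or down at random."""
    return max(0.0, affinity + random.gauss(0.0, 0.05))

def select(cells, keep=20):
    """Cells with the strongest antigen binding survive; the rest die off."""
    return sorted(cells, reverse=True)[:keep]

for generation in range(30):
    daughters = [divide_with_hypermutation(a) for a in population for _ in range(2)]
    population = select(daughters)

mean_affinity = sum(population) / len(population)
print(round(mean_affinity, 3))  # well above the starting affinity of 0.1
```

Note that the mutations themselves are undirected; the increase in affinity comes entirely from the differential survival imposed by the antigen, which is the "fine-tuning" step in the article.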

In the bigger picture, this example shows how a homogeneous population of pre-B cells is transformed to a dynamically diverse population of B cells, with a tremendous increase in information content. This complex information then becomes highly specified by fine-tuning to match the antigens to which they are presented. The result is a high degree of specificity and complexity with no involvement of an intelligent designer as an immediate cause. This does not, of course, preclude the sustaining involvement of an Intelligent Designer at a metaphysical level.

Craig points out the critical role of randomness as a key characteristic of the cellular processes involved in the immune system. The random process of gene rearrangement is necessary to ensure a sufficiently broad range of binding specificities, such that some of them are almost sure to bind to one part of each pathogen. His example also illustrates clearly how highly complex and highly specified information is derived directly from a population of relatively low-information cells. Hence, the argument that Meyer makes that all complex specified information comes from an intelligent source does not withstand scrutiny.

The antibody example is a beautiful illustration of the basic processes of evolution. It begins with the common ancestry of the stem cells that produce an ancestral population of pre-B cells that are essentially identical. Descent with random variability occurs in the generation of the B cells, which are all unique with respect to their antibody gene DNA. Natural selection describes the way in which B cells that do not bind to an antigen will die while those that do bind to an antigen proceed to reproduce clones. The random variability of the dynamically diverse population of antibodies ensures the formation, within a short period of time, of antibodies with affinity to virtually any antigen. The subsequent way in which those B cells acquire stronger affinity to that antigen is a type of adaptation. Darwin suggested that these basic processes, operating over a long period of time, could account for the origin of species. Little did he suspect that these very processes are active continuously in our bodies on a relatively short time scale to provide a vital line of immunological defense.

240 comments to Complex Specified Information Without an Intelligent Source

  • Ide Trotter

    Jon, I beg to differ. I can’t claim responsibility for introducing “potential CSI” in the sense I believe you are trying to use it.  I believe that back on May 26 you inquired, “Isn’t there a difference between potential information and actual information (or potential complexity or specificity)?”  Prior to that Randy addressed the “potential” or “capability of producing” that information.  To the extent that that “information” is CSI this thread is looking for just that. What has the potential to produce CSI other than intelligent agency?  Once there is lots of CSI around Randy says evolution can do it.  I keep suggesting that even if this is true it begs the question of what process produced the CSI Randy’s process had to start from.
    I really don’t know what your “potential CSI” might be. Are you suggesting it is “information in a system” that is not CSI but somehow has the potential to turn into CSI?  In any case I don’t think it is what Randy and I and the thread are wrestling with.  I granted that some feel CSI might somehow “emerge.” I argue that to make that notion have traction there must be some suggestion as to what CSI emerged from, how CSI emerged, and how nature held off 11 billion years before pulling the trigger.  If we can answer “what CSI emerged from, how CSI emerged” out of natural causes, we will have answered the question of “CSI Without an Intelligent Source.”
    Maybe you think our problem is that “the definition of the information being considered keeps shifting.” I think the problem is that we don’t have a clue how CSI, either CSID, only needing an improbable pattern but not limited to patterns only, or CSIM, functional, is produced by random process.  What difference does the definition make?  The CSI that is important is CSIM, the information that has the potential to make something happen.
    As to your “comment about the fractal algorithm that remains simple and yet generates complex structures without additional intelligent agency.”  Yes it does but however complex the structures may become they are not specified.  Any CSI involved is that embedded in the algorithm.

    • Randy Isaac

      Ide, sorry for the delay. This is a very busy week but I’ll do what I can.
      First, on your previous comment on Craig Venter. Perhaps you missed my earlier remark “Venter notwithstanding” or you missed Terry Gray’s post on our Voices blog. I would have thought you would have been the first to point out the pre-existing living cell!
      And as far as your question goes about Francis Collins agreeing with me, I would say absolutely yes. I have said repeatedly that we all believe in an Intelligent Designer who created all things, using the evolutionary processes to do so.

      I’m amazed how you seemed to remove all orthogonality in the details of our discussion but still cling to your conclusion. In any case, though you claimed to agree with my simplified text example, I think you didn’t grasp the significance. Let me explain. Here’s the set of five text examples again:

      1. TAHT
      3. NIEM  THAT KEAM

      Much confusion is caused by conflating three different types of information. I think you understand some of it.
      1. The technical definition of information: the log of the number of possible states, or the log of the inverse of the probability of a system being in a particular state. Clearly by that measure, item 1 is the simplest and the other four have twice as much information and are all equal in the amount of information. This answers the question “how much information is inherently in the system?”
      2. Kolmogorov information deals with compressibility. It answers the question “What is the minimum amount of information required to express a particular state of the system?” By this measure, item 2 is a repetitive text and can therefore be expressed with less information than items 3, 4, and 5. Note that this amount of information can change dramatically when any character is changed, even though the total information in the system stays the same. I would suggest that your algorithm discussion belongs here. If a fractal display can be expressed in a simple algorithm, that’s an example of simplifying a set of information. It is Kolmogorov information. By the way, if such a fractal-generating algorithm involves a random number then it is not reproducible and the full result cannot be reduced to the simpler algorithm. Again, note the analogy to the antibody example. The randomness in the rearrangement of the DNA guarantees that the algorithm in the initial population does not contain the information that is found in the final set.
      3. Specificity answers the question “What is the significance of the particular state of the system?” Meyer points out that this means it maps to some independent pattern. In other words, it is not a quantitative entity. It is a relationship. It is relative and involves a comparison with some reference. In this case, it is a dictionary. Which dictionary do we choose? The degree of specificity changes from 3 to 4 to 5 depending on what language you agree on. If it is English, then the words in item 4 belong in the dictionary but they don’t make sense together. Item 5 makes a little more sense, so specificity increases. It is fundamentally relative. Applying this to the effect of the antigen on the antibodies, it should be clear that there isn’t an external quantity of CSI to be brought in. It is simply a physical functionality whereby the B cells produce antibodies that match an antigen and they thrive. With, as Chuck points out, the aid of T cells and other things that the biologists know a lot about.
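
The first two measures can be made concrete. In this sketch the strings are illustrative stand-ins for the text examples (only items 1 and 3 survive above, so these are not the original five): per-letter information is taken as log2(26) bits regardless of content, and zlib's compressed output size serves as a rough practical proxy for Kolmogorov complexity.

```python
import math
import random
import string
import zlib

def shannon_bits(text):
    """State-count information: log2(26) bits per letter, independent of content."""
    return sum(1 for ch in text if ch.isalpha()) * math.log2(26)

def compressed_size(text):
    """zlib output length: a rough, practical proxy for Kolmogorov complexity."""
    return len(zlib.compress(text.encode()))

random.seed(0)
repetitive = "THAT" * 50                                    # like item 2: pure repetition
scrambled = "".join(random.choice(string.ascii_uppercase)   # non-repetitive text
                    for _ in range(200))

# Same length, so the same amount of "technical" (state-count) information...
assert shannon_bits(repetitive) == shannon_bits(scrambled)
# ...but the repetitive text compresses to far fewer bytes.
print(compressed_size(repetitive), compressed_size(scrambled))
```

The third measure, specificity, deliberately has no counterpart in this sketch: matching a dictionary or a meaning is a relationship to an external reference, not a quantity computable from the string alone, which is exactly the point of item 3.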

      Note that this discussion is entirely independent of how the text came into being. It may have come from monkeys typing on a typewriter or a random character generator or whatever. The above characterization is still the same. No process need be known; it applies to all possible processes.

      I think it is important to understand that specificity is not a quantitative entity that can be “imported”. Meyer claims there are two types of specificity. The matching of an independent pattern involves abstraction which can only be done by an intelligent agent. His second type is functionality. I argue that this does not require intelligence. I think the text example illustrates that. Language is inherently abstract and the only way to ascertain whether the text meets a dictionary definition is by an intelligent agent. There is no physical function (other than something created by an intelligent agent) that can make the distinction. On the other hand, physical functionality can be distinguished by non-abstract means–the success of its survival, for example–and therefore does not require intelligence.

      In summary, the idea of CSI being imported or somehow brought in is not a valid perspective.

      You suggested, “Now, can’t we at least defer the antibody example?  This thread is about CSI without an intelligent source.  Just that, pure and simple.”
      I would suggest, however, that we do not have the option of ignoring the antibody example. It is real experimental, observational data and is the core of this thread. CSI can be ignored because it is a conceptual device without any physical existence. It has some convenient aspects to it and is somewhat useful but it can also get confusing when someone tries to suggest a “conservation of CSI” principle which simply doesn’t exist. There is nothing to conserve. There are relationships which intelligent agents can generate and perceive but those relationships are not conserved quantities.

      The real point about the antibody example is that it shows how evolutionary processes can indeed generate, without any (physical) intelligent agent, the kind of change that a supposed  “conservation of information” principle would say is impossible.

      Does that adequately confuse things further??


  • Charles Austerberry

    I wish Ide understood and/or valued what Randy was trying to communicate with his text character string example from May 22, but patience would be required for the step-by-step approach Randy was attempting to initiate.
    Jon, Randy, and Ide have an interesting discussion going here recently.
    To further explore the original example (antibodies) that Randy used to start this thread, a similar thread has begun at

  • Ide Trotter

    First let me compliment Jon and Randy on their May 29 and 30 exchanges.  I found them most helpful.  And Chuck makes a reasonable request in saying, “I wish Ide understood and/or valued what Randy was trying to communicate with his text character string example from May 22.”
    So to make the discussion easier to follow here is Randy’s from May 22
    “May I ask you to consider the CSI from a state point of view and not a process point of view first, just to clarify the situation. Let’s put all “processes” in a black box. Now consider the following series of text:
    1. TAHT
    Again, we know nothing of the processes that might lead from one step to another. Analyzing just those five different steps, what do we learn? From 1 to 2 there is an increase in total information from a physical complexity point of view but not much in significance. There is also an increase in information (at least from a Kolmogorov perspective) in going from 2 to 3. Going from 3 to 4 is an increase in specificity since now the collection of letters conforms to some in a dictionary. And going from 3 to 4 is another increase in specificity since it corresponds to some meaning and signficance. Of course all of this specificity requires a reference to an external information source such as a dictionary and a knowledge of the English language. But, however it happened, there is an increase in CSI, correct?
    1. TAHT
    Again, we know nothing of the processes that might lead from one step to another. Analyzing just those five different steps, what do we learn? From 1 to 2 there is an increase in total information from a physical complexity point of view but not much in significance. There is also an increase in information (at least from a Kolmogorov perspective) in going from 2 to 3. Going from 3 to 4 is an increase in specificity since now the collection of letters conforms to some in a dictionary. And going from 4 to 5 is another increase in specificity since it corresponds to some meaning and significance. Of course all of this specificity requires a reference to an external information source such as a dictionary and a knowledge of the English language. But, however it happened, there is an increase in CSI, correct?
    By exactly the same analysis, going from pre-B cells to antibodies is an increase in CSI. Can we agree on that?”
    I apologize for not making it clear that I completely agreed with Randy’s illustration of increases in both raw information and complexity.  I focused on the first sentence of the paragraph and stand by my statement “We both know EXACTLY the process that DID both initiate the first step and lead on.  You typed it into your computer.  That seems to me a perfect example demonstrating the validity of the ID inference.  So far the only known source of CSI is intelligent agency.”
    But I still can’t see how the analogy helps much with the assertion that “pre-B cells to antibodies is an increase in CSI.”  It still seems to me there is just too much CSI sloshing around in that example to make a strong case that what we see in the undeniable antibody production process is truly an increase in “global” CSI.
    Possibly someone can come at that from a slightly different tack to convince me that external CSI could not have been involved in the antibody production process.
    Carry on,

  • Charles Austerberry

    Thanks, Ide, for giving Randy’s example another chance.  Just to clarify, Ide.  I previously took your position to mean that whether CSI increases or not in the B cells, because the B cells start with enormous CSI in the context of the vertebrate immune system, any non-designed (naturally caused) increase in CSI through the B cell maturation process would be relatively trivial.  But now (perhaps because I re-read Dembski’s paper) I think I hear you saying something else.  I think you are assuming the truth of Dembski’s Law of Conservation of Information.  In other words, if CSI increases in the population of B cells through a natural process, CSI must decrease by an equal amount somewhere else in the “system” as a whole.  And, since we cannot measure all the changes in CSI, the system is “intractable.”  Hence, you focus on the very beginnings of the universe and on the very beginnings of life on earth, when the form and amount of CSI was simpler and measurable, at least in theory.  Am I more correctly understanding your position now?

  • Ide Trotter

    Yes, Chuck, you’ve put it just the way I see it.  I think it is easier to get our thinking around the issue by a “focus on the very beginnings of the universe” and probably for 10 or so billion years after. But I don’t think it necessary to go all the way to the “very beginnings of life on earth, when the form and amount of CSI was simpler and measurable, at least in theory.”  As I believe I have stated before, you can introduce lots of specified information/structure into a protein-like molecule and still not have any biological function.  So I’m stopping well short of life.
    My reluctance to buy the antibody demonstration is not so much, “because the B cells start with enormous CSI in the context of the vertebrate immune system.”  Going back to Randy’s view that, “we have a transformation of a low information population of pre-B cells to a high information population of B cells,” I accept that the Pre B and B cell populations just are what they are.  My feeling is that any information increase between the two is, or at least it needs to be demonstrated that it isn’t, due to what you might consider an intelligent selection function exercised by the CSI rich environment.
    As to conservation of information, I haven’t looked at it for a while but my recollection is that it only applies to closed systems.  If input from outside occurs it could bring CSI into the system.
    Hope that helps,

  • Charles Austerberry

    Yes it helps … thanks.
    If the change in the B cell population is an example of “an intelligent selection function exercised by the CSI-rich environment”  might not the same be true for the change in (the evolution of) the CSI-rich environment (the vertebrate immune system) in which the B cells mature?  And might not the same be said for the evolution of the CSI-rich pre-vertebrate  environment in which vertebrates evolved?   If I understand, your approach is more akin to evolutionary creation/theistic evolution than to Michael Behe’s notion of irreducible complexity.  Where Behe sees a limit or edge to evolution (so new CSI must be introduced by an intelligent designer for each significant novel function to evolve), perhaps your rich “sea” full of CSI previously provided by an intelligent designer allows novel functions to evolve via random mutation and natural selection?

  • Ide Trotter

    Good….you’re welcome, Chuck.
    Indeed the same might be said once the CSI rich environment is in place.  But how did a CSI rich environment get there in the first place?
    As to irreducible complexity, I believe that is a case yet to be fully made.  However the evidence is stacking up that there is an edge to evolution’s capacity to produce viable change.  Behe’s “The Edge of Evolution” takes sort of a top down look that seems rather compelling.  From the bottom up the work of Axe on the distribution of biological function in sequence space and the work of Ralph Seelke and Richard Sternberg showing evolution prefers to take an energy conserving but function-reducing detour before it can find a two-step prefabricated adaptive path also seem to be supporting the case.
    To expand on my view let me return to the question about the source of the first additional increments of CSI beyond the big bang budget. Let’s just say we give up on that and accept what I argue is the most probable inference based on current knowledge.  Intelligent agency might have been involved at that step.  Unless the first going whatever, protein or you name it, is imbued with CSI production capability we are again faced with the most probable source of the next increment of CSI being intelligent agency.  And so forth. At some point that process is no longer the most probable inference since it has been established that a CSI rich environment can do just what Randy argues for it.
    What do you think?

  • Charles Austerberry

    Ide, I note that on May 3rd 2010 you wrote at

    “I don’t see origin as a TE/EC vs. ID issue at all.  To my way of thinking TE/EC considerations don’t come into play until after the first replicator is somehow established.  However this is not to say that information source questions do not come into play as the TE/EC mechanism, whatever it may be, progresses to higher levels of complexity. Therein may lie the resolution of the question I can’t get out of my head. On what a priori basis can one differentiate between common descent and common design?”
    I assumed you wanted to know how to distinguish common descent from separate creations of species by a common designer.  So, in my recent reply on BioLogos I suggested human chromosome #2 compared to syntenic chromosomes in great apes.  This is discussed in many places, including:
    Certainly one could hold (not on a scientific basis, but from a faith perspective) that the fusion which created human chromosome #2 was designed. In any case, the human chromosome #2 structure provides powerful evidence for common ancestry, whether or not one views the history of human evolution to have been designed.  Related evidence comes from pseudogenes, which are discussed in a couple of threads at BioLogos:
    But now I wonder if in your question on BioLogos you were asking how to distinguish designed from non-designed common ancestry?  That might fit better with your reference to the TE/EC vs. ID distinction, because with the version of ID theory that accepts common ancestry, perhaps the only distinction between TE/EC and ID is whether or not the design is a scientific conclusion.  Whether TE/EC or ID, we all believe in a Creator, and what we all see through science is compatible with that belief.
    So to summarize, here is how I now understand your position (and I take it that to you, the following are scientific conclusions, which puts you in the ID rather than TE/EC camp):
    1) Before the first replicator (life) began, there was already CSI provided by an intelligent designer, but not enough CSI to get life started.
    2) Another large input of CSI was provided by an intelligent designer to start life.
    3) Once life began and its CSI was naturally replicating (“increasing” in a sense, but not in the sense of truly new net CSI being generated), soon so much CSI was “sloshing around” that some of it could be rearranged to give new functions (such as functional populations of B cells), organisms, etc. But unless new CSI comes into the system from an intelligent designer (who may be outside the system), random mutation and natural selection can only reshuffle existing CSI.  As Dembski asserted, neither random mutation, nor natural selection, nor a combination of the two, can generate new net CSI.

    Again, corrections and clarifications of my understanding of your position would be appreciated.

  • Ide Trotter

    Thanks for a very thoughtful and helpful piece.  Let me work my way from the bottom up.  I agree with your statement of my position on 1) and 2) but I don’t see 3) quite that way.  To my way of thinking all the first replicator could do would be to replicate more informationally identical replicators.  It seems to me that until some mechanism for the production of CSI by random processes is identified, additional increments of CSI will have to be introduced by intelligent agency.  Only after enough of these steps do we get to an abundance of CSI “sloshing around.” By the way, the technical elegance of that term says something about the rigor of my scientific thinking.  Perhaps the most honest way to refer to my view is to see it as an ID “just so story.”  But until the random process for producing CSI is found I’m prepared to argue that it is the “just so story” most likely to stand up over time given what we think we understand about scientific processes today.  To use a rough paraphrase of Churchill’s famous statement about democracy, “It’s the least scientifically supportable ‘just so story’ except for all the others.”
    To my way of thinking all the other “just so stories” are predicated on the assumption that further scientific advances will find a way.  That is certainly arguable but the difficulty of mounting the probabilistic barrier posed by the challenge of producing functional CSI by random processes makes it a faint hope in my opinion.  So I continue to ask myself, “If TEs seem willing to grant a divine hand may have been involved in initiating the causal flow of nature why do they find subsequent involvement so out of the question?”
    Now to the top.  You are right that I “wanted to know how to distinguish common descent from separate creations of species by a common designer.”  I’m considerably out of my depth on this but let me take a crack at understanding what you are saying.  If it is the case that “the fusion which created human chromosome #2” is the single informational difference separating humans from higher apes I would agree that random processes could indeed have produced the speciation event.  However, I think you will agree there are many, many information differences between these species.  Unless adaptive single step paths such as this fusion step are found which can be visualized to produce viable intermediates I don’t think you’ve made the case.  So I would be prepared to argue that common design still makes as much sense as the “just so story” of choice as common descent.
    Does that make any sense to you?

  • Charles Austerberry

    Dear Ide:
    Thanks for your post.
    The first two paragraphs are clear and easy for me to understand.  You conclude those with “So I continue to ask myself, ‘If TEs seem willing to grant a divine hand may have been involved in initiating the causal flow of nature why do they find subsequent involvement so out of the question?'”
    I just want to note that many TEs find subsequent involvement of a divine hand very much “in”, not “out” of the question.  The only essential difference between TE and ID is that the TE perspective does not view the involvement of an unidentified, unconstrained, and potentially omnipotent designer to be detectable or demonstrable through science.  ID theory, as I understand it, claims that design can be scientifically detected or demonstrated without identifying, characterizing, or assuming anything about the designer (not even that the designer is subject to the natural laws of the universe).  For technical but very practical and (IMO) possibly inevitable reasons, the ID project has not yet succeeded, and maybe never can succeed. But again, if you read the TE literature, you will find all sorts of proposals about how divine action works, throughout time as well as “beyond” time, and certainly not limited to “initiating the causal flow of nature.”  The only essential difference between TEs and IDs is that TEs think science is equipped to deal only with that causal flow of nature.  Thus, in cases of design, only designs linked to designers (even if unidentified) which can be investigated via the natural sciences can be detected or demonstrated scientifically.  Again, the TE scientist doing her or his work may well have a philosophy, worldview, or faith that includes a divine Creator virtually identical to the divine Creator in which an ID scientist also trusts.  I qualify that a bit (“virtually identical”) because I think there are some relatively minor theological differences that tend to correlate with TE vs. ID, though I’m not sure any are essential.
    Regarding your last paragraph, I just want to clarify that I do not think the chromosome #2 fusion was “the single informational difference separating humans from higher apes,” and probably you don’t either.  All I ask is that you continue learning about chromosome synteny and pseudogenes while keeping this in mind: evidence for the common ancestry of humans and apes does NOT equal the evidence for why humans and apes differ functionally and are separate species.  Those are separate issues.
    Best wishes,
