Complex Specified Information Without an Intelligent Source

Meyer claims that specified complex information can only arise from an intelligent source, justifying that claim by citing a series of examples. One of those examples is computer code. In my previous post, I suggested that this was not an adequate example because of fundamental differences between computer code and DNA information. An obvious question is whether there is an example of specified complex information that is not derived from an intelligent source but solely from physical or chemical functionality. In this post I would like to offer just such an example.

The magnificent example of antibodies was presented by Dr. Craig Story in the December 2009 issue of Perspectives on Science and Christian Faith, Vol. 61, No. 4, p. 221. (If you aren't a member or don't have a subscription, copies are available from the ASA office for $10 plus shipping and handling; contact asa@asa3.org.) In his article, Craig explains how the immune system works, focusing on the importance of the inherent randomness in the process. In this post, I would like to offer a physicist's interpretation of his paper, with a focus on the information content. Craig has graciously reviewed these comments and corrected my errors in biology.

Stem cells in our bone marrow continuously produce a population of pre-B cells, so called because they are precursors to B cells, which manufacture antibodies when mature. These pre-B cells are all identical and have the same antibody gene DNA. This population therefore has a relatively low information content. All the complexity is within the cell and there is no diversity in the population of cells. As the pre-B cell population prepares to move into the body, the cells undergo a transition into B cells. In the process, key segments of DNA in each cell are rearranged randomly to form a unique and novel DNA sequence. The process is described in detail in Craig's paper. It is a constrained process, so the resulting antibody protein always has a particular folded configuration that may have affinity to an antigen, but the gene segments are randomly rearranged and joined to alter the magnitude of that affinity. The result is a population of B cells, each one of which is different in terms of its antibody DNA. This means that we have a transformation of a low-information population of pre-B cells to a high-information population of B cells, with reference to their antigen-binding abilities. The complexity has increased dramatically, but we do not yet have specificity.
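To make that transformation concrete, here is a minimal toy simulation in Python. The segment pool sizes are invented for illustration only; real V(D)J recombination draws on different numbers of segments and adds junctional diversity on top.

```python
import random
from collections import Counter
from math import log2

# Toy model of B-cell diversification (not real V(D)J biology): every
# pre-B cell carries the same gene segments; on maturing into a B cell,
# each cell randomly picks and joins one segment from each pool.
V_SEGMENTS = [f"V{i}" for i in range(1, 41)]   # invented pool sizes
D_SEGMENTS = [f"D{i}" for i in range(1, 26)]
J_SEGMENTS = [f"J{i}" for i in range(1, 7)]

def rearrange():
    """One cell's random V-D-J join, fixed for the life of that cell."""
    return (random.choice(V_SEGMENTS),
            random.choice(D_SEGMENTS),
            random.choice(J_SEGMENTS))

def entropy_bits(population):
    """Shannon entropy of the genotype distribution, in bits."""
    counts = Counter(population)
    n = len(population)
    return -sum(c / n * log2(c / n) for c in counts.values())

pre_b = [("V1", "D1", "J1")] * 10_000            # identical cells
b_cells = [rearrange() for _ in range(10_000)]   # randomized cells

print("distinct pre-B genotypes:", len(set(pre_b)))    # -> 1
print("distinct B genotypes:    ", len(set(b_cells)))  # -> thousands
print(f"entropy: {entropy_bits(pre_b):.1f} -> {entropy_bits(b_cells):.1f} bits")
```

The homogeneous population has zero entropy, while the randomized one approaches the log of the number of possible rearrangements; that jump is the information increase described above.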

As a B cell moves through the body, it may or may not encounter an antigen with which it has affinity. If it does not, the B cell dies and that particular configuration no longer exists in the body. However, if an antigen appears with which a B cell has some degree of affinity, the B cell will attach to the antigen. In this case, that B cell will reproduce through cell division to create clones of itself. This process occurs throughout the population of B cells with the result that only B cells with some affinity to the environment of antigens survive. This is a basic level of specificity.

There is another level of specificity that Craig describes. A first-responder B cell usually will have a relatively small degree of affinity to an antigen. As this cell reproduces itself, an enzyme enhances the mutation rate of only the portion of the antibody genes that determines the affinity. In some cases, mutation rates can reach as much as one nucleotide per cell division. This means that the subpopulation of this particular B cell grows with a dynamic diversity of various degrees of affinity to that antigen. The cells with the strongest degree of affinity will preferentially attach to the antigens, leaving those with weaker affinity without antigens and therefore a death sentence. Over time, this subpopulation will be predominantly one with strong affinity to this particular antigen. This, in a nutshell, is why vaccines work.
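The affinity-maturation loop described here can be sketched the same way. All numbers below (mutation step, population cap, generation count) are invented for illustration, not measured rates.

```python
import random

# Toy affinity maturation: clone, hypermutate the affinity-determining
# region, and let antigen scarcity select the strongest binders.
def mutate(affinity):
    """One cell division: a small random change to binding affinity."""
    return max(0.0, affinity + random.gauss(0.0, 0.05))

population = [0.10] * 100                # first responders: weak binders
for generation in range(40):
    # every survivor divides; both daughters may carry new mutations
    population = [mutate(a) for a in population for _ in range(2)]
    # antigen is limiting, so only the strongest binders capture it and live
    population = sorted(population, reverse=True)[:100]

mean = sum(population) / len(population)
print(f"mean affinity after {generation + 1} generations: {mean:.2f}")
# the subpopulation drifts toward strong binders, generation by generation
```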

In the bigger picture, this example shows how a homogeneous population of pre-B cells is transformed to a dynamically diverse population of B cells, with a tremendous increase in information content. This complex information then becomes highly specified by fine-tuning to match the antigens the B cells encounter. The result is a high degree of specificity and complexity with no involvement of an intelligent designer as an immediate cause. This does not, of course, preclude the sustaining involvement of an Intelligent Designer at a metaphysical level.

Craig points out the critical role of randomness as a key characteristic of the cellular processes involved in the immune system. The random process of gene rearrangement is necessary to ensure a sufficiently broad range of binding specificities, such that some of them are almost sure to bind to one part of each pathogen. His example also illustrates clearly how highly complex and highly specified information is derived directly from a population of relatively low-information cells. Hence, the argument that Meyer makes that all complex specified information comes from an intelligent source does not withstand scrutiny.

The antibody example is a beautiful illustration of the basic processes of evolution. It begins with the common ancestry of the stem cells that produce an ancestral population of pre-B cells that are essentially identical. Descent with random variability occurs in the generation of the B cells, which are all unique with respect to their antibody gene DNA. Natural selection describes the way in which B cells that do not bind to an antigen will die while those that do bind to an antigen proceed to reproduce clones. The random variability of the dynamically diverse population of antibodies ensures the formation, within a short period of time, of antibodies with affinity to virtually any antigen. The subsequent way in which those B cells acquire stronger affinity to that antigen is a type of adaptation. Darwin suggested that these basic processes, operating over a long period of time, could account for the origin of species. Little did he suspect that these very processes are active continuously in our bodies on a relatively short time scale to provide a vital line of immunological defense.

240 comments to Complex Specified Information Without an Intelligent Source

  • Charles Austerberry

    Here are a couple of articles on mineral catalysis of RNA oligomer synthesis:
    http://pubs.acs.org/doi/abs/10.1021/ja061782k
    http://pubs.acs.org/doi/abs/10.1021/ja9036516
     

  • Charles Austerberry

    Ide, I’m not sure why the articles in the journal RNA are not accessible – when I go to the journal’s site, it says they are free – but in case that’s not the case when you go to the site, below are links to two articles (one new, one you saw the abstract for) from the journal as they are served up on PubMed Central, which I’m sure is free to all.  Best wishes!
    Chuck
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2779684/
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2673073/

  • Charles Austerberry

    Ide, in case you cannot access the JACS articles I cited regarding clay mineral catalysis, here’s one that is free on PubMed Central:
    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2857174/
    Cheers!
    Chuck

  • Ide Trotter

    Thanks Chuck,
     
As far as I can tell these later articles all relate to what can be done by intelligent laboratory work on preexistent RNA.  The parent reference appears to be Bartel DP and Szostak JW, Science. 1993 Sep 10;261(5127):1402-3. To me the title is the giveaway, "Isolation of new ribozymes from a large pool of random sequences." And the first sentence of the abstract makes the problem clear: "An iterative in vitro selection procedure was used to isolate a new class of catalytic RNAs (ribozymes) from a large pool of random-sequence RNA molecules."  The starting point was a "large pool" of preexistent "random-sequence RNA molecules."  It is exactly these sequences that Shapiro has shown quite convincingly are most unlikely to occur in nature.  See Sci Am, p. 47, June 2007. It also appears that this referenced "large pool" must have been based on laboratory preparations from previously purified, non-racemic nucleotides.
     
As for the Szostak, Sci Am, p. 54, Sept. 2009, I found my old annotated copy.  I liked the article because the first four paragraphs concluded that "explaining how life began entails a serious paradox: it seems that it takes proteins - as well as the information now stored in DNA - to make proteins."  It then moved to speculate on schemes as to how "the paradox would disappear if...."  Beyond that I underlined an extensive sequence of "would," "suggest," "may," "could," "fortuitous," "seems likely," "could," "could," "appears"... before I even got to the end of the second column.
     
Don't get me wrong.  By simply recapitulating the state of OOL research by its foremost practitioners I'm certainly not suggesting that no progress will be made exploring every conceivable scenario.  But I argue that, given the present state of knowledge and the formidable problems acknowledged for almost every step along possible natural paths, it is rather difficult to make the case that intelligent agency need not be seriously considered.  And to argue that research will stop should that be acknowledged is nonsense, IMHO.
     
Now since the Szostak article you referenced acknowledges the paradox of "information now stored in DNA" I wonder if we can return again to my simple challenge.  Can we think of a research program to produce an initial 10 characters of code by a natural step-by-step process, or by Randy's unknown process that accomplishes this through an all-at-once "condensation"?  Until one of those programs makes significant progress it seems to me that intelligent agency remains the most plausible explanation. Thankfully, Venter has now demonstrated that intelligent agency is up to the task.
     
    Carry on,
     
    Ide

    • Charles Austerberry

      Dear Ide:
Thanks as always.  And again, I'd prefer that you prioritize the conversation with Jon (most recently, and Randy previously), because that one is making remarkable progress, in my opinion.  But to the extent that current OOL experiments might be relevant, I'll continue our conversation as a sideline.
      According to Briones et al. (2009) “up to now, no RNA polymerase activity has been found among the short RNA sequences that can be generated by nonenzymatic random polymerization. Indeed, a minimum size of about 165 nucleotides (nt) has been experimentally established for such a template-dependent RNA polymerase molecule (Johnston et al. 2001; Joyce 2004), a length three to four times that of the longest RNA oligomers obtained by random polymerization of activated ribonucleotides on clay mineral surfaces (Huang and Ferris 2003, 2006).”
      So, Briones et al. (2009) go on to explain that much current work does not assume that the first replicating system consisted of a single self-replicating RNA polymerase molecule.  Rather, the work now seems to focus on interacting short cross-replicating RNAs, which may have started with ligase activity and only later developed polymerase activity.
Joyce's latest that I found was:  Lincoln & Joyce (2009) Self-sustained replication of an RNA enzyme. Science 323:1229–1232.
      In that article, Lincoln & Joyce claim  “A long-standing research goal has been to devise a nonbiological system that undergoes replication in a self-sustained manner, brought about by enzymatic machinery that is part of the system being replicated. One way to realize this goal, inspired by the notion of primitive RNA-based life, would be for an RNA enzyme to catalyze the replication of RNA molecules, including the RNA enzyme itself.  This has now been achieved in a cross-catalytic system involving two RNA enzymes that catalyze each other’s synthesis from a total of four component substrates.”  Their abstract begins: “An RNA enzyme that catalyzes the RNA-templated joining of RNA was converted to a format whereby two enzymes catalyze each other’s synthesis from a total of four oligonucleotide substrates. These cross-replicating RNA enzymes undergo self-sustained exponential amplification in the absence of proteins or other biological materials. Amplification occurs with a doubling time of about 1 hour and can be continued indefinitely.”
      I’m wondering if we can integrate your “10 characters of code” model with these two papers from 2009 by Briones et al. (2009) and Lincoln & Joyce (2009).
      Cheers!
      Chuck

  • Ide Trotter

    Chuck,

This is extremely helpful. I would think there might be a way to "integrate your '10 characters of code' model with these two papers from 2009 by Briones et al. (2009) and Lincoln & Joyce (2009)." I had visualized some sort of a step-by-step polymerization process. That seems pretty much like what is reported in Briones et al. (2009): "up to now, no RNA polymerase activity has been found among the short RNA sequences that can be generated by nonenzymatic random polymerization."

Fortunately this is a paper that opened for me. On a quick scan I could not tell whether these papers start from racemic mixtures or not. Do you know? Would it be your understanding that if functionality, polymerase activity in this case, is found, it could be seen as one of the "islands" of functionality in Axe's vast ocean of sequence space? Would you anticipate that such "functionality" would be novel, or a synthesis of functionality already found in nature? I really appreciate your insight. This is opening up new areas for me.

    Ide

    • Charles Austerberry

      Dear Ide:
      Apparently, Lincoln and Joyce bought their nucleotides from Sigma! In other words, though clay minerals can select for chirally-correct enantiomers from racemic mixtures, Lincoln and Joyce used pure homochiral nucleotides.  I don’t know whether having the incorrect enantiomers present would have affected the ligase activity or not.  As I understand it, for example, when linear sugars close to form rings in nature, the alpha and beta conformers both form in equal amounts.  Amylose (a starch) and cellulose are both linear 1,4 chains of glucose, but the enzyme that makes amylose selects the alpha form and the enzyme that makes cellulose selects the beta form.  Whether anything like that selectivity is present in these ribozymes, I can’t say.
      And yes, I think Axe’s concept (shared by many others) regarding protein sequence space could be applied to RNA sequence space.  Of course, the numbers one must use to calculate how big the sequence space is, how many and how big the islands of functionality are, how much of the space has been explored during the history of life on earth, how likely or easy it is for evolution to jump from island to island, etc.  are all quite tentative.  Different assumptions lead to very different conclusions, it seems, judging from the following paper:
      http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2459213/
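      (To illustrate why the assumptions dominate, here is a back-of-envelope sketch in Python; the pool size and the candidate island fractions are assumptions chosen only to span a range, not measurements.)

```python
from math import log10

# Back-of-envelope sequence-space arithmetic (illustrative only).
# An RNA of length L over 4 bases has 4**L possible sequences.
L = 165                    # the polymerase length cited from Briones et al.
print(f"sequence space: 4^{L} ~ 10^{L * log10(4):.0f}")   # about 10^99

# Whether a random pool can find an island depends almost entirely on
# the assumed fraction f of functional sequences, which is unknown.
N = 1e15                   # assumed size of a laboratory random pool
for f in (1e-10, 1e-15, 1e-20):
    print(f"if f = {f:g}, a pool of {N:g} expects {N * f:g} functional hits")
```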
      Cheers!
      Chuck

  • Randy Isaac

    Thanks for continuing the conversation while I was away and attending to some other work. Chuck, thanks for the many excellent references. I would also highly recommend the set of lectures by Robert Hazen, “Origins of Life” available through The Teaching Company. Those lectures are a great insight into how science works–the good, the bad, and the ugly–as well as into how to think about research into the origins of life.

    Ide, you get the blue ribbon award for tenacity. You really want the origin of that 10 nucleotide sequence to be spelled out. Let me try another method to illustrate why I think it is an inappropriate and irrelevant question. You need not agree but I hope you can at least understand why I think so. Let me try an analogy, which isn’t perfect but I hope will be close enough to make the point.

Suppose you are interviewing bricklayers to build a brick wall for you, one that contains some elaborate features, including some arches. You want to be sure this bricklayer is capable of doing a good job. So you ask him (or her?) to demonstrate his ability by laying the ten bricks at the top of the arch, nothing else. He objects and says he can't do that until he's laid a whole lot of other bricks and the rest of the arch is in place. No, you insist: these 10 bricks are clearly an essential part of the wall, and the process doesn't matter. These 10 have to be in place, so just show me how you do it. The poor bricklayer will quickly leave with furtive glances back to make sure you aren't following him.

    The analogy is weak since we have a good idea of how to build the infrastructure to get those 10 bricks in place. But the point is that your 10 nucleotide sequence is not the starting point in the origin of life, nor is it something that occurs independently of many other processes going on. We know that much, even though we don’t know many of the details. Your insistence that the process doesn’t matter and we just have to figure out how that sequence occurred is as valid as asking the bricklayer to simply put those 10 bricks in place without anything else.

As for Venter's work, I'm still astonished that you were not jesting. Surely you understand that he and his team took an interesting step forward in genetic engineering but nothing particularly substantive in creating a synthetic cell from scratch. Oh yes, by encoding their names and other messages in the DNA, he certainly has injected an "intelligent agent only" sequence into that bacterium. That's actually a good lesson in the difference between functionality and specificity due to a matching pattern. The specificity of those encoded sequences can only be determined by comparing with known names or words in a language. Those are symbolic and not physical. There is no impact on functionality. The rest of the sequence, which relates to functionality, has to be preserved. Its functionality is tested by whether or not it survives in the reproduction process. No knowledge of a symbolic language is necessary. That means these aspects are amenable to natural selection.

It seems pretty clear that even if you were to argue that genetic engineering is enough of an indicator that an intelligent agent can create life, one would also have to admit that intelligence is not sufficient. There's quite a bit of ancillary biotechnology capability required as well, which has not existed in the past. An intelligent agent as an explanation for the origin of life looks rather doubtful at best. It looks pretty clear to me that reproduction with variation plus selection is a robust process that offers the best potential explanation for the origin of life. It will be interesting to see how the research plays out.

    Randy

  • Ide Trotter

    Welcome back Randy,
     
You are absolutely right that I don't think your analogy supports your view.  Indeed it supports mine. You are right that stacking bricks vertically would not be a meaningful test.  However, in a proper test the bricks would be added one at a time.  You may not appreciate my alternative for the analogy, but let me observe that the bricklayer would not toss bricks in the air along with some mortar and expect them to fall into place as a wall.  Some things have to be done in sequence.
     
Now, what really surprised me was your statement that, "I think it is an inappropriate and irrelevant question."  Forgive me, but that sounds like a step toward the very science-stopping that you and others ascribe to a commitment to ID.  There may be dumb and less promising questions, but I was not aware that there were questions that are a priori off limits to science.
     
Forgive me, but I thought I had made myself quite clear that I don't see Venter's work as creating a synthetic cell from scratch. Hijacked is not the same as created, in my view. As to your arguments about the non-functionality of the watermark added to Venter's DNA, I have not yet seen that established.  Given the growing indications of functionality for non-coding DNA and complex interactions with other features of the cell, there may turn out to be a deleterious function.  Until we have seen the competitive outcome of appropriate cultures containing the original and the hijacked cell we won't have that answer.
     
I find your last paragraph a head-scratcher.  Please elaborate on why you assert that intelligence working with the materials of nature can't possibly do what you want me to believe the materials of nature can do by themselves.  The only way that argument could hold would be for you to prove that human intelligence is so defective it is bound to screw up a process that could have done the job if intelligence had not intervened.  Are you possibly making a variant argument for the effect of original sin? :-)
     
    Carry on,
     
    Ide

  • Ide Trotter

    Jon,
     
    My sincere apologies.  Yours was a very thoughtful, lengthy and involved contribution.  Don’t know how I missed it.  There is more than I have time to address as I’m wrapping things up to get out of town.  So let’s start with my agreement that we are still struggling to get together on definitions. Let’s see what progress we can make anyway.
     
When you say, "I think, based on my understanding of your past statements about things such as fractal algorithms, that you will argue there is a lot of CSI present in this system before the first bit ever gets transmitted, and therefore the total amount of information in the system (and/or its complexity, and/or specificity) hasn't really increased," we are in agreement with regard to specificity. However, I would argue that both information and complexity increase in the fractal example, but the only specificity involved is that embedded in the original fractal algorithm, and it does not increase.
     
    I hope this takes care of your concern that, “I think this effectively nullifies the value of information theory.”
     
In one sense of "information theory," Shannon information, I would agree with you that "information theory in this context is that data streams can contain more or less information."  However, I'm not sure how you can argue that "complexity (increases), simply as a consequence of the length of the sequence."  As I see it, complexity may or may not increase; it does in the fractal example, but it does not if a series of alternating 010101 bits is extended.
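    (One way to make "complexity may or may not increase" concrete is to use compressed length as a crude proxy for complexity, with a chaotic logistic-map bit stream standing in for the fractal example; this is an illustrative sketch, not a rigorous measure.)

```python
import zlib

# Compressed length as a crude complexity proxy (a rough stand-in for
# Kolmogorov complexity). Extending "0101..." adds length but almost no
# complexity; extending an irregular stream adds both.
def compressed_len(bits):
    return len(zlib.compress(bits.encode()))

def logistic_bits(n, x=0.4, r=3.99):
    """Chaotic bit stream from the logistic map ('fractal' stand-in)."""
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append("1" if x > 0.5 else "0")
    return "".join(out)

for n in (1_000, 10_000):
    print(n, compressed_len("01" * (n // 2)), compressed_len(logistic_bits(n)))
# the repeating stream's compressed size barely grows with n;
# the chaotic stream's compressed size keeps growing
```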
     
     If we are together this far would you like to suggest any modification of your summary defining information, complexity and specificity?
     
    As to the biological examples, I’m involved in another discussion that seems to be bogging down as we wrestle with the same issues.  I’m thinking of suggesting a two point stipulation to let us push the biological issues a bit farther down the road.
     
    1)     Biological systems have been shown to increase both information and complexity.  Whether or not this increase leads to an increase in specificity needs additional discussion and a better mutual understanding of exactly what is entailed by specificity before we take that up.
    2)     Intelligent agency is able to both produce and increase specified information whatever we may eventually settle on for the definition(s).
     
    How would you react to that?
     
    Again, I’m very sorry I missed your comment earlier.
     
    Ide

    • Jon Tandy

      Ide,

      I’ll be interested to read any further reply once you have a chance to digest my comments from last week. But here are a few quick answers.

      You disagree with me on whether specificity can increase in my example of data transmission. This is where specificity is a tricky concept. As Randy has pointed out, the concept of specificity has been added to normal information theory by ID advocates. They are adding to information theory a concept that suits their purposes, but arguably it may not be a concept that can be definitively agreed upon, and therefore may have limited scientific usefulness. In any case, we do have to be clear about the two types of specificity (functional and abstract), and how they may be represented in biological versus non-biological examples.

      I also think I differ with your view on how the increasing information and complexity, but fixed specificity, helps preserve your idea of CSI. You seem to be making specificity the dominant concept, so that information and complexity could theoretically increase without limit, but the “CSI” in the system stays the same because the specificity is preserved. Is this what you’re saying?

      In terms of complexity increasing, I’ve already partly anticipated your argument about repeating patterns. I mentioned a sequence with all 1’s or all 0’s might not increase in complexity, but you’re right that a repeating mixed sequence would probably retain the same complexity as the amount of information increases. However, I phrased the argument on the assumption that it was a random or pseudo-random generator. Even if only 1% of the instances of a repeating sequence contain a slight variation, it is no longer a uniformly repeating sequence, and the complexity does increase. (Hence my argument that when the information consists of DNA sequences, there are naturally occurring causes for the sequence to mutate, and thus the information becomes more complex over time through natural reproduction processes.)
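      (The 1% figure can be checked with a compression proxy for complexity; the flip probability below is simply the illustrative number from the paragraph above.)

```python
import random
import zlib

# Flip roughly one symbol in a hundred of a repeating sequence: it is no
# longer uniformly repeating, and its measured complexity jumps.
base = "01" * 5_000
varied = "".join(
    ("1" if c == "0" else "0") if random.random() < 0.01 else c
    for c in base
)

print("pure repeat:      ", len(zlib.compress(base.encode())), "bytes")
print("with 1% variation:", len(zlib.compress(varied.encode())), "bytes")
```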

      We do agree that biological systems can increase in both information and complexity. We also agree that intelligent agency can produce specificity. To be more clear, humans generate specificity of the “abstract” kind, by definition. Humans can also generate sets of data that could arguably be called “functionally specified”. However, this is the difficult part. I think with just about any example you come up with that involves humans, someone will ultimately trace it back to something abstractly specified by human intelligence. But there is a fundamental difference when we are talking about bacteria or antibody cells, or other non-human-intelligent systems. This is precisely where ID claims to make an analogy from intelligent human systems, but it is one of the very points of contention in their argument.

      The question is not whether humans can generate information, complexity, or specificity. It’s whether non-intelligent systems can do so without the input of intelligence from an intelligent agent. ID appears to say no, but I don’t see that they have made the logical or evidential case.  That argument rests on fundamental principles, and has little or nothing to do with the specific case of whether we can (or ever will) solve the mystery of OOL through natural or supernatural processes.

      Jon Tandy


  • Ide Trotter

    Thanks Jon,
     
Forgive me, but I still think you and Randy are hung up on an issue of definitions that really isn't necessary to resolve. Perhaps it would help me to understand what you are concerned about to see your answer to this question.  When the Venter team produced their watermark and incorporated it in an available cell, what was the change, if any, in the way the information content should be characterized?
     
I hope we can agree that "abstract" specificity clearly increased.  To tell whether or not "functional" CSI also increased would, I think, require comparison of the competitive outcome of cultures starting with equal quantities of the Venter and original cells.  If the ratios remained the same, functionality would appear unchanged. Should the ratios change, the two would have been shown to be functionally different.  How would you then characterize the information change?  However you would characterize it, what I feel CSI folks argue is that another characterization is required for that quality of information which leads to different functionality. In what way would your characterization be differentiated from "specified information"?
     
    In conclusion let me address your feeling that, “The question is not whether humans can generate information, complexity, or specificity. It’s whether non-intelligent systems can do so without the input of intelligence from an intelligent agent.”  To get to the core of the issue I would rephrase the second to:  It’s whether random processes can do so without the input of intelligence from an intelligent agent.
     
    Why do I say that?  Because, as in the case you postulate “I phrased the argument on the assumption that it was a random or pseudo-random generator. Even if only 1% of the instances of a repeating sequence contain a slight variation, it is no longer a uniformly repeating sequence, and the complexity does increase.”
     
    Addressing the two following points should help.
     
1)      You both introduce random processes and seem to agree random processes might increase complexity.  Apparently you aren't arguing for an increase in specificity, however you wish to define it.
2)       Until a random source has been identified that can increase the initial post-bang information described by particles and forces in such a way that makes biological activity possible, there will be no biological activity, and your assertion that biological systems might be able to do this is moot.
     
    Let me repeat, this is not an issue of OOL. Producing one simple protein is not producing life or even biological activity. A single protein requires an ensemble of associated proteins to become truly active.  No clarification of definitions should be required to address this problem.
     
    Given the current state of knowledge I hope you will eventually agree that inference to the best explanation points to intelligent agency as the most probable source of whatever it is that I like to call increased CSI.
     
    How are we doing?
     
    Ide

    • Charles Austerberry

      Dear Ide:

      You wrote: “When the Venter team produced their watermark and incorporated it in an available cell what was the change, if any, in the way the information content should be characterized? I hope we can agree that “abstract” specificity clearly increased.”
      Certainly I would agree.  Abstract specificity was introduced where previously there had been none at all.
      You wrote: “To tell whether or not “functional” CSI also increased would, I think, require comparison of the competitive outcome of cultures starting with equal quantities of the Venter and original cells.  If the ratios remained the same functionality would appear unchanged. Should the ratios change one would have been shown functionally different.”
      Again I would agree. Note also that any change in functionality would be an unintended by-product of the watermark.  Any negative or positive effect of the watermark on the functionality of the cell would be random with respect to the abstract information (names of the scientists) specified by the watermark.
      “Until a random source has been identified that can increase the initial post bang information described by particles and forces in such a way that makes biological activity possible there will be no biological activity and your assertion that biological systems might be able to do this is moot.”
      I understand, I think, what you are saying here, Ide:  whether CSI could naturally increase once biological activity got started is (to you) irrelevant to the question of how biological activity got started in the first place, and the latter problem (to you) most clearly leads to the inference of design.  Fair enough.  I think Randy’s original point was that Meyer claimed that CSI (functional as well as abstract) cannot increase without intelligent design, at any time under any circumstances.  Definitions become important in trying to assess whether or not Meyer’s assertion about functional information is correct.
      Meyer may be right about abstract information, but we disagree as to whether or not Meyer is right about functional information, especially once biological activity is present.  Clear definitions are needed if we are to clarify the basis of our disagreement on this problem.  Even if we limit the discussion to CSI increases prior to the first biological activity emerging, I’m not sure the need for definitions goes away.  But Jon can make that case better than I.
      Cheers!
      Chuck

    • Jon Tandy

      Ide,

Chuck answered the question about Venter's work better than I could, since I'm not familiar enough with it. However, I would agree with your statements about both abstract and functional specificity in the modifications they made to the cell. Their modification, if I understand it right, was putting human-recognizable patterns into the cell which presumably don't affect the functioning of the cell. Thus they have created information, probably increased complexity, and produced abstract specificity. If they had modified the cells, for instance, to correct a gene responsible for producing a disease, it would be new information with functional significance, assuming it was able to survive and perpetuate itself.

      I’ll address your last two points, then come back to some further comments.

      1. By saying “you both”, I assume you mean Randy and myself. Without trying to speak for Randy, I believe we are most definitely arguing not only that random processes can increase information and complexity, but also potentially increase the specificity of that information. I’ll try to elaborate on that assertion further.

2. Again, you are returning to the origin of the universe, as if you had defined what you meant by an "increase in post bang information." I have tried to address this in past posts, but you have dodged the implications of this problem of definitions. So I'll approach it a different way. Based on your statements about the initial forces in the universe for the first 9 billion years, you have (it appears to me) answered part of your own question and admitted that random interactions of particles and forces CAN increase information in such a way that makes biological activity possible.

You acknowledge, if I'm not mistaken, essentially the standard cosmological model: that the universe progressed from a point of high energy density and disorganized particles, through a series of stages including the formation of many of the heavier elements. These elements, in particular carbon, were necessary for biological life to form. They were not present in the initial environment of the early universe, and they formed through essentially random interactions of particles and forces. At least I haven't seen you significantly contradict this standard hypothesis. Your criticism regards the step that comes after all this development of heavier elements, in particular how these heavier elements formed the first living biological entities. Everyone acknowledges that we haven't solved the question of how that happened.

      Even though we haven’t solved this one problem, it is *NOT* a moot point to investigate whether biological systems (once developed) can further increase CSI. There is no correlation between the two questions. If random biological and non-biological forces can increase CSI, it still doesn’t prove that life itself developed spontaneously (naturally). I’ll certainly grant that possibility, as I have in past posts. But if random processes can increase CSI, then ID’s claim is contradicted, and it leaves open the possibility that biological life could have developed in a similar manner. It’s simply an open question at this point.

       

Now, let me answer how a random process can generate information, complexity, and specificity. The classic example is a million monkeys typing on typewriters, beating the keys at random. Given enough time, it is possible that they could produce the works of Shakespeare. Specificity is given in this example by an abstract assignment of meaning, by someone looking at the typewritten sheets and saying, "Aha, they finally produced the works of Shakespeare. It only took 20 million years to complete, but here is a set of manuscript pages with all the right letters in the right places." Both information and complexity increase naturally as a result of the random processes. Abstract specificity is potentially present, but is only assigned when an intelligent agent looks at the information and recognizes it. We would need a different analogy to illustrate functional specificity. In the case of the E. coli example that I gave earlier, the functional specificity happens naturally when the bacteria find a way to survive and reproduce more efficiently through random mutations.
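      (A toy version of the monkey scenario, assuming a 27-key typewriter; the target word is arbitrary and kept short because the expected waiting time grows like 27^length, which is exactly why the full works of Shakespeare need the 20 million years.)

```python
import random
import string

# Toy monkey: random keystrokes, with "specificity" assigned only
# afterward by an observer scanning the output for a chosen target.
TARGET = "ape"                  # the observer's pattern, not the monkey's
KEYS = string.ascii_lowercase + " "

typed, keystrokes = "", 0
while TARGET not in typed[-2 * len(TARGET):]:
    typed += random.choice(KEYS)
    keystrokes += 1

print(f"'{TARGET}' appeared after {keystrokes:,} random keystrokes")
# the generator knows nothing of the target; the match is recognized,
# not produced, by the observer: the "abstract" kind of specificity
```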

I don't understand your ambivalence toward clearer definitions of CSI. Let's say I wanted to analyze the information coming out of my password-breaking algorithm, which I illustrated in a previous post. If I can't even define what information is, or what I mean by complex information, much less what it means for that information to be specified, then how can I talk about it with any technical precision? More importantly, by clarifying definitions, I'm trying to address a problem of shifting frames of reference that seems to keep occurring in these conversations. Let me try to illustrate the problem using the information technology example.

      My random sequence generator produces a growing set of complex information. The information is the bits of data that come out on the wire. But let’s say, for instance, that you come along and argue that no, there is no complexity being generated, because all the complexity (and specificity) is already built into the software algorithm written in the program, which was designed by an intelligent computer programmer. It was already sufficiently complex to produce all of what it generates, so no increase occurs. What you would have done is shift from one set of information and complexity (varying sequences of data bits on a wire), to a different sort of information (program statements written in a certain symbolic language) with a different sort of complexity (i.e. simple program versus a complicated program).

      But this is a false dichotomy. Not only can the data be analyzed for complexity and specificity without reference to the original program, there is no correlation between the information complexity in the sequence and in the program that produced it. I can write an extremely simple computer program, maybe just a few lines of code, that can generate a very complex set of information (which most definitely is *not* present in any meaningful way in the original algorithm). So the simplicity or complexity of the program (or fractal algorithm, or initial DNA in pre-B antibody cells) is a different sort of information and complexity from what is produced by it.
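      (A sketch of that asymmetry: a generator of a few lines whose output stream, by the same compression proxy, carries far more measured complexity than the program text itself. The LCG constants are the classic C-library ones, used here purely for illustration, and `emit` in the program text is hypothetical pseudocode.)

```python
import zlib

# A tiny program versus its large, hard-to-compress output.
SOURCE = ("x = 1\n"
          "while True:\n"
          "    x = (1103515245 * x + 12345) % 2**31\n"
          "    emit(x >> 23)\n")        # the whole 'program', ~80 characters

def lcg_stream(n, x=1):
    """Classic linear congruential generator; keep the high byte, since
    the low bits of a power-of-two LCG are far too regular."""
    out = bytearray()
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31
        out.append(x >> 23)             # top 8 bits of the 31-bit state
    return bytes(out)

data = lcg_stream(100_000)
print("program text:     ", len(SOURCE), "chars")
print("output:           ", len(data), "bytes")
print("compressed output:", len(zlib.compress(data)), "bytes")
# the stream's measured complexity dwarfs the program's length, so the
# complexity of the output is not simply "contained in" the program text
```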

      Jon Tandy


  • Ide Trotter

    Thanks, Chuck, it seems like we are pretty much together.  Now let me return to Jon.
     
Jon, I thought I had made it abundantly clear that I am on board with the currently generally accepted cosmology and even Lyman Page's latest assessment of inflation, the CMBR, and the sixth-harmonic fitting of the CMBR temperature distributions initiated by quantum fluctuations in the pre-last-scattering particle soup.
     
As to your assertion that you are "most definitely arguing not only that random processes can increase information and complexity, but also potentially increase the specificity of that information," IMHO you need to do more work on that to convince me, and I expect I may have some company in the scepticism I express.  I realize you and Randy think that in a vast soup of CSI more CSI can be produced. Let's just stipulate our disagreement on that. However, I hope you can see my point that, if not moot, it is at least an issue that doesn't come into play until sufficient code to produce first life somehow comes into being.
     
I still assert that there remains a great mystery as to how random processes might start to produce code, and that the start I'm looking for is still a long way from OOL. Pre-Venter, it was just an observation from experience that intelligent agency is the only known source of code.  Randy even made the assertion that it had not been shown that intelligent agency could produce biologically functional code, but Venter has now settled that argument.
     
As to the balance of your arguments, I must first express some surprise at the resurrection of the typing-monkey scenario.  I had thought that died and was buried long ago.  As to your random number generator, I see it as merely a variant of the fractal, in that information and complexity are generated but specificity is not increased beyond that in the generator algorithm.
     
Perhaps I would find your assertion that "Not only can the data be analyzed for complexity and specificity without reference to the original program, there is no correlation between the information complexity in the sequence and in the program that produced it" more persuasive if you would give me an example of how the initial and increased specificity, however you define it, is determined.
     
    We’re making progress.
     
    Ide

 
