Complex Specified Information Without an Intelligent Source

Meyer claims that specified complex information can only arise from an intelligent source, justifying that claim by citing a series of examples. One of those examples is computer code. In my previous post, I suggested that this was not an adequate example because of fundamental differences between computer code and DNA information. An obvious question is whether there is an example of specified complex information that is not derived from an intelligent source but solely from physical or chemical functionality. In this post I would like to offer just such an example.

The magnificent example of antibodies was presented by Dr. Craig Story in the December 2009 issue of Perspectives on Science and Christian Faith, Vol. 61, No. 4, p. 221. (If you aren’t a member or don’t have a subscription, copies are available from the ASA office for $10 plus shipping and handling; contact asa@asa3.org.) In his article, Craig explains how the immune system works, focusing on the importance of the inherent randomness in the process. In this post, I would like to offer a physicist’s interpretation of his paper, with a focus on the information content. Craig has graciously reviewed these comments and corrected my errors in biology.

Stem cells in our bone marrow continuously produce a population of pre-B cells, so called because they are precursors to B cells, which manufacture antibodies when mature. These pre-B cells are all identical and have the same antibody gene DNA. This population therefore has a relatively low information content. All the complexity is within the cell and there is no diversity in the population of cells. As the pre-B cell population prepares to move into the body, the cells undergo a transition into B cells. In the process, key segments of DNA in each cell are rearranged randomly to form a unique and novel DNA sequence. The process is described in detail in Craig’s paper. It is a constrained process, so that the resulting antibody protein always has a particular folded configuration that may have affinity to an antigen, but the gene segments are randomly rearranged and joined to alter the magnitude of the affinity. The result is a population of B cells, each one of which is different in terms of its antibody DNA. This means that we have a transformation of a low-information population of pre-B cells to a high-information population of B cells, with reference to their antigen-binding abilities. The complexity has increased dramatically but we do not yet have specificity.
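To get a feel for the scale of this information increase, the combinatorial effect of randomly joining gene segments can be estimated in a few lines. The segment counts below are my own round, illustrative numbers, not figures from Craig’s paper, and the true diversity is far larger once random junctional variation at the joints is included:

```python
import math

# Approximate, illustrative counts of gene segments available for joining
# (round numbers chosen for the sake of the estimate; real germline counts
# and junctional variation differ and push the diversity much higher).
heavy_chain_segments = {"V": 40, "D": 25, "J": 6}
light_chain_segments = {"V": 40, "J": 5}

# Each maturing B cell picks one segment of each kind at random and joins them.
heavy_combos = math.prod(heavy_chain_segments.values())  # 40 * 25 * 6
light_combos = math.prod(light_chain_segments.values())  # 40 * 5
total_combos = heavy_combos * light_combos

print(f"distinct antibody combinations: {total_combos:,}")
print(f"information per random choice: {math.log2(total_combos):.1f} bits")
```

Even with these modest assumed counts, segment joining alone yields over a million distinct antibodies, roughly 20 bits of information per cell, generated by a random physical process rather than by any designing agent.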

As a B cell moves through the body, it may or may not encounter an antigen with which it has affinity. If it does not, the B cell dies and that particular configuration no longer exists in the body. However, if an antigen appears with which a B cell has some degree of affinity, the B cell will attach to the antigen. In this case, that B cell will reproduce through cell division to create clones of itself. This process occurs throughout the population of B cells with the result that only B cells with some affinity to the environment of antigens survive. This is a basic level of specificity.

There is another level of specificity that Craig describes. A first-responder B cell usually will have a relatively small degree of affinity to an antigen. As this cell reproduces itself, an enzyme enhances the mutation rate of only the portion of the antibody genes that determines the affinity. In some cases, mutation rates can reach as much as one nucleotide per cell division. This means that the subpopulation of this particular B cell grows with a dynamic diversity of various degrees of affinity to that antigen. The cells with the strongest degree of affinity will preferentially attach to the antigens, leaving those with weaker affinity without antigens and therefore a death sentence. Over time, this subpopulation will be predominantly one with strong affinity to this particular antigen. This, in a nutshell, is why vaccines work.
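The logic of this affinity-maturation loop, random variation followed by preferential survival of the strongest binders, can be sketched as a toy simulation. This is my own illustrative model, not taken from Craig’s paper; the population size, mutation step, and affinity scale are arbitrary:

```python
import random

random.seed(0)  # reproducible toy run

def affinity_maturation(generations=30, pop_size=200, step=0.03):
    """Toy model of somatic hypermutation plus selection.

    Each 'cell' is just an affinity score in [0, 1].  Every generation,
    each cell's affinity mutates by a small random amount; the half of
    the population with the strongest affinity survives and divides,
    while the weaker half dies off.
    """
    population = [0.1] * pop_size  # first responders: weak, uniform affinity
    for _ in range(generations):
        # somatic hypermutation: small random tweak to each cell's affinity
        mutated = [min(1.0, max(0.0, a + random.uniform(-step, step)))
                   for a in population]
        # selection: strongest binders keep their antigens and survive
        survivors = sorted(mutated, reverse=True)[:pop_size // 2]
        # clonal expansion restores the population size
        population = survivors * 2
    return sum(population) / len(population)

print(f"mean affinity after maturation: {affinity_maturation():.2f}")
```

Running this, the mean affinity climbs well above its starting value of 0.1 even though every individual change is random: the selection step, not any designing agent, supplies the direction.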

In the bigger picture, this example shows how a homogeneous population of pre-B cells is transformed to a dynamically diverse population of B cells, with a tremendous increase in information content. This complex information then becomes highly specified by fine-tuning to match the antigens to which they are presented. The result is a high degree of specificity and complexity with no involvement of an intelligent designer as an immediate cause. This does not, of course, preclude the sustaining involvement of an Intelligent Designer at a metaphysical level.

Craig points out the critical role of randomness as a key characteristic of the cellular processes involved in the immune system. The random process of gene rearrangement is necessary to ensure a sufficiently broad range of binding specificities, such that some of them are almost sure to bind to one part of each pathogen. His example also illustrates clearly how highly complex and highly specified information is derived directly from a population of relatively low-information cells. Hence, the argument that Meyer makes that all complex specified information comes from an intelligent source does not withstand scrutiny.

The antibody example is a beautiful illustration of the basic processes of evolution. It begins with the common ancestry of the stem cells that produce an ancestral population of pre-B cells that are essentially identical. Descent with random variability occurs in the generation of the B cells, which are all unique with respect to their antibody gene DNA. Natural selection describes the way in which B cells that do not bind to an antigen will die while those that do bind to an antigen proceed to reproduce clones. The random variability of the dynamically diverse population of antibodies ensures the formation, within a short period of time, of antibodies with affinity to virtually any antigen. The subsequent way in which those B cells acquire stronger affinity to that antigen is a type of adaptation. Darwin suggested that these basic processes, operating over a long period of time, could account for the origin of species. Little did he suspect that these very processes are active continuously in our bodies on a relatively short time scale to provide a vital line of immunological defense.

240 comments to Complex Specified Information Without an Intelligent Source

  • Ide Trotter

    Jon,
     
     
    Let’s keep trying.
     
     
    First, I’m curious why you are asking, “what are you talking about in your calculation of total information in the universe at any point in time?”  To the best of my recollection I have never calculated nor raised questions about the “total information in the universe at any point in time.”   What I have said, and what all cosmological modeling is based on, remains as I stated much, much earlier: “According to the current understanding of the origin of the universe, given six numbers, see Rees, and four forces, quantum behavior described by quantum mechanics and gravitational interactions that seem to conform to general relativity, all possible material outcomes can be developed.”
     
     
    I realize Randy thinks, “CSI doesn’t make sense in the universe or in cosmology.”  Perhaps you agree with him.  OK, let’s not call it CSI.  The fact remains that whatever it is satisfactorily models the physical universe for nine billion years.  And whatever you may choose to call the material particles of the universe that are the outworking of the cosmological process, I have simply asked, “What random process obeying the laws of physics and chemistry do you suggest was capable of producing the first 10 coded letters of any biologically active molecule you choose to select?”
     
     
    Randy seemed to agree that whatever you call the original set and the resultant particles, they can’t explain the production of code.  He seems to think they can’t since he wrote, “I certainly think there is a ‘previously undiscovered property of nature’ that emerges from higher complexity without any new force or law or indeterminate intelligent agent.”  All I’ve asked Randy is, “What is the previously undiscovered property of nature?”
     
     
    You state it “seems to me that the original undifferentiated particles and energy states have become vastly more specified into functional units and operational systems, and thus this represents an increase in both specificity and complexity.”  Fine. However you’d like to describe it, how do you suggest they were capable of producing the first 10 coded letters of any biologically active molecule you choose to select?
     
     
    Feel free to pick any definitions you like.  Forgive me, but arguing about definitions seems like a diversion from the core issue.  Pick any definitions you like and define them however you like.  Just tell me how you think the original undifferentiated particles and energy states did the job.  I trust you agree that if one were to show how to produce “the first 10 coded letters of any biologically active molecule,” CSI would have been produced and the objective set out for this thread satisfied: CSI without an intelligent source.
     
     
    We know an intelligent agent can do it.  I believe you have both acknowledged that.  The alternatives you have offered so far are “some undiscovered property of nature” or something that “may ultimately be derivable.”  Is it not possible to agree, based on the lengthy discussion so far and the alternatives you have offered, that, given the knowledge of the day, agency has an arguably higher probability of being the ultimate explanation?
     
     
    I look forward to your response.
     
     
    Ide

  • Randy Isaac

    Ide, you wrote, “We know an intelligent agent can do it.  I believe you have both acknowledged that.  The alternatives you have offered so far are ‘some undiscovered property of nature’ or something that ‘may ultimately be derivable.’  Is it not possible to agree, based on the lengthy discussion so far and the alternatives you have offered, that, given the knowledge of the day, agency has an arguably higher probability of being the ultimate explanation?”

    No, neither Jon nor I acknowledged that. Quite the opposite. I have not seen any evidence of any kind that any intelligent agent was able to generate the requisite biomolecules. In contrast, I have said repeatedly that reproduction with variation and selection has been shown to generate increasingly complex living systems. It is quite obvious to me that reproduction with variation and selection must be inferred as the best explanation. An unknown agent with no demonstrated existence or ability to generate living systems isn’t even a candidate, let alone one with the higher probability.

    As for your challenge of generating the first ten amino acid sequence, no, I continue to think that’s a false path and quite irrelevant. Read more than one Shapiro article about the origin of life research and you’ll see a much richer pattern of possibilities. Such a sequence doesn’t exist by itself and never has.

    The key point of Meyer’s book was to declare that the so-called DNA enigma could only be solved by an intelligent agent. However, it seems to me that only reproduction with variation and selection has ever been shown to be a possible explanation for the emergence of a biological system with DNA. There’s no reason to think that such capability could not have arisen out of the complexity of this earth’s environment given the forces of nature that we now know. We haven’t figured out the details and that’s what OOL research is all about.

     

    Randy

  • Ide Trotter

    Randy, I’ve not disputed your many statements that, “I have not seen any evidence of any kind that any intelligent agent was able to generate the requisite biomolecules.”  While I question your argument that CSI is easily produced when a sea of biologically active CSI exists I can’t refute it either.  But I don’t think you can dispute that at some point in the latter half of the existence of the universe there was no biological activity.  Now there is biological activity. That chasm has to be crossed somehow.  It requires either one Darwinian step at a time or, as you prefer, some “previously undiscovered property of nature.”  
     
    Regarding, “As for your challenge of generating the first ten amino acid sequence, no, I continue to think that’s a false path and quite irrelevant. … Such a sequence doesn’t exist by itself and never has.”  How can you support that argument?  First, why isn’t that the equivalent of what Craig Venter’s team just did to mark their hijacked reproducing cells?  He created a specified sequence of nucleotides, and you can’t deny he is an intelligent agent.  Second, I contend the one-step-at-a-time path should only be argued irrelevant by someone who has demonstrated a “previously undiscovered property of nature” leading to a “miraculous” multiple-nucleotide coded condensation, not by someone who just wishes for one.  Until then all options remain on the table.
     
    Now, think about the circularity of your assertion “it seems to me that only reproduction with variation and selection has ever been shown to be a possible explanation for the emergence of a biological system with DNA.”  I can’t see how “reproduction” can be an argument for “emergence.”
     
    I still argue, “We know an intelligent agent can do it.”  And we know the name of at least one.  Thank you, Craig Venter.
     
    Carry on.
     
    Ide
     

    • Jon Tandy

      Ide,

      I think we are getting to the crux of the issue. The only reason we are talking about CSI in the “post-bang universe” is because you brought it up, and insisted that this was critical for understanding later (proposed) increases in CSI. So to say that you have “simply asked” about the ability (or inability) of laws of chemistry and physics to produce biomolecules, that’s not completely accurate.

      I have understood from the beginning that that was the target of your argument, but in the process you have made at least three claims. The first is, that the amount of CSI in the early universe was relatively small (including several laws of physics, etc.), and that amount of CSI was sufficient to produce the various structures in the known universe prior to biological life. This leads to the second claim, that there had to be an “injection” of CSI (presumably from outside the physical universe governed by those laws of physics) in order to have the first example of life develop. I think there was at least a third claim, that since there is a large pool of CSI now available to the antibody system, the apparent increase in complexity/specificity isn’t truly an increase, because it just rearranges existing CSI that had previously been injected and is already present in biotic systems. Is this a fair summary of your original positions?

      This is why I challenged you early on that you are asking us to solve the mystery of “origin of life” research before we can talk about the role and increase of information in the antibody example or any other system. I’m not suggesting that I know or have the answer to how laws of physics could develop into laws of chemistry, or how those physical principles might develop into new properties that we would classify as biological, or how those elements might “spontaneously” develop into what we now know as biotic life. That is the tenacious mystery of OOL. When Randy mentioned a “previously undiscovered property of nature”, he may have been referring to what OOL research is trying to discover. Whether they will ever be successful is a matter for debate and further investigation.

      Now, in one of your last two posts, you have suggested, “OK, let’s not call it CSI. The fact remains that whatever it is satisfactorily models the physical universe for nine billion years.” The phrase “whatever it is” simply illustrates my point, that your position in the first claim is unsubstantiated because you haven’t defined what “it” is. Specifically, what is the information, what is the complexity, and what is the specificity being considered? If there is an increase in any of them, then there would arguably be an increase in CSI.

      I’m quite happy to leave off discussion of the early universe, if you’re willing to drop the claim that CSI didn’t increase in the early universe. However, that is central to your second claim that the development of life required an “injection” of (or by) intelligence. Notwithstanding, in the process of responding to your claims as I’ve understood them, I’ve come around to making a counter-claim that CSI *did* increase in the early universe, based solely on those forces of nature that you brought up in the first place. Perhaps you would like to argue against the rational argument that I have tried to make in this case, by clarifying either my definitions or conclusions. Or you could ignore it, and we can move on to a different topic.

      However, since we’re involved in a discussion of information, complexity, and specificity, we can’t escape the requirement to clearly define what system is being considered, and how information is defined in the context of the system, and what it means for that information to be complex and specified. Only then can we have clear discussions about how that total information might increase over time, and/or become more complex, and/or become more specified (abstract or functional specification). Don’t you agree? I would like to have a discussion about CSI in the context of Shannon information, because I think that might be instructive to apply to other systems. But we’ll see how that goes.

      I am not claiming that an intelligent agent can’t make something like a biological molecule or system. Randy has made that statement, which I countered with a similar argument to your own. (What if Craig Venter’s work, or something like it in the future, is able to create something like DNA or a living system?) The question, however, is this: if there wasn’t any intelligence already present in the universe after “nine billion years” as you say, then where was the intelligence to create life in the first place? Even if intelligent agents today might learn how to copy or create living things, what intelligence was present to do it 4 billion years ago?

      This is where I must again distinguish my personal belief from what I see as rational scientific evidence and conclusion. My belief is that there *did* exist intelligence throughout the history of the universe, namely God. He did have the power to create the first living thing or anything else that He might have chosen to create de novo or supernaturally, or through secondary (“natural”) causes. I don’t rule out the possibility that God acted supernaturally rather than through secondary causes in this instance, or in any number of other instances that we might talk about. There are some who do rule it out, but I don’t share their confidence.

      The point of discussion in this thread is, does information theory rule out the ability of natural forces to increase CSI in a system? If so, then ID *might* be (or become) a strong scientific inference for the existence and action of God (even though ID claims not to be discussing God). If not, then it is possible that OOL research might eventually be successful in discovering those secondary causes through which God acted to create life. However, even that is an open question — if CSI can increase naturally in some systems, it still doesn’t prove that life originated spontaneously as a result of natural forces. Either way it might turn out, I don’t believe that ID has yet made its case that CSI can only increase due to the action of intelligence. And either way, God is still the creator.

      Jon Tandy


  • Ide Trotter

    Jon, thank you very much.  I find your points extremely helpful in moving this discussion along. Before going back to the detailed discussion I especially want to thank you for your final three paragraphs with which I am in almost total agreement.  My only quibble would be that I don’t see “does information theory rule out the ability of natural forces to increase CSI in a system” as the point.  I understand Dembski argues that is the case based on his development of a “Law of Conservation of CSI.” It seems highly plausible to me but lots of smart people don’t agree.  I believe this thread started out to look at it from the other side so to speak.  That is, setting aside whether or not CSI is conserved once it is in place, can CSI be increased without intelligent input, i.e. without an intelligent source?
     
    I appreciate your interest in better definitions.  While I acknowledge that agreed definitions would be helpful, definitions seem to be a matter of considerable confusion and disagreement. So I tried to find a point in the causal flow of nature between inanimate material and biological activity where we could address a very simple question.  What happens and why?
     
    Now let me return to your understanding of my “claims.”  We are together on the first.  We are close on the second.  But my point, rather than a claim, was that either an “injection” of CSI (presumably from outside the physical universe governed by those laws of physics) was required in order for the first example of life to develop, or we have to find a natural path.  I believe this thread is addressing just that question.  On the third, I’m not exactly arguing that it just rearranges existing CSI, but I can see why it might seem equivalent to that.  My perception is that the existing CSI exerts a selective function, preserving effective random changes.  Randy would argue that increases CSI.  Maybe.  Although I don’t think so, I certainly can’t prove it doesn’t.
     
    I’m not sure that “discussion about CSI in the context of Shannon information” would add much.  As I see it CSI may or may not be contained in a string of Shannon information.  Even if so contained, the string of bits still remains Shannon information and follows the appropriate relationships.
     
    Now, regarding your “counter-claim that CSI *did* increase in the early universe, based solely on those forces of nature,” I don’t think we need to resolve that point to address my question.  Whether CSI increased or not, the universe reaches a point where biological function appears.  Without arguing all the way to the origin of life, I think it is a reasonable question to ask whether the “natural CSI” at that point can be shown to produce a start of the biological code required.  Randy seems to argue for some unknown process to accomplish that by an all-at-once “condensation,” for lack of a better term, rather than the step-by-step process I am asking about.
     
    Since all of us in ASA are committed to the proposition that the Creator God of the universe is ultimately behind it all we can dispense with the ID agnosticism as to what source of intelligence might be in our minds.  It seems to me that the question we are wrestling with is whether or not a case can be made based on the evidence of nature that God had to act along the pathway of time or not.  If anyone can discover Randy’s “undiscovered property of nature” the answer will be no.  Until that prospect seems closer to hand I think the principle of inference to the best explanation should be operative. The only thing that we really know can produce code is intelligence.
     
    How are we doing?
     
    Ide

    • Jon Tandy

      Ide,

      I’m confused by your first point. My comment was on whether CSI can increase without the action of an intelligent agent, or whether ID shows that that can be ruled out. You restated this at the end of the first paragraph, so I’m confused about where the difference is. The argument of “conservation of information” cuts both ways: it could be argued that conservation means CSI can’t increase without new input, but it could also mean that once in place CSI can’t decrease. Either of those points could be fruitful ground for challenging Dembski’s argument, but I don’t think we have really been discussing it much here.

      You say you’re not sure we need to resolve the question of whether CSI increased in the universe during the first 9 billion years. I was addressing what I felt was your original claim, that it didn’t increase. But I think the point is important, if not central. The question still is, can CSI increase without input from intelligence? If it can in the early universe, and if it can (arguably) in the antibody system and others, then it is *plausible* that it could have increased during the development of biotic life without the direct action of intelligence. By direct action I mean the primary, supernatural causation of a Creator. You’re right, the exact pathway and mechanism are not understood; otherwise there wouldn’t be a need for OOL research.

      Yes, it’s an important question how life came to be in the first place. But it’s not the only question. There are lots of systems where complex, specified information might possibly be generated. Randy suggested one from the antibody system, but I’m sure eventually we might get to others. So I would argue that we don’t need to resolve OOL in order to talk about those other systems, with the qualifier that we carefully define what we mean by information, complexity, and specificity.

      On your third point, I would have to think about it more. I’m not sure I understand what is meant by rearranging versus exerting a “selective function preserving effective random changes”. I think we would have to return to Meyer’s book, the subject of this entire blog discussion, to see how that relates to the arguments that he is making regarding biological systems and related CSI content.

      Jon


      • Jon Tandy

        Ide,

        I think the issue of origin of life, mineral substrates, etc. may be interesting for future investigation.  But I think we have all agreed that OOL is an open mystery that hasn’t been solved yet, and it’s likely it will be some time before it ever will be (or perhaps it can’t be).  Therefore, I would like to take the discussion in a little different direction, but still with the goal of clarifying how we understand information, complexity, and specificity.  I’ll try to briefly explain to you why I suggested the framework of Shannon information may be helpful for clarification, and then try to relate the same principles to a couple of biological examples.

        So-called “Shannon information” is important because it was Shannon’s research that formed the basis for modern information theory.  It is information theory that ID is trying to co-opt to make its case regarding intelligent design, therefore this seems to be a good starting point for understanding.

        I want to consider a data transmission stream of 1’s and 0’s, produced by a random number generator and being transmitted in a digital communication medium.  The information here is the bits of data (1 or 0).  The total information (i.e. number of states in the system) increases continuously as long as the random generator keeps running.  The complexity of the sequence also increases continuously, assuming that the bits are a varying pattern (not all 1’s or all 0’s).  This is true whether you look at the sequence as a whole, or as an increasing number of chunks of information that vary in content.

        Specificity is so far up in the air, because I haven’t described what the system is, who is looking at it, or what it’s trying to accomplish.  Suffice it to say that if a human looks at the data stream and recognizes English words represented in ASCII for “Have a nice day”, that would be “abstract specificity” – assignment of meaning by an intelligent agent.  A French speaker might look at the same data stream and never see the pattern, so there would be no abstract assignment of meaning.  For the purpose of my illustration, I’m defining the specificity as a function to be performed by the program in achieving a correct data sequence (in the absence of anyone watching), such as hacking the password on a target computer.  I would argue that this is functional specificity, because it has a functional success target, and the program “succeeds” by achieving a particular outcome.  Someone may debate this, because the whole system is designed by intelligent computer programmers, but this is where definitions get murky if we aren’t careful.

        I think, based on my understanding of your past statements about things such as fractal algorithms, that you will argue there is a lot of CSI present in this system before the first bit ever gets transmitted, and therefore the total amount of information in the system (and/or its complexity, and/or specificity) hasn’t really increased.  If I’m correct about your position, I think this effectively nullifies the value of information theory, for which Claude Shannon is credited as being the father of both the theory and the information technology that has grown out of it. The whole point of information theory in this context is that data streams can contain more or less information and complexity, simply as a consequence of the length of the sequence.  As always, if anyone can inform me better about this field or my conclusions, I am open to correction.

        My point here is, the amount of information in the data communication stream and its complexity *do* increase continuously without the aid of intelligence inserting any CSI into the system during operation.  This can be calculated by looking at the bits of information alone, independent of and ignoring how the bits were produced, how complex or simple is the computer program, or whether they are produced in a deterministic fashion (e.g. pre-programmed lookup table) or in a truly random or pseudo-random manner.  All of those things must simply be ignored, because they don’t matter to the calculation of information “on the wire”, according to information theory as I understand it is practiced in this field.  To bring in a discussion of the complexity or simplicity of the generating program, or the design features of the semiconductor in the silicon wafer of the chip upon which the program is running, etc., is to completely miss the point of the information under consideration – it is simply about bits of data in a communication system.
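        As a concrete sketch of that “on the wire” calculation (my own illustration; the stream length, generator, and seed are arbitrary), the total Shannon information of a bit stream can be computed from the symbol frequencies of the bits alone, with nothing about the generating program entering the calculation:

```python
import math
import random
from collections import Counter

def shannon_bits(stream):
    """Total Shannon information of a symbol stream, in bits.

    Computed purely from the empirical symbol frequencies in the
    stream itself; how the symbols were produced is irrelevant.
    """
    n = len(stream)
    counts = Counter(stream)
    return sum(-c * math.log2(c / n) for c in counts.values())

random.seed(42)  # reproducible pseudo-random generator
stream = [random.randint(0, 1) for _ in range(1000)]
longer = stream + [random.randint(0, 1) for _ in range(1000)]

# A varying bit stream carries roughly one bit per symbol, so the
# total information grows continuously as the generator keeps running.
print(f"1000 bits: about {shannon_bits(stream):.0f} bits of information")
print(f"2000 bits: about {shannon_bits(longer):.0f} bits of information")
```

        Note that by this measure a stream of all 1’s carries zero information, while a maximally varied stream carries one bit per symbol, which matches the earlier qualifier that the bits must form a varying pattern.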

        So in summary, in this example:
          Information = bits of data, where total amount of information increases as they are produced
          Complexity = how easy or hard it is to represent the data set (this might be thought of as related to compressibility)
          Complexity increases due to the randomization algorithm, but might be considered to increase to an observer even if it were a pre-programmed set of data being output
          Specificity = ability to succeed at hacking into a password in the target system
         
         
        What I want to do next is to relate this system to two biological examples, where I believe all the same elements exist, and the conclusion is effectively the same.  Information and complexity in the system do increase naturally, and functional specificity is gained, without any insertion into the system by an intelligent agent. 
         
        The first example is the antibody example that Randy has expounded a number of times, so I won’t repeat it here.  My apologies to the biologists for taking liberties in simplifying it down to a couple of basic principles.
         
        Summary of antibody system:
          Information = the number of antibody-producing cells, with various configurations able to bind antigens.  Information increases continuously as long as the body produces pre-B cells that can differentiate into B cells (with or without variation between them)
          Complexity = the diversity of antibody configurations across the population, which increases due to the random rearrangement of gene segments as the cells mature
          Specificity = the ability to bind an antigen, and thus survive to produce more of their kind, with the positive result of producing immunity in the body
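        The transition from a uniform pre-B population to a diverse B-cell population can be mimicked with a short sketch. The segment names below (V1, D1, J1, etc.) are hypothetical toy stand-ins for the gene-segment pools that real B cells recombine; the point is only that random joining of constrained choices takes a zero-diversity population to a high-diversity one, measured here as Shannon entropy.

```python
import math
import random
from collections import Counter

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical gene-segment pools (toy stand-ins for V, D, and J libraries).
V = ["V1", "V2", "V3", "V4"]
D = ["D1", "D2", "D3"]
J = ["J1", "J2"]

def shannon_entropy(population):
    """Shannon entropy (in bits) of the genotype distribution in a population."""
    counts = Counter(population)
    n = len(population)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Pre-B cells: all identical, so the population carries zero diversity.
pre_b = ["V1-D1-J1"] * 1000
print(shannon_entropy(pre_b))  # 0.0

# Maturation: each cell randomly joins one segment from each pool – a
# constrained process with a random combination, as in the post above.
b_cells = ["-".join([random.choice(V), random.choice(D), random.choice(J)])
           for _ in range(1000)]
print(shannon_entropy(b_cells))  # several bits of population diversity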
         
        It seems that the antibody system does have the ability to increase information, complexity and specificity without the direct input of an intelligence creating or manipulating the cells to adapt to particular antigens.
         
         
         
        The second example I want to consider is Lenski’s research on E. coli.  I won’t discuss it in detail, but will just summarize it here.  Lenski allowed several lines of E. coli bacteria to grow for thousands of generations, capturing samples to track the changes in DNA over time and looking for examples of adaptation and natural selection.  Mutations of course occurred, changing the DNA sequences of the bacteria and affecting their growth rates.  In one case, the bacteria evolved a new feature: the ability to feed on the citrate in their medium in addition to glucose.  This represents a kind of functional specificity, where a new function developed that allowed the bacteria to exploit a resource the original population was unable to use.
         
        See the article here: http://scienceblogs.com/loom/2008/06/02/a_new_step_in_evolution.php
         
        This is another example where I believe that if there is any analogy between information systems and biological systems (as ID claims there is), there are some parallels with classic information theory.  Here I am considering the information to be the DNA sequences of the population of bacteria.  As more bacteria reproduce, more DNA is created by natural processes, with variations coming in due to natural mutation.
         
        Summary of E.coli example:
          Information = the sequences of base pairs in the DNA of the E. coli population.  Information increases continuously as long as the E. coli continue to reproduce.
          Complexity = the variation in DNA sequence among the cells.  If all bacteria reproduced with identical DNA, the total information would increase but the complexity across the population would not; since there are variations, the complexity of the data set increases.
          Specificity = the functional ability to reproduce more quickly, or to survive and adapt successfully in a non-ideal environment that was foreign to the original population
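        The complexity claim in this summary – a clonal population accumulating sequence diversity purely through copying errors – can also be sketched. This is a toy model of my own, with made-up parameters (a 100-base "genome," a 1% per-base error rate), not a model of Lenski's actual experiment; it simply shows distinct genotypes multiplying across generations with no outside input.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

BASES = "ACGT"

def mutate(seq, rate=0.01):
    """Copy a sequence, replacing each base with a randomly chosen one
    (possibly the same base) with probability `rate`."""
    return "".join(random.choice(BASES) if random.random() < rate else b
                   for b in seq)

def diversity(pop):
    """Number of distinct genotypes present in the population."""
    return len(set(pop))

ancestor = "ACGT" * 25            # a toy 100-base genome
population = [ancestor] * 200     # clonal starting population

print(diversity(population))      # 1: no complexity across the population

for generation in range(50):
    # Each cell reproduces with occasional copying errors; nothing inserts
    # information from outside, yet sequence diversity grows.
    population = [mutate(cell) for cell in population]

print(diversity(population))      # many distinct genotypes
```

A fuller sketch would add a fitness function favoring, say, a particular motif – the analogue of the citrate-feeding function – but even this minimal version makes the complexity point: variation across the population arises from the copying process alone.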
         
        It seems from the research that the bacteria can acquire new function and complexity in the absence of any outside input from intelligence, which directly results in the cells’ ability to adapt and prosper in a new environment.  In other words, an increase in CSI.
         
         
        In both cases, someone could argue that this is not done without intelligence – that God is providentially directing the mutation and adaptation of these cells to produce interesting and useful results.  This might be true for those holding one view of providence (i.e., that the universe would immediately roll up and disappear if God stopped interacting with it).  But absent any evidence distinguishing cells acting as independent physical entities from cells deriving 100% of their ability to act from the direct but hidden action of a Divine Manipulator, this is a matter of faith and not a matter that science can speak to.  Even if it were true, how could we experimentally bar God from interacting with the biology experiment in order to provide a control group against which to differentiate the results?
         
        Jon Tandy
         
         
         

  • Charles Austerberry

    Good work, Ide, Jon, and Randy.  Not much I can add at this point, except the following reference, which might be a good review of recent OOL research:
    http://arjournals.annualreviews.org/doi/abs/10.1146/annurev.biophys.050708.133753

  • Charles Austerberry

    Another recent review from Szostak’s group:
    http://www.ncbi.nlm.nih.gov/pubmed/20484387

  • Ide Trotter

    Thanks Chuck,
     
    The first article sounds like it is chipping away at some of the tough challenges that precede code production.  I’ve seen similar claims elsewhere that didn’t seem to get very far.  Unfortunately I’m not a subscriber.  Have they published any of their work on “the emergence of homochirality; the concentration and purification of prebiotic building blocks; and the ability of the first cells to assemble” somewhere that I might access it?  You will appreciate that it is interesting to me that they jump over the issue of producing code.  That’s one of the reasons I think it is so important to keep our eyes on that prize. 
     
    The second seems familiar but interesting although less central.  I feel certain that this sort of research will not be impeded if there is ever a concession that, for now, intelligent agency looks like it should be up for consideration.
     
    Ide
     

    • Charles Austerberry

      Jack Szostak co-authored a popular-level article in the Sept. 2009 issue of Scientific American, but it doesn’t deal with the code problem very much either.  David Bartel’s lab has done some interesting work on how random RNAs could gradually evolve the ability to act as RNA replicases and thus code for function, but I haven’t found a free-access source of that work yet.  I’ll keep looking.
      Cheers!
      Chuck

  • Charles Austerberry

    Here are a couple of articles cited by the Annu. Review of Biophysics article:

    RNA-Catalyzed RNA Polymerization: Accurate and General RNA-Templated Primer Extension Wendy K. Johnston, Peter J. Unrau, Michael S. Lawrence, Margaret E. Glasner, and David P. Bartel (18 May 2001)
    Science 292 (5520), 1319. [DOI: 10.1126/science.1060786]

    Self-Sustained Replication of an RNA Enzyme
    Tracey A. Lincoln and Gerald F. Joyce (27 February 2009)
    Science 323 (5918), 1229. [DOI: 10.1126/science.1167856]
    Two ribozymes synthesize each other from oligonucleotide substrates to give a self-replicating system.
     

  • Charles Austerberry

    Here’s an article that should be freely accessible:
    http://rnajournal.cshlp.org/content/15/5/743

  • Ide Trotter

    Chuck,
     
    Thanks for all the information.  I had seen the SA article but will review it again.  The next set seems to assume RNA, so I doubt they help.  I couldn’t get beyond the abstract of your last one, but in view of its coverage of “how a template-dependent RNA polymerase ribozyme emerged from short RNA oligomers obtained by random polymerization on mineral surfaces,” it is working considerably downstream of the prior challenges I’m hoping to see addressed with a minimum of environmental baggage. 
     
    What we need is an article that shows how mineral surfaces might selectively catalyze the synthesis of levorotatory short RNA oligomers.  I think solving that problem, or finding an alternative path that worked, would take us to the brink of the code issue.  What do you think?
     
    Ide
