Information, Intelligence, and the Origins of Life

by Randy Isaac
The term “information” has a connotation of knowledge in the midst of ignorance, an order that arises amid disorder. Information exists everywhere around us, and we spend our lives acquiring, storing, transmitting, and processing it. Yet it is hard for us to define or describe it, in part because the word can be used in so many different ways. In this article, four main categories of usage of the word “information” are explored, paying specific attention to its relationship to intelligence. Thermodynamics includes information on all possible physical microstates; capacity of information refers to the maximum number of physical states possible in a system corresponding to pre-established conventions; syntax refers to the particular physical state of that system at a point in time; semantics refers to the meaning, function, or significance of that physical state. Living systems, in particular, are complex information systems. A look at how living cells process information provides some clues, but not yet a solution, to the mystery of the origins of life.

PSCF 63, no. 4 (2011): 219–30

11 comments to Information, Intelligence, and the Origins of Life

  • David Roemer

    The human mind is structured like the scientific method. At the bottom is observing, which requires paying attention. At the level of intelligence, humans try to understand why things happen and the relationship between things. Extremely intelligent people invent new insights and theories. At the level of reflective judgment, humans marshal the evidence and decide whether a theory is true. Reflective judgment requires being rational. The highest level is deciding what to do with our bodies. This requires being responsible.
     
    Humans have a drive to know and understand everything. When animals have nothing to do they go to sleep, but humans will ask questions about doing, knowing, understanding, and observing itself: What is the relationship between myself and my body? What are images and other mental beings? What is the conscious knowledge of humans as opposed to the sense knowledge of animals?
     
    Liberal Christians, atheists, agnostics, and humanists have a blind spot about these types of questions and fail at the level of intelligence, not rationality. The following quote is from a textbook used by 65% of biology majors in the U.S.:
     
    “And certain properties of the human brain distinguish our species from all other animals. The human brain is, after all, the only known collection of matter that tries to understand itself. To most biologists, the brain and the mind are one and the same; understand how the brain is organized and how it works, and we’ll understand such mindful functions as abstract thought and feelings. Some philosophers are less comfortable with this mechanistic view of mind, finding Descartes’ concept of a mind-body duality more attractive.” (Biology, Neil Campbell, 4th edition, p. 776)
     
    Campbell only grasps two solutions to the mind-body problem: materialism and dualism. There is no evidence supporting these two solutions. He doesn’t understand the solution judged to be true by rational people: The mind-body problem is a mystery, and humans are embodied spirits.
     
    We know God exists because the success of the scientific method means the universe is intelligible. Since humans are finite beings, an infinite being must exist because finite beings need a cause. The origin of life, the increase in the complexity of life, and the origin of the universe are not mysteries. Calling these scientific questions mysteries implies that you don’t know a mystery when you see one.
     

    • Randy Isaac

      Dave,
        Thanks for your comment, but I’m not really sure why you think these aren’t mysteries. Maybe I’m just using a simplistic meaning of the word “mystery.” To me, it is just something we haven’t yet discovered. In that sense, these are certainly mysteries. I agree that recognizing a mystery is the first step to solving it.
        Randy

  • Richard Blinne

    Great job! The only thing I would add would be to discuss the role of noise with respect to channel capacity. A key question is: does your communication channel have enough capacity given a specific amount of noise? Issues that have bearing on the debate include:

    1. An appreciation that information is both deterministic (signal) and random (noise), and that which is which is something imposed from the outside, not intrinsic to the system.

    2. Using a noisy channel as an analogy for biological reproduction/evolution. In other words, does the reproductive system have sufficient capacity in the presence of DNA mutation? (Watts’ article answers this with a yes.) Fuz Rana’s book, The Cell’s Design, claims that the cell has the same kind of error-correcting code as computers do. On page 159, he claims that the DNA base pairs form a parity code. As a computer expert, I can state emphatically that this statement is in error. Nevertheless, the analogy of an error-correcting code is still useful. So, what do these “error correcting codes” look like? Off the top of my head there is at least one a priori example and one a posteriori one. The a priori one is having multiple copies of a gene. This is extremely inefficient and uses far more extra “bits” than a Hamming or parity code. Furthermore, it’s a crude kind of “fault tolerance” when compared to the computer equivalent. The a posteriori code is negative natural selection. This is not unlike a checksum, where a message is thrown away and resent if it fails. But, unlike a checksum, it only throws away those messages that radically affect survivability. Genetic drift is when the checksum is not applied. Positive selection is the anti-checksum, which picks up a corrupted message over and against a good copy. In other words, natural selection is a necessary component for the “error correction” in the cell to work.
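    A toy sketch of the two strategies mentioned above may make the comparison concrete. Everything in it is invented for illustration (the bit string, the error rate, the parity scheme); it is not a model of real cellular machinery. A repetition code stands in for the a priori “multiple copies of a gene” idea with its heavy overhead, and a parity check that discards failing blocks loosely stands in for selection as an accept/reject filter.

```python
# Illustrative comparison, not a model of real cellular machinery:
# (1) an a priori repetition code (send each bit three times, majority vote),
#     analogous to keeping multiple copies of a gene, and
# (2) an a posteriori parity check that simply discards a corrupted block,
#     loosely analogous to negative selection acting like a checksum.
import random

def repetition_decode(bits, p_flip, copies=3):
    """Send each bit `copies` times over a noisy channel and majority-vote."""
    decoded = []
    for b in bits:
        received = [b ^ (random.random() < p_flip) for _ in range(copies)]
        decoded.append(int(sum(received) > copies // 2))
    return decoded

def parity_filter(bits, p_flip):
    """Append an even-parity bit, send once, and reject the whole block if
    the parity check fails (return None rather than a corrected message)."""
    block = bits + [sum(bits) % 2]
    received = [b ^ (random.random() < p_flip) for b in block]
    return received[:-1] if sum(received) % 2 == 0 else None

random.seed(1)
message = [1, 0, 1, 1, 0, 0, 1, 0]
print("original:             ", message)
print("repetition (3x cost): ", repetition_decode(message, p_flip=0.05))
print("parity filter (1 bit):", parity_filter(message, p_flip=0.05))
# The repetition code pays a threefold overhead to *correct* errors;
# the parity filter pays one extra bit but can only *detect and discard*.
```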

    • Randy Isaac

      Great comment, Rich. Thank you. Yes, noise is a very interesting part of this issue. One of Shannon’s key contributions was to provide a way of dealing with noise.
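      As a minimal numerical sketch of that contribution, here is the simplest textbook case, a binary symmetric channel; the channel model and numbers are my illustrative choices, not anything drawn from the article or from Rich’s comment.

```python
# Sketch of Shannon's noisy-channel result for a binary symmetric channel:
# each bit is flipped independently with probability p, and the capacity
# C = 1 - H(p) bits per symbol bounds what can be transmitted reliably.
import math

def binary_entropy(p):
    """H(p) in bits, with H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p_flip):
    """Capacity of a binary symmetric channel with crossover probability p_flip."""
    return 1.0 - binary_entropy(p_flip)

for p in (0.0, 0.01, 0.1, 0.5):
    print(f"p = {p:4.2f}  ->  C = {bsc_capacity(p):.3f} bits/symbol")
# At p = 0.5 the capacity is zero (pure noise); small amounts of noise
# reduce the capacity but do not eliminate reliable communication.
```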
      1. Yes, noise is a critical feature of DNA information. On the one hand, there is enough redundancy, and there are enough non-critical portions of DNA, to provide a buffer against noise. On the other, noise (interpreted as random variation in sequence) is an important component of the source of new information.
      2. Thanks for your perspective as an expert in error correction. The biological error-correcting codes are all based on mismatches in chemical bonding of one type or another. Not a single one is of the type “this sequence should be ACT and not ATT…” even though all chemical bonds are “proper.” These are physical errors, while computer and communications error correction is based on abstract relationships.
      Randy

  • William Powers

    Randy:
    We’ve spoken about this before, but it’s been a while, and it seems that you have made some progress in how you address the issue of what you call “physical symbolism.”  There are a number of problems here.  But what you and others are trying to do is to distinguish two types of information.  In making this distinction you have defined information as having properties of capacity, syntax, and semantics. Capacity is a property of the medium of information, syntax the rules of the combination of symbols, and semantics the “meaning” of those symbols.  The medium is always physical.  Symbols point from a physical state to something else.  The something else can be either another physical state or something “abstract” (whatever that exactly is).
    According to your notion of “physical symbolism” the connection between the “written symbol” and the “message” or “meaning” is physically necessary, unlike “abstract symbolism” where the connection appears to be completely physically arbitrary.  So according to this view (one BTW supported by Haidt) an apple falling from a tree and hitting the ground “means” that an apple tree is nearby.  Or to use an example that I frequently used in our previous discussion, the meaning of a “…” is an “S” in a Morse Code decoding machine.  They are in this machine necessarily related.
    So in this view of “physical” information, what increases information?  Suppose that our Morse Code decoder, by some freak accident (say someone dropped it), started associating “—” with some character not in the English alphabet.  Would that be an increase of information?  Since all that this notion of information requires is that physical causality relates two events (you don’t even specify that the two events are reliably related, or in any sense uniquely related), then any two causally related events that perhaps have not occurred before constitute new information.  The eroding of a sand dune is new information.

    You can’t introduce the notion of types into this kind of information, since a type is an abstract entity.  So you couldn’t, it seems, say that the relationship between a hydrogenic electron decay and an alpha line provides information, because these speak of types and not particulars.  Is your notion of “physical symbolism” ultimately founded on unique particulars?  After all, it is humans that interpret types as even existing.  In that sense, if you hold that, for example, a hydrogenic alpha line is the symbol that means a hydrogenic electron has decayed, then you require abstractions.  It seems that in order for “physical symbolism” not to decay into the utter nonsense that everything that happens is a unique message (meaning, it seems, no information at all), you would have to use the notion of types.  And if this is so, then all information is abstract.  It seems to me that until and unless you and others can present a clear notion of what information is, all talk of it increasing or decreasing is nonsense.

    As I have said previously, Story’s description of antibodies, while very interesting, does not persuade me one iota that information is being increased.  In large part this is because he appears to neglect the information (whatever that exactly means) required to assemble the antibody system.  It is as if one were to examine a refrigerator and conclude that the Second Law of Thermodynamics must be false.  It is for this very same reason that to point out that cellular genetics is a biochemical machine, and that therefore the related states are physically necessarily related, is, in part, irrelevant.  A Morse Code decoder is a machine.  Is the entire informational content of the Morse Code machine defined by the necessarily physically related components of the machine?  I think not.  The same applies to cellular dynamics.

    If humans are “physical” devices, then there is only “physical symbolism.”  But even given only “physical symbolism” we are not done.  Also required are the resources to construct the physical devices.  We can speak of these as informational resources.  If this makes sense, then in considering whether information increases or decreases, one must consider the total informational content and not merely a subset.  Otherwise, we risk concluding that the Second Law is false.
    I hope this makes some sense.

    • Randy Isaac

      Bill, it’s good to hear from you again. I always enjoy your dialog. In response, let me share a few observations to see if they might help:
      Semantic information is not part of information theory (see the quote in my article from Shannon) and is not quantifiable with any precision. Thermodynamic, capacity, and syntax information are measured in bits and are real physical parameters. The closest one might get is to quantify the subset of syntax information that has a relevant semantic significance. For example, we could enumerate all the words of the English language that have a meaning. One can’t be very precise about the number of words in an English dictionary, but conceptually there is a subset of all possible combinations of the alphabet which constitutes the entries in a dictionary. But the meaning itself is not quantified.
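      A rough sketch of that comparison is below. The 10,000-word count is a made-up placeholder, not a real dictionary count; substitute an actual word list to get real numbers.

```python
# Compare the capacity of all possible letter strings of a given length with
# the (much smaller) subset that would appear in a dictionary. The dictionary
# count here is a hypothetical placeholder.
import math

word_length = 5
alphabet_size = 26
all_strings = alphabet_size ** word_length          # every possible syntax state
dictionary_words = 10_000                           # hypothetical count of valid 5-letter words

capacity_bits = math.log2(all_strings)              # bits to specify any such string
semantic_subset_bits = math.log2(dictionary_words)  # bits to specify a *meaningful* one

print(f"all {word_length}-letter strings: {all_strings:,}  (~{capacity_bits:.1f} bits)")
print(f"dictionary subset:      {dictionary_words:,}  (~{semantic_subset_bits:.1f} bits)")
# The subset can be counted, but the meanings of the words themselves are
# not quantified by either number.
```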

      As for Craig Story’s article on antibodies, it should be clarified that the new information is a new sequence of DNA, and therefore new in the syntax category of information, not in the number of bits. That new information (sequence) did not exist in any previous form and was constituted by a random process. Through the B cell production process, the information in the DNA sequence was not merely duplicated but duplicated with variation to produce a new information state, which was then selected for optimum binding to an antigen. The net result is new information.

      In some of your sentences, you seem to conflate various categories of information. If you call into question the Second Law of Thermodynamics, then you must indeed consider the thermodynamic category and include all possible microstates. However, that’s not what we usually talk about in bioinformation or the information we use in any practical context. All cells and organisms use energy from the environment in the reproduction process, and the Second Law is not violated in any way, whether the information or entropy of that cell is increased or decreased.

      And then we come back to your favorite topic, physical symbolism. I’m glad you think we’re making progress. I agree, it is a hard one to articulate, and I appreciate your helping me refine that attempt. I thought you gave an extraordinarily clear explanation of it. Thank you! But then I started to get lost. I’m not sure what you meant by “types.” If by “types” you mean abstract categories into which humans place phenomena in order to discuss and understand them, then, yes, these constitute semantic information and are examples of abstract symbolism. An electron that is produced in a decay process doesn’t care about types. It just appears. Whether any of this is an “increase in information” is not particularly relevant.

      Let’s talk a little more about what it means to “increase” or “decrease” information. Thermodynamically, we can expect information to scale in the same way as energy or entropy. There are interesting papers in the literature debating how information and energy and entropy are related. Generally, a deterministic closed system will not have a net change in information, energy, or entropy.

      But typically we talk about information as capacity and syntax. In this context, information can be added, like entropy, and information can be erased, which really means it is transferred to the environment in a non-usable form. A new syntax is really new information even without changing the number of bits. But why the focus on whether or not information increases? It really isn’t an issue outside the unsupported claim that information cannot be generated without an intelligent source.
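      One standard way to make the erasure point concrete is Landauer’s bound (not named above, but it is the usual reference point for linking erased information to heat dumped into the environment); the short sketch below simply evaluates it at room temperature.

```python
# Landauer's bound: erasing one bit of information dissipates at least
# k*T*ln(2) of energy into the environment, one precise sense in which
# erased information is "transferred to the environment" as heat.
import math

BOLTZMANN_K = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0                    # room temperature in kelvin (illustrative choice)
energy_per_bit = BOLTZMANN_K * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {energy_per_bit:.2e} J per erased bit")
```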

      Finally, you said “It seems to me that until and unless you and others can present a clear notion of what information is, all talk of it increasing or decreasing is nonsense.” I was hoping I had presented at least a clearer notion of what information is. For thermodynamic, capacity, and syntax categories, there are clear ways of quantifying information. Information theorists build on Shannon’s seminal work in this area. The difficulty comes when we try to extend that to semantic information, as is done with the concept of complex specified information. Alas, that is not quantifiable in the same way and I agree that all talk of it increasing or decreasing or being conserved is nonsense.

      Hope this helps,
      Randy

    • Randy Isaac

      P.S. I should add a word about physical vs. abstract symbolism. The distinction is not to be found in whether or not there is a physical translator between the code and the message, nor in the source of the resources for such a translator. Rather, it has to do with the criterion for the specificity or significance of the information. For example, in your example of the Morse code, a triple dot, “…”, could be mechanically translated into an “S”, or someone could mechanically translate it into an “R”. How would one distinguish which is “correct”? In the case of Morse code, it is a correspondence to an abstract coding table. It isn’t physical symbolism even though there is a physical translator.
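      A tiny sketch of that point: the “altered” table below is hypothetical, chosen only to show that nothing physical picks out the standard assignment.

```python
# The "correct" Morse decoding is fixed only by an agreed-upon table, not by
# physics. A machine wired to the second table works just as mechanically as
# one wired to the first; only the convention says which output is "right."
STANDARD = {"...": "S", "---": "O", ".-": "A"}
ALTERED  = {"...": "R", "---": "O", ".-": "A"}   # hypothetical mis-wired decoder

signal = ["...", "---", "..."]
print("standard table:", "".join(STANDARD[s] for s in signal))  # SOS
print("altered table: ", "".join(ALTERED[s] for s in signal))   # ROR
# Nothing physical distinguishes the two decodings; the criterion of
# correctness lives in the abstract coding convention.
```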
      In the case of DNA, codons are translated into biomolecules. Let’s assume a given codon could be translated into two different biomolecules. Which is “correct”? The criterion is a physical one: whichever enables the cell or organism to survive and reproduce. That is physical symbolism, not abstract.
      Does that help?
      Randy

  • William Powers

    Randy: It seems to me that your definition of physical syntactical information is problematic.  Unless I have this wrong, the greater the coherence of the information, the smaller the physical syntactic information.  Hence, when you take a random collection of rocks and make it into a Rolls Royce, you have decreased the syntactical information, since it now requires fewer bits to describe the state of the system.  To increase syntactical information you must increase the randomness of the system.  This is very counterintuitive.  What this entails is that specificity, the very aspect that Dembski associates most with intelligence, is according to your definition the mark of low information content, and, metaphorically, stupidity.  It seems that according to this understanding of syntactical information, instances of CSI are not complex, but simple.  Dembski is clearly more correct in this regard.

    If syntactical information is the wrong measure, it must be a measure that finds order in complexity.  What must be distinguished here is what is called disorganized complexity and organized complexity.  In order to distinguish the two you must take into account more than the number of possible states available to the system.  I don’t see how algorithmic complexity (Kolmogorov) helps in this regard.  I can easily see two systems with the same algorithmic complexity having vastly different organized complexities.  It is organized complexity that is related to information.

    It doesn’t seem that by simply randomly varying some genetic material, as in antibody generation, we are taking a given complexity and organizing it to a greater extent.  The antibody generation system as a whole is “brilliant,” but the antibodies generated have no more organization than was already extant in their creation.  According to this view, new species, or new antibody-generating systems, indicate increasing information.  According to this view, the generation of a living organism, since it represents the greater coordination of complex states, increases information.  It is not clear to me that, according to physical syntactical information as you’ve defined it, the inception of living organisms would increase information.  I must be getting this all wrong, because I can’t believe that your approach could be getting it this wrong.  So I’ll stop and let you show me where I’ve gone astray.

    • Randy Isaac

      Bill, you’re letting common sense get the better of you! Yes, information theory, like quantum theory, has a lot of features that are counterintuitive. As you correctly point out, there is a peculiar aspect of information in the syntax category when one considers Kolmogorov-Chaitin, or algorithmic, complexity. The highest information content, in that usage, is the most random sequence. The most orderly sequence has much lower information. But when it comes to usefulness, neither the most random (highest amount of information) nor the most orderly (lowest amount of information) is the most useful. Some theorists have tried to introduce other measures to indicate usefulness. Charles Bennett has proposed a concept of logical depth, for example, but it isn’t particularly easy or useful!
      In DNA, there are many repetitive sequences that aren’t particularly useful and also many nearly random sequences that aren’t so useful. Maximum usefulness is somewhere in between, and that’s where a lot of DNA is, like the coding regions.
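      A crude way to see the counterintuitive ordering is to use compressed size as a computable stand-in for algorithmic complexity. This is an analogy only; compression measures neither usefulness nor biological function, and the sequences below are invented.

```python
# Compressed size as a rough proxy for Kolmogorov-Chaitin complexity:
# a maximally repetitive string compresses to almost nothing, a random
# string hardly compresses at all, and a "useful" text sits in between.
import random
import zlib

random.seed(0)
length = 10_000
repetitive = "A" * length
structured = ("the quick brown fox jumps over the lazy dog " * 250)[:length]
random_seq = "".join(random.choice("ACGT") for _ in range(length))

for label, text in [("repetitive", repetitive),
                    ("structured", structured),
                    ("random", random_seq)]:
    compressed = len(zlib.compress(text.encode()))
    print(f"{label:10s}: {len(text)} chars -> {compressed} bytes compressed")
```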
      You seem to be focused on an “increase” in information. That differs widely depending on which definition of information and which category you are talking about. From a capacity point of view, DNA information increases when the length of the DNA increases. That’s fairly simple. From a syntactical point of view, there’s no particular value in talking about an “increase” in information. Generally, a change in syntax generates “new” information, not in the sense of increasing the number of bits of information but in changing the sequence to a new sequence. Some sequences turn out to be useful and some don’t. That’s where natural selection comes in. And that’s where the higher levels of organization get established and form the basis for the next step.
      Another way to think about it is that, through the process of reproduction with variation, organisms search DNA information space to find useful regions for survival. That process can lead to an increase, a decrease, or no change in the number of bits of information, but much more importantly, it can find new sequences that are more useful. That is “new” information, even though it might not be a technical “increase” in information in the engineering sense.
      In the antibody generation process, the number of nucleotides is generally constant. What is varying is the sequence. The system uses random variation to search for the information states that are the most useful in identifying and attaching to antigens. Finding the right one is what ID folks call “specificity.” Complexity just means that the sequence isn’t trivial or repetitive. What increases is the supply of the right sequences, not the number of bits.
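      A toy version of that search might look like the sketch below. The target sequence, alphabet, and match-count “fitness” are invented stand-ins (real binding affinity is nothing like this); the point is only that random variation filtered by selection finds useful sequences while the sequence length, and hence the number of bits, stays fixed.

```python
# Reproduction with variation plus selection as a search through sequence
# space. All parameters here are illustrative choices, not biology.
import random

random.seed(42)
ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"          # hypothetical "best-binding" sequence

def fitness(seq):
    """Count positions matching the target (a crude stand-in for affinity)."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    """Copy the sequence with occasional random substitutions."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    # keep the top ten and refill the population with mutated copies of them
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(40)]

print(f"generation {generation}: best sequence {best}, fitness {fitness(best)}")
```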
      Keep asking. These are not easy concepts to grasp.
      Randy

  • William Powers

    Randy: I’m not the one focusing on information increase.  It’s what everyone else wants to talk about.  It’s behind Dembski’s claim that it can’t increase by “natural” methods.  Story mentions it, as does Watts, and I think Freeland.  I would think that “new” information entails an increase in information.  If not, then why care that it’s new?  If all that’s happening is that we are rearranging wooden blocks on a table by “natural” means, I don’t think it would be very interesting.  The idea appears to be that natural forces unintelligently and randomly move “components” around.  Then it runs through the survivability filter or some “goodness of fit” criterion.  Intelligent search strategies do the same.  I have no immediate problem with this picture.

    The problem I have, and have always had, is that I don’t see how talking about information is helpful.  What Dembski and many others want to get at is the probability, given “natural” processes, of some biological state being achieved.  In order to answer this question, not only must you list the available processes, but you must also have a handle on the density of states that survive.  If the distance between survivable states is large, the density decreases and the probability of moving from one survivable state to another drops significantly.  This approach is at least in principle straightforward.  The information approach appears hopelessly confused.

    What ID is after is a mark of intelligence.  I think that they roughly have this right.  Unfortunately, it seems, their understanding of the problem cannot be sufficiently quantified.  As a result, you and others introduce quantifiable measures, but, as far as I can tell, they are poor measures of what, at least, Dembski is after.  As a result, ID folks and the folks in this issue of PSCF are not talking the same language.  Freeland attempts to describe, very crudely, a natural method for how life might arise.  He suggests that we began with a simpler alphabet, and that the relationship between symbol and meaning is not as physically arbitrary as it is in human languages.  He doesn’t really answer any of the detailed questions required, but at least he’s talking about the right subject.  Well, I’d better stop.  Thanks for putting up with my questions.

  • Randy Isaac

    Very good summary, Bill. I would only quibble with your statement that “As a result, you and others introduce quantifiable measures…” No, Shannon and subsequent information theorists are the ones who really began to talk about quantifying information and figuring out how to measure it and understand it, long before ID came around. Leslie Orgel in the 1970s and then Dembski in 1997 tried to extend it to “complex specified” information, leaving the rigor of information theory and moving to semantics. As you point out, it doesn’t work.
    I agree with you that talking about information in the context of origin of life is not very helpful. The ID community has won widespread applause from the Christian community by claiming that “DNA information can only be generated by an intelligent source.” My main point is No! My article tries to clarify that there are many different uses of the term information and most discussions of information confuse various uses.
    As for “increasing” information, there is a valid qualitative perception that there is more useful information in a set of functional configurations than in a set of non-functional configurations, even if, in a rigorous engineering sense, the quantitative measures of information indicate there may be less information. The point is the relevance of the information. That’s really what ID folks are trying to get at with CSI. So “new” information doesn’t need to be a measurable increase in bits, just a sequence that works. Rearranging the “wooden blocks,” as you put it, is indeed interesting when it makes the difference between something that survives and works and something that doesn’t. That concept is indeed very valuable in understanding how evolution proceeds.
    I don’t think anything in my paper is new; it is just an attempt at summarizing what has been known in the information theory world for a long time. Perhaps the only aspect that is slightly new is the idea that the mark of intelligence in information is the connection of abstract relationships. I contend that none of the ID advocates has offered a connection between intelligence and information, relying solely on something like “in our experience, information is habitually associated with intelligence.” But why should this be true? I suggest that the ability to carry out abstract reasoning has long been a hallmark of intelligence. It follows, therefore, that information characterized by abstract symbolism would also be necessarily related to intelligence. This is not a unique relationship, since information not connected to abstract concepts can also be related to intelligence, but if you are looking for a mark of intelligence, abstraction is one possibility. Such abstraction is not evident as a signature in the cell.
    Randy

 

