Recent Peer-Reviewed ID Paper

One of the things I have been critical of concerning the Intelligent Design movement is its lack of peer-reviewed papers. It was with great interest, therefore, that I saw an announcement of a new peer-reviewed paper co-authored by William Dembski: Montañez G, Ewert W, Dembski WA, Marks II RJ (2010) A vivisection of the ev computer organism: Identifying sources of active information. BIO-Complexity 2010(3):1-6. Before I get to the substance of the paper, I was disappointed in the “forum” for it. It’s a brand-new journal that has three research papers and one so-called critical review.

Even though I use search in an engineering context for my job, I recognized none of the terms used. Sure enough, all of the terms of art were coined by Dembski et al, and if you do a Google search on “information conservation” or “software oracle” you find the rest of the computer science and engineering community completely ignoring this supposedly groundbreaking work in the area of search. (If you think search is not important, think Google.) The footnotes for each of these terms refer to papers written by Dembski. And when you look at who cited those papers, it’s the same people! It’s this kind of self-plagiarism that got Ward Churchill fired from the University of Colorado. Churchill, though, didn’t footnote his self-citations, and I seriously doubt most people would do what I did: run down the footnotes and search for the (often fewer than 10) citations in Google Scholar. The reason many of us are so strident about peer review with ID is that we get the sense they are far too insular and don’t interact enough with the wider academic community. The result is the same paper published over and over again. This paper is no exception.

The thesis of the paper embodies a false dichotomy.

Algorithms that conduct even moderately sized searches require external assistance to be successful. When such an algorithm produces apparently impressive results, conservation of information [1-3], including the No Free Lunch Theorem [4-11], dictates we are faced with one of two possibilities. The first is that the search problem under consideration is not as difficult as it first appears. At times, the problems solved by seemingly complex algorithms can appear extremely difficult whereas a closer inspection reveals the search is relatively simple and, from a random query or exhaustive search perspective, has a larger probability of success than implicitly supposed. The other alternative, for difficult problems, is that active information has been inserted in the search program to increase the chances of success. 1

1 Formally, active information is defined as -log2(p/q) where p is the probability of success for an unassisted search and q is the probability of success for an assisted search. Informally, it is the amount of information added to the search that improves the probability of success over the baseline search.

Search is affected by more than the two factors of a trivial search or so-called active information. Computational complexity gives us a way to judge search algorithms: in essence, the computational complexity of an algorithm tells us how fast it runs, expressed in so-called Big O notation. A linear search is O(n) and a binary search is O(log n). But that’s not the end of it. Search algorithms also need to be matched to a search space: O(log n) algorithms need the search space to be sorted or hashed. Think how long your Google search would take if it literally had to scan every web page out there. In short, the search algorithm needs to fit the search space in order to be efficient.

Thus, the question in front of us is whether genetic algorithms can efficiently search for optimal protein binding. That’s what ev looked at. It showed that, yes, genetic algorithms do search such spaces efficiently, and furthermore it disproved Dembski’s so-called law of conservation of information, since the very paper cited by Dembski et al showed that information increased. But Dembski et al claim that the information increase came from some “help” given to the algorithm. Let’s see if that really is the case and go ad fontes (to the source code).

Here’s how the algorithm chooses between bugs.

/* begin module ev.score */
Static double score(w, width, creature, x, e)
wmatrix *w;
long width;
uchar *creature;
long x;
everything *e;
/* Evaluate the w matrix placed on the genome at position x.
Note: x+1 is the first base touched by the w matrix. That is, x is
the zero of the w, and the first base of the w is at x+1.
Note that the score function does not imply how the recognition
occurs. That is done by the recognize function
. */
{
  double realval = 0.0; /* the value of the site evaluated by w */
  long integerval; /* the value of the site evaluated by w */
  long l; /* index to w matrix */

  if (!e->noise) {
    integerval = 0;
    for (l = 0; l < width; l++)  /* "<...>" spans were eaten by the HTML rendering; loop restored */
      integerval += w->matrix[P_getbits_UB(creature, l + x, 1, 3) - (long)a][l];
    return (double)integerval;
  }
  e->noise = false;
  /* ... noise branch elided in the original post ... */
  return realval;
}

/* end module ev.score version = 2.50; (@ of ev 1988 oct 6 */

/* begin module ev.recognize */
Static boolean recognize(w, width, creature, x, e)
wmatrix *w;
long width;
uchar *creature;
long x;
everything *e;
/* Evaluate the w matrix placed on the genome at position x.
Note: x+1 is the first base touched by the w matrix. That is, x is
the zero of the w, and the first base of the w is at x+1.

The test for recognition is that the score value is greater than or
equal to the threshold. */
{
  return (score(w, width, creature, x, e) >= w->threshold);
}

/* end module ev.recognize version = 2.50; (@ of ev 1988 oct 6 */

/* begin module ev.evaluate */
Static Void evaluate(e, bug)
everything *e;
long bug;
/* evaluate the particular bug and put the number of
mistakes into the rank array. */
{
  wmatrix w; /* the recognizer; translation of the gene */
  long m = 0; /* the current number of mistakes counted */
  long x; /* position on the genome */
  boolean recognized; /* a site was recognized at x */
  genometype *WITH;
  long FORLIM;

  /*writeln(output,'in evaluate, bug: ',bug:1);*/
  /* translate the recognizer gene into the w matrix */
  translate(e, bug, 1L, &w);
  /* scan the genome */
  WITH = &e->p.creature[bug-1];
  FORLIM = e->precalc.endscang;
  for (x = 0; x < FORLIM; x++) {  /* "<...>" spans were eaten by the HTML rendering; loop restored */
    recognized = recognize(&w, e->p.width, WITH->genome, x, e);
    if (P_getbits_UB(e->p.sitelocations, x - 1, 0, 3) != recognized)
      m++; /* a missed site or a false recognition is one mistake */
  }
  /* record the results */
  e->p.rank[bug-1][(long)bugid] = bug; /* identify the bug */
  e->p.rank[bug-1][(long)mistakes] = m;
  /*;writeln(output, 'bug ',bug:1, ' makes ',m:1,' mistakes');*/
}
Note how the perceptron is not part of the algorithm itself. It’s just a fitness function for the (fairly pedestrian) genetic algorithm, which counts binding mistakes and performs selection: bugs with fewer binding mistakes survive more often than those with more. There was a special rule for ties that Dembski objected to in 2001; there are parameters in the code that turn that rule off. No “information” was added to the algorithm itself. Rather, when Dembski et al claim that the so-called Hamming oracle and the perceptron add information, they are in effect saying that the fitness function adds information, not the genetic algorithm used by ev.

The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search. Indeed, other algorithms are shown to mine active information more efficiently from the knowledge sources provided by ev [13].

The question that should be asked is whether the fitness function used in ev accurately models protein binding, and whether protein binding affects evolutionary fitness. If it does, then Dembski et al have just proven that natural selection adds active information, along with disproving the conservation of information!

I’ll close with a description of the perceptron so that the reader can determine whether or not protein binding really is accurately modeled. Again, note the date when Dembski commented on the code (2001) versus the date of the paper we are discussing here (2010). Perhaps the biologists here could comment on that more relevant question.

A population of evolving creatures is modelled. Each creature
consists of a genome made of the 4 bases. All creatures have a
certain number of binding sites, and the recognizer for the sites
is encoded by a gene in each genome. The genomes are completely
random at first. The recognizer of each creature is translated
from the gene form to a perceptron-like weight matrix. This matrix
is scanned across the genome. The number of mistakes made is
counted. There are two kinds of mistake:

  1. how many times the recognizer misses a real site;
  2. how many times a non-site is detected by the recognizer.

These are weighted equally. (If they were weighted differently it
should affect the rate but not the final product of the model.)
All creatures are ranked by their number of mistakes. The half of
the population with the most mistakes dies; the other half
reproduces to take over the empty positions. Then all creatures
are mutated and the process repeats.

The integer weights of the recognizer are stored as base-4 numbers
in two's complement notation: a=00, c=01, g=10, t=11. If 'bases
per integer' were 3, then aaa encodes 0, acg is 6, etc. txx and
gxx (where x is any base) are negative numbers; ttt is -1.

The threshold for recognition of a site is encoded in the genome
just after the individual weights. It is encoded by one integer.


SPECIAL RULE: if the bugs have the same number of mistakes, reproduction
(by replacement) does not take place. This ensures that the quicksort
algorithm does not affect who takes over the population, and it also
preserves the diversity of the population. [1988 October 26] Without
this, the population is quickly taken over and evolution is extremely slow.

[2001 June 6] In response to William Dembski's objection (see below for a
link) that this rule is inserting information into the results, a new
parameter is now available to turn off this rule.

What does this rule mean? It means that in a duel it is possible for two
bugs to have a tie. This is not an unreasonable result, and it frequently
happens in the natural world! What does removing the rule mean? It means
that the bugs will duel to the death, even if it is an arbitrary death!
Clearly this cannot affect the overall evolution, but it might slow things
down. Indeed it is interesting because it is often the case that in
fights the combatants are not killed.

7 comments to Recent Peer-Reviewed ID Paper

  • Wayne Dawson

    At the risk of being misrepresented as badly as Martin Gaskell was (even by the NYT), I’ll comment anyway.  
    The journal has a diverse list of individuals on the editorial board, including at least one individual who is an ASA member.  The journal states in its purpose that
    BIO-Complexity is a peer-reviewed scientific journal with a unique goal. It aims to be the leading forum for testing the scientific merit of the claim that intelligent design (ID) is a credible explanation for life. Because questions having to do with the role and origin of information in living systems are at the heart of the scientific controversy over ID, these topics—viewed from all angles and perspectives—are central to the journal’s scope.
    However, since one of the editors and the “critical review” are from the DI, and Dembski is also on the board and has a published paper in it, it does compel some suspicion….  Supposedly, you can submit critiques of articles to this journal, but they reserve the right to remove them.
    This does seem to stem from the blessings and the curses of open access.  
    Some legitimate science doesn’t get heard because of politics in academia.  Planck wrote of frustrations of this sort when he was trying to get fellow scientists to accept the principle of irreversibility in the second law: “the only way to get revolutionary advances in science accepted is to wait for all the old scientists to die”. He said this only to become a problem himself in his old age, when he vigorously opposed quantum mechanics.[1]  So the wisdom gained from bitter experience did not ultimately make Planck a better man.  We cannot change human beings, but maybe open access journals offer a way to help free up new ideas before the old scientists die.  So this is definitely a very good side of open access.
    However, just as open access allows some people to be heard who should be heard, it also allows everyone to be heard, because it is easy enough to build a server like this.  All articles, regardless of where they are published, come with a caveat emptor.  At least independent peer review does offer a rough safety net for assessing quality, but it depends on the dedication of the reviewers to do honest and constructive reviewing of another author’s work.  The sheer volume of manuscripts makes it difficult to get that kind of quality peer review, even in the best journals.  Less familiar journals will suffer more.
    What strikes me as a bit lacking in BIO-Complexity at present is that some of the key members of the editorial board are also the authors, and there is no question that they are presenting controversial topics.  The editor has some options here, but the aim is to reach the science community.  If this sparks some discussion, maybe that is good, but I would still think it better if they find at least some way to get their ideas published in a journal where the views of people who do not agree with them can be expressed, should they choose to do so.  The value of third-party agreement is that it takes some of the bias off of the topic, even if the third party cannot say for sure what the real truth is, or even if the third party doesn’t agree.  By opening up their own journal, they invite the interpretation that they plan to open their own form of an ICR or AiG press.  After all, AiG is also purportedly “peer reviewed”. An insular environment does little to advance ideas.
    By Grace we proceed,

    [1] Müller, I. (2007). A History of Thermodynamics: The Doctrine of Energy and Entropy. Springer-Verlag, Berlin, pp. 209-210.



  • Richard Blinne

    Wayne, I view the fact that they have this journal as a net positive over the previous behavior of only publishing “results” in popular books, videos, and speaking engagements. It allowed me to show the specific flaws in their thinking.
    Let me give a further example. Take the following paper in the December 28, 2010 PNAS.
    “But this argument in effect assumes an “in series” rather than a more correct “in parallel” evolutionary process. If a superior gene for (say) eye function has become fixed in a population, it is not thrown out when a superior gene for (say) liver function becomes fixed. Evolution is an “in parallel” process, with beneficial mutations at one gene locus being retained after they become fixed in a population while beneficial mutations at other loci become fixed. In fact this statement is essentially the principle of natural selection.”
    The K^L complexity of evolutionary search (instead of K log L!) is implied in Edge of Evolution by assuming the random variables are statistically independent, which is how Behe wrongly concludes that you couldn’t evolve drug resistance in the Plasmodium parasite. For the paper in question this implicit bad argument is made explicit in Equation 12. That’s the joy of peer-reviewed papers: they need to be more explicit than popular works for a lay audience.
    This is why, even though their process is far from perfect, I’m glad the paper passed the peer review process. When we consider the peer review process, the universe of peers is not merely the referees and the editors; it is also the larger scientific community. Looking at their peer review process, it looks mostly OK, with the relatively minor quibbles of potentially censoring negative commentary (which people can publish in other journals) and misrepresenting the expertise of the experts. This can result in a “leaky” gatekeeper. But so what? For over a decade and a half the ID community whined that their lack of scientific productivity was because they were “expelled”. Now they don’t have that excuse, and if they keep publishing the same paper over and over and don’t have any new ideas, it’s just evidence that the movement has run out of intellectual gas rather than being a group of oppressed geniuses.

  • William Powers

    It is not clear to me that the approach presented here is of much use for the ID program.
    What is required for ID is to be able to infer from an object that it is a “meaningful object,” that is, that the object is intentional: it is about something else.  This is so because design is an intentional exercise.
    Instead, what discussions such as this involve is trying to test the limits of what a non-intentional, meaningless material world is capable of.
    So they end up using definitions of “information” that are for the most part irrelevant.  What they are trying to detect is meaning; instead they employ measures of the capacity of a physical system to encode meaning.
    bill powers

  • William Powers

    I know this is a stale thread, but the topic is forever of interest.  So perhaps someone will pick it up.
    Just one short question:
    In using some quantitative measure of “information” like Shannon’s how do we distinguish the complexity of a protein molecule from that of the language of protein transcription embedded in the DNA and auxiliary cellular components?  It seems to me that these are qualitatively different and it is not clear to me that anything like Shannon information can capture it.  This question, BTW, appears independent of the origin of that “complexity” or “information.”  We could, for example, imagine asking the kind of “information” or “complexity” required to create the program ev and that would be distinct from the complexity of its derivatives.
    bill powers

  • William Powers

    Having just scanned the paper in question, I have an additional comment.
    Dembski et al. raise the issue that I have introduced elsewhere on this forum, viz., when considering the question of “information” or, for that matter, any supposedly conserved quantity one needs to consider a closed system.
    How does one go about considering whether ev or any such surrogate evidences an increase of “information”?  In what sense can we consider ev to be a closed system?  We say that we can construct a physical system and then energetically isolate it.  In this sense we say we have a closed system and various physical principles are expected to abide and regulate the subsequent development of the system (e.g., increase of entropy).  There was undoubtedly great intelligence employed in the initial establishment of the system.  Nonetheless, we say that once the system is isolated that “intelligence” is no longer active.  We may have intelligently arranged all the molecules of a gas in some narrow band (non-maxwellian) of velocities, or located them in some small corner of the isolated containment vessel.  Nonetheless, we expect that according to physical principles the position of those molecules will tend toward an average homogeneous location and the velocities approach a maxwellian distribution.  Indeed, we would say that the entropy of the system increased and the information decreased over time.
    Why can we not do the same thing with ev?  Ev is an encoded procedure in a physical computer.  It begins with the random establishment of the target.  After that the system is isolated until the program stops (dies).  For the sake of completeness we will ensure that the closed system is battery operated.  Somehow we have to now try to discuss the total “information” in this closed system and determine whether this “information” has changed over the time of isolation.  If physics has anything to say about this, it has decreased.
    But what everyone is interested in is not the total “information,” but the “information” of a subset of the closed system.  If we are to make sense of this subset, we will have to be able to somehow establish a closed system.  Now perhaps there are different kinds of closed system.  Can we distinguish, for example, a physically closed system from an “informationally” closed system?  The former kind of closed system we believe to have some understanding of.  What about the latter?  Is the “information” increase or decrease associated with ev a physical entity?
    Ev is a search algorithm encoded on a physical device.  Once we begin considering that there can be a difference between physical isolation and information isolation, we enter into unfamiliar territory.  We know what it means (we think) to isolate a system physically.  What exactly do we mean and how do we ensure that a system is informationally closed?
    I whisper something into the ear of someone and enclose him in a hermetically sealed box along with a dozen other individuals.  Being physically isolated we presume that no additional information can enter or leave the box.  Will or can the information increase?  It seems clear that inasmuch as we think that information on the planet has increased over the past 1000 years, the answer is yes.  Are humans special or would this have been true had there been no humans?  Or is the system not informationally closed (e.g., God inputs information to the system).
    I simply do not know.  Using the term “information” is misleading and perhaps best abandoned.  Speaking of complexity we can imagine a quantitative measure.  I don’t know whether there is any connection between information and complexity.  Indeed, this is why Dembski rejected complexity alone in attempting a theory of ID.  Complexity has to always be examined within the context of resources.  If those resources are “intelligent,” complexity is not surprising.  Where it is dumb, it is surprising.
    So can we think of ev as in some sense informationally closed?  Great intelligence was whispered into the system in the creation of the algorithm and in that of the physical computer.  Then we “informationally” close the lid and see what happens.  Physically, we know (or believe) that the entropy of the system (including battery) has increased and the “information” decreased.  What we want to say is that the informationally closed system consists of the algorithm itself.  When we find the target we say the information has increased.
    Well, I give up for now.  This is all too vague for me.  I don’t know how to proceed in a clear manner such that I know unambiguously what to conclude.  It isn’t just the difficulty in determining or measuring results.  It’s that I don’t have a good sense of what a clear methodology looks like.  I do with physically closed systems.  We can arrive at some quantitative measure of computational complexity.  Is it the kind of complexity we want to model?  Genetic evolution is optimal in the sense of survivability of the phenotype.  So there ought to be some analogy with a search algorithm.  The system of the actual genetic evolution is at least the planet earth and the sun.  Assuming the system closed in every sense (i.e., no god or external intelligence), is the arrival of life an increase of “information” of the entire system?  In the system considered, I know its entropy has increased and its information decreased.  It seems that we keep wanting to consider subsystems that are not closed; and everyone knows that entropy can decrease in non-closed systems (e.g., refrigerators).
    What I think we are after is some kind of abstract notion of information, even a nonphysical notion of information.  But I can’t be sure.  I guess at this point I need help, or more dedicated thought.
    bill powers

  • William Powers

    Still talking to myself.  Here goes.
    I’ve been pursuing my goal to start all over and take a fresh (for me) look at what information is.
    The more I think about it, the less I like the fixation with probability that has become associated with information.
    Probability is useful, not so much to describe or prescribe information, but as a marker of information.
    If we take probability as too closely associated with information, it seems we must conclude that information is a state or event that is unlikely for a given resource.  It is unlikely that wind and rain would create the depiction of four presidents at Mount Rushmore.  But it is far more likely for humans to have done so.  Information, then, just like probability, becomes contextualized to a given resource.  If information is too closely aligned with probability, then information would fail to increase only for necessary events or states (e.g., 2+2=4), and would in our contingent world be continuously increasing, contrary to our understanding of physical decay (entropy).
    Is information increased when I use a calculator to determine the square root of 56.32?  We’d say no, even though it seems that information has been gained by me.  The square root algorithm encoded in the calculator contains in some sense all square roots.  It might just be a look up table with interpolation.  When I use the calculator, I simply find what is in some sense already known.  So it would seem that total information of the me-calculator system has not changed.
    If this is so for a simple calculator, what of a complex computer program?  What if the code is a Monte Carlo (random) photon transport code?  Could we say the same?  The information is already contained within the algorithm, even while it may take thousands of processors and hundreds of CPU hours to determine an answer.  So information is not increased by the running of the code.
    Pursuing this notion of finding, what if we imagine the entire naturalistic (e.g., godless, undirected) universe as a large computer carrying out its “calculations” according to some established algorithm?  Can it create any new information?  According to what has been previously said, it can find information, but not create it.  Finding is not creating.  So then were “nature” to stumble upon the creation of a life, it would only be finding what was already there, and not the creator of new information.  If this is so, it would also be true that any naturalistic process, inasmuch as it is only finding, could create no new information, whether that be the mutation of genes that ultimately creates a human species or the creation of Parkinson’s.
    But perhaps this goes too far too fast.  Let’s back up a little, holding on to our intuitions, but trying simultaneously to be general.  One of the difficulties with defining or getting our hands around information is that we all think we know it when we see it, but it is commonly and almost always associated with human information.  If we want to generalize the concept and be able to speak about nonhuman information, we will want to keep our intuitions in sight, but try to speak in less than intentional language.
    To increase information, we must already have information.  Increasing information is a building up of previous information.  For information to increase there must be a way of retaining a previous state so that somehow it can progress to another state.  We might think of this as something like constructing a building.  We need lower floors and other parts of the structure to endure through time in order that future states can build upon it.
    Having said this, immediately a confusion arises.  Is the information the building, a physical structure, or some kind of memory or know-how of how to construct the building?  That is, is it the content of a “thought” or the “thought” itself, the instance of a plan or the plan itself?  The problem can be demonstrated by considering the invention of the laser.  If we presume that information increased when the design of the laser was developed, what of the very first laser constructed?  Did information increase then too?  I would presume so.  Engineering of the laser required much study, trial and error progress toward the target.  But what of the millionth laser?  Did information increase when the millionth laser was produced?  Is information, then, continually increasing as more and more cars, lasers, and TVs roll off the assembly line?  I think not.  But if not, then information, and especially the increase of information, has to have something to do with “invention” or something novel.
    If this is so, then information has to not only build upon something that already exists, but that thing that is being built upon is something like a plan.  It is plans that are being built upon.  These plans, in order to be built upon, must be “operational.”  That is, they must “work.”  So we need a way for operational plans to be saved and later modified if information is to exist and to increase.
    As given this is still not adequate, for modification of an operational plan does not necessarily entail the increase of a quality or quantity; it might just as well decrease.  So we need a measure of increase or decrease.  One measure might be something like complexity.  But in pursuing this measure, we will try to steer clear of probabilistic notions.  Complexity is evidenced by the interconnection of parts into a whole.  All these words are important.  Parts and interconnections are not sufficient.  A plasma gas has countless numbers of interconnections between ionic parts, but might readily be considered less complex than Behe’s infamous mousetrap.
    There are numerous mine (mind) traps here.  What do we mean by parts?  Are the parts of the mousetrap the four or so conceptual parts or the countless atoms?  Does it matter whether they are iron atoms or some hydrocarbon molecules?  It seems that in speaking of parts we can’t escape the notion of a plan, even something abstract.  When we speak of a “whole,” we mean some kind of integration of parts, evidenced by their interconnection.  However we define the parts of the whole, we must mean that should a part be removed or modified the whole is affected.  We might here distinguish essential parts, such that the removal or modification of the part significantly influences the character of the whole, from nonessential parts.  We might likewise distinguish essential modifications of parts from nonessential ones.  Changing the clapper of the mousetrap from metal to hard plastic would likely be a nonessential modification of a part.  Yet we might think that a modification of a part that leaves the whole unaffected is, nonetheless, a novel modification and hence an increase of information.
    It is not difficult to imagine a procedure where we (at least conceptually) modify what we mean by parts and what we mean by essential and nonessential modifications to try to determine what exactly the parts of a whole are.  There may still be unresolved difficulties in determining what a whole is or what characteristics of the whole are to count as the significant parts of the whole.
    It is easy to see how probability on the presumption of some resource is of use here.  The probability of constructing a mousetrap by naturalistic resources (law and chance) seems far less likely than the construction of a plasma resonance or instability.  The complexity associated with the interconnection of many parts into a whole is less likely on this measure also.  Probability gets at the notion of a context of resources.
    One last point (for the record): all that I’ve said here ought to make some happy.  It seems that what we mean by life is at least one mechanism of storing information and increasing it.  Information is stored physically in genetics.  This is a kind of plan that is being stored.  This plan can be and is modified, meaning that it at least has the potential of increasing information.  It would be difficult not to agree that, over the course of what is presumed to be evolutionary history, living organisms have become more complex in the sense that I’ve tried to define it.  So it would seem that the information contained within living organisms has increased over time.
    What still might be in dispute is how that information was increased.  Are random mutational processes adequate to account for this increase?  Or must some kind of “intelligence” be involved?  This is where probability and resources become an issue.  Here we’ve investigated more the concept of information and its increase than the character of the resources required for its creation and increase.
    On the other hand, viewing the naturalistic universe as a whole, it is a closed system.  The universe then, by definition, cannot be modified relative to its algorithms and plans.  Indeed, this is what we mean by a universe.  We distinguish one universe from another according to its “algorithms” and “plans.”  A particular universe defines what is possible.  In this sense, unless a universe can be modified, its total information cannot increase; and since it does appear to decay (perhaps what we expect of a closed entity), its information would remain constant.  Locally, information might increase, e.g., with the creation of life, but this is more of a finding of what was already possible.  It is the same with entropy, which locally can and does decrease (e.g., refrigeration).
    Well, maybe some archaeologist will see this.  But by then I will be long since gone and unable to enjoy the benefits of the discovery.  At least I’ve recorded what I think is some progress in my thinking on the subject.
    bill powers

  • Scot Sutherland

    I tend to read through this site a fair bit, but usually refrain from posting unless the discussion is familiar.  I found this discussion so intriguing I will step outside my comfort zone a bit.  The burning question that drives my own work is, “How do people learn?”  Currently I am working on a way to model (measure) the learning process in students of algebra.  When this field of study uses multivariate modeling, it is often called learning analytics.  I have hit upon a different approach that I think may be relevant to this discussion.

    People seem to develop persistent brain activation patterns as a relationship between the sensory and motor neurons, which appear to activate simultaneously rather than as a “stimulus-response.”  It appears that these patterns are context specific, meaning similar patterns occur when the person focuses on a similar context, whether the person is remembering or experiencing a similar situation.  Contextualizing interactive processes means ignoring irrelevant sensory-motor interactions and constraining attention to a specific few, which may or may not exist in the environment at a particular time.  This leads to a more interactional view of learning and thinking as manipulating constraints of interaction within a context, which may have some bearing on the discussion of “information” above.  Without getting into specifics, naming these systems of interaction seems to “objectify” them, optimizing their cognitive usefulness.

    As it relates to this discussion, it seems more likely now that people store ways of interacting with a situation rather than “information” in a traditional sense.  Some of these ways of interacting are constrained by unchangeable conditions such as gravity, polarity, chemical bonds, etc.  Others seem to be a matter of choice (a controversial notion in our field).  Generalizing or abstraction can be thought of as a coordination of systems of interaction across contexts.  Learning seems to be a massively parallel process in which disparate ways of interacting become increasingly aligned and coordinated.  As students learn algebra, they gradually align their own transformation of symbols with the formal system of interactions that defines algebra.

    To use a simpler example, recognizing and using a pendulum means perceiving the parts, how they are constrained to each other, and being able to interact in useful, meaningful ways, such as telling time or measuring earth rotation or acceleration.  Put another way, the pendulum is actually the relations between the parts, whether a physical pendulum is present or not.  The pendulum sprang into existence when the parts were related together (constrained and animated) in a particular way.  This can be either a purposeful action, like the building of a pendulum, or a random one, like an unconscious pilot in a parachute caught by a tree.

    So it would seem to me that modeling a system, writing an algorithm that aligns with the system, and learning to interact with the system do not say much about whether the system was purposely built or whether it happened with a degree of chance (or providence), as has been the case with many important inventions.  Perhaps it is much like saying, “The baseball was caught.”  The statement lacks the contextual constraints necessary to know if a fence or an outfielder did the catching.  The algorithm described here seems to me to be a statement something like this.
