I just finished reading a paper that is both fantastically interesting, and a little disheartening. It is disheartening only because I thought that my senior paper for seminary was going to be freshly novel, but it turns out that someone else already made 90% of my arguments 11 years ago, and actually made most of them better than I could. The paper is "Algorithmic Information Theory, Free Will, and the Turing Test" by Douglas Robertson (Complexity 4(3): 25-34).
Here are some quotes from the paper (note that AIT is "Algorithmic Information Theory"):
"...free will appears to create new information in precisely the manner that is forbidden to mathematics and to computers by AIT" (26)
"There would be no reason to prosecute a criminal, discipline a child, or applaud a work of genius if free will did not exist. As Kant put it: "There is no 'ought' without a 'can'" (26)
"A 'free will' whose decisions are determined by a random coin toss is just as illusory as one that may appear to exist in a deterministic universe" (26)
"AIT appears to forbid free will not just in a Newtonian universe, or in a quantum mechanical universe, but in every universe that can be modeled with any mathematical theory whatsoever. AIT forbids free will to mathematics itself, and to any process that is accurately modeled by mathematics, because AIT shows that formal mathematics lacks the ability to create new information." (26)
"The fundamental secret of inspired mathematical practice lies in knowing what information should be destroyed or discarded, and what rearrangement of available information will prove to be most useful." (30)
"The very phrase "to make a decision" strongly suggests that the information is created on the spot." (31)
"If...we do accept this definition of free will, then an immediate corollary from AIT is that no combination of computer hardware and software can exercise free will, because no computer can create information." (31)
"There is perhaps no clearer demonstration of the ability of free will to create new information than the fact that mathematicians are able to devise/invent/discover new axioms for mathematics. This is the one thing that a computer cannot do. The new axioms produced by mathematicians contain new information, and they cannot be derived from other axioms. If they could, they would be theorems rather than axioms." (31)
"it has long been accepted that free will is impossible in a Newtonian deterministic universe. But now the impossibility is seen to carry over into all possible physical theories, not just Newtonian theories, because it is inherent in mathematics itself. According to AIT, no physical model (i.e. no mathematical model for a physical process) can allow the creation of information. In other words, free will is impossible in any physical universe whose behavior can be accurately modeled by a computer simulation." (33)
"All theory is against the freedom of the will; all experience for it" (33 citing Samuel Johnson)
"The idea that all physical processes can be modeled is an assumption that is so deeply ingrained in physics that it is seldom questioned, seldom even noticed." (33)
"It may be that physicists since the time of Newton have been exercising a careful (but generally unconscious) selection process. Physicists may have studied only those physical processes that happen to be susceptible of mathematical modeling. This would immediately explain the reason behind Eugene Wigner's famous remark about the "unreasonable effectiveness of mathematics." But if it should turn out that many physical processes are not susceptible to mathematical modeling, just as nearly all numbers cannot be expressed with any mathematical formula, this would represent as deep a shock to physics as Gödel's theorem was to mathematics, and one that is far greater than the shock that resulted from the loss of Newtonian determinism when quantum mechanics was developed or the loss of Euclidean geometry when general relativity was discovered." (34)
"The possibility that phenomena exist that cannot be modeled with mathematics may throw an interesting light on Weinberg's famous comment: "The more the universe seems comprehensible, the more it seems pointless." It might turn out that only that portion of the universe that happens to be comprehensible is also pointless" (34)
"The existence of free will and the associated ability of mathematicians to devise new axioms strongly suggest that the ability of both physics and mathematics to model the physical universe may be more sharply limited than anyone has believed since the time of Newton." (34)
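Robertson's central claim - that a deterministic program cannot output more algorithmic information than it contains - can be illustrated crudely in code. The sketch below is my own, not the paper's; it uses zlib's compressed size as a rough stand-in for algorithmic information (an upper bound on it, not the real thing):

```python
import os
import zlib

# One million bytes generated by a tiny deterministic rule...
generated = b"0123456789" * 100_000

# ...and one million bytes of raw randomness.
random_bytes = os.urandom(1_000_000)

# Compressed size is a crude upper bound on algorithmic information.
c_generated = len(zlib.compress(generated, 9))
c_random = len(zlib.compress(random_bytes, 9))

# The generated stream collapses to a few kilobytes -- on the order of
# the rule that produced it -- while the random stream stays essentially
# incompressible.
print(c_generated, c_random)
```

The generated stream, however long, never carries more information than the short rule behind it; the random stream has no shorter description than itself.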
Mokele-Mbembe is the name of a sauropod-like cryptid (a suggested but not confirmed living creature) said to live in the jungles of the Congo. Bill Gibbons has spent much of his life researching this creature and has gone on many expeditions in search of it. Now he has a new book out recounting those expeditions and encounters.
A friend of mine sent me a link to a Creation museum that is currently being built.
About two years ago I got my first major paper published in the Creation Research Society Quarterly. You can read a summary of it here. Today, I received permission to post the paper publicly on my website! Yay! If you are interested, check out the paper:
Let me know what you think!
As mentioned earlier, Dr. Faulk and Stephen Meyer are having a real debate on the merits of ID. Dr. Faulk gave what he believes to be a disconfirming example of ID's arguments. My response, which is also in the comments, is below. In addition, some earlier comments of mine on randomness might be interesting, including:
Dr. Faulk -
I take issue with your description of the processes of antibody diversity generation. While there is some statistical randomness at play, I would say that the specificity in the process is huge. The antibody gene is segregated into matchable parts (V, D, J, and C), which are rearranged in specified ways, with the rearrangements managed by the RSS signals between the parts. In addition, after recombination, the cell can generate the additional DNA needed to make the final protein fold better (Sanz and Capra, PNAS 84(4)).
During the mutation that follows, the mutations are focused on that gene only, and, within the gene, on the complementarity-determining regions, skipping the C region (which attaches to the B cell, and which it would therefore be counterproductive to mutate) (Papavasiliou and Schatz, Cell 109(2, supplement 1)).
To call this orchestration "random" just because it isn't 100% deterministic is an abuse of the term. It has never been the position of ID that a process which _utilizes_ randomness cannot find a solution within a search space. Rather, the position is that this only works when the search space has already been narrowed by information. This process works only because, rather than mutations happening at random throughout the cell's DNA, they happen within a well-defined scope - a scope that _matches_ the environmental problem the cell is trying to solve.
This is the focus of Dembski's work on Active Information, starting with his book No Free Lunch and continuing in the papers he has written with Dr. Marks.
If the process were not so constrained, it would not work. This is the result not only of the work on the immune system, but also of the work on bacteria - when you disable the genes in the SOS pathway, this evolution does not occur. As for the evolutionary definition of randomness: "one of the central tenets of Darwinian evolution is that mutations are random with respect to the needs of the organism in coping with its environment" (Templeton, Population Genetics and Microevolutionary Theory, 2006, p. 3).
Well, your example is actually one that contradicts this statement - the gene which is modified is not random with respect to the needs of the organism, and neither is the area of the gene which is mutated. This excludes well over 99.99% of the genome. How a mutation directed to the correct 0.01% of the genome is considered "random with respect to the needs of the organism" just because, within that 0.01%, there is some variability, is completely beyond me.
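To see why confining mutations to a tiny fraction of the genome matters, here is a toy simulation of my own construction - the numbers are made up for illustration and are not data from the immunology papers cited above:

```python
import random

# Toy model: a "genome" of 10,000 positions, of which only the first 10
# matter for the problem at hand.  A trial succeeds if 10 mutations
# manage to land on all 10 relevant positions.
random.seed(1)
GENOME_LEN = 10_000
WINDOW = set(range(10))  # the locus where change would actually help

def trial(positions):
    """Scatter 10 mutations over `positions`; success means every
    relevant site was hit at least once."""
    hits = {random.choice(positions) for _ in range(10)}
    return hits == WINDOW

N = 100_000
focused = sum(trial(range(10)) for _ in range(N))          # confined to the locus
scattered = sum(trial(range(GENOME_LEN)) for _ in range(N))  # anywhere in the genome

print(focused, scattered)
```

With these numbers, the focused search succeeds a nonzero number of times in 100,000 tries, while the genome-wide search essentially never does - the constraint on where mutations land is doing the heavy lifting.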
I am not a huge fan of BioLogos, primarily because they don't seem to have a firm grasp of the philosophical and theological impact of their ideas. However, they did something remarkable today in the world of evolutionary advocates - they let Stephen Meyer have his full say, unedited, on their blog, to respond to criticisms they have made of him.
If only the rest of the scientific establishment would be so intellectually honest. Most ID'ers are roundly criticized in academic journals without being given adequate space for response (usually none at all). It is good to know that fellow Christians, even those whom we vehemently disagree with, can operate honestly with each other even when the rest of society frowns upon such activity.
The gospel is basically a love story. In fact, in many ways, the scriptures directly make the comparison. In Revelation, the Church is the bride of Christ. God is continually seeking after his people. But, unfortunately, our hearts are often hard.
As a Creationist, my heart is God's. Paul Garner's heart is probably more in the right place than my own, when he says,
I’m not at all interested in trying “to defend a literal reading of Genesis with scientific principles”. Rather, I accept the truth of creation by faith and investigate the world scientifically with that presuppositional basis. That’s not to say that I’m uninterested in evidence, just that my aim in scientific investigation is not “defending Genesis” or “proving Scripture”. I don’t think the Bible needs that kind of help. (see here in the comments)
The goal for Paul is to live faithfully. And part of that living faithfully is finding new things in God's creation, and using God's scripture as a starting point for all things.
This used to be a common theme in science. Newton, for instance, was foremost a theologian. Kepler wanted to enter the ministry but could not; even so, he said, "God is the beginning and end of scientific research and striving".
Sadly, this thought has been lost. But God, now as before, still seeks us. I have been realizing more and more that God often seeks scientists - and, I believe, leads them to discoveries which show God's handiwork - whether or not the scientists are willing to follow.
A case in point is Francis Crick. Crick said that the reason he went into science was to disprove religion. But God cared for Crick too much. God, I believe, helped Crick in his scientific search, to see His creation for what it is. And that's when Crick discovered DNA. DNA was certainly a stumbling block for Crick's atheism. The implications of Crick's discovery, while possibly not immediately obvious to the rest of us, were in fact immediately obvious to Crick - the naturalistic story of its origin just doesn't measure up. In his book, Life Itself, Crick says, "An honest man, armed with all the knowledge available to us now, could only state that in some sense, the origin of life appears at the moment to be almost a miracle." So what was Crick's solution? Sadly, it was not to turn to God in any way. Instead, he proposed that aliens did it. And, thus, he was freed from looking too deeply into the evidence that God had shown him.
Hoyle's view seems to be similar to Crick's, though I have not done as much research into it. Hoyle, though he did not believe in God, remarked that "the Universe is a put-up job!" Meaning, there is just too much intricacy to the design of the laws of the universe. Likewise, for life, Hoyle thought that we were created by another intelligence within the universe.
Stephen Jay Gould had this happen to him, too. Here is how he describes the arthropods of the Burgess Shale:
Imagine an organism built of a hundred basic features, with twenty possible forms per feature. The grab bag contains a hundred compartments, with twenty different tokens in each. To make a new Burgess creature, the Great Token-Stringer takes one token at random from each compartment and strings them all together. Voilà, the creature works - and you have nearly as many successful experiments as a musical scale can build catchy tunes. The world has not operated this way since Burgess times.
Now, obviously, Gould put an evolutionary spin on this. Nonetheless, it appears that Gould was looking on in amazement at the way the creatures of the Burgess were put together. It is almost as if he was seeing creation but was not quite able to admit it.
What I'm saying is that, when you look at how the major discoveries in science were experienced by the ones who made them, what you find is God searching for the scientists. That doesn't often come out in media reports, or even in reviews of scientific literature or textbooks. Few textbooks report that Crick viewed DNA as a stumbling block to the origin of life, or that Gould saw in the Burgess Shale a menagerie of form too wonderful to describe, and a pattern unlike anything thought to evolve today.
But, I believe, God is seeking each one of us. I only hope that the current generation of scientists is faithful to the call that God gives, and humble enough to reject the pride which plagued the science of the 19th and 20th centuries.
And, hopefully, my own work will reflect faithfulness instead of pride. But I'm still working on that.
Snelling has just released a new two-volume set called Earth's Catastrophic Past. I've been waiting for this for well over a year, and I know some have known about it for even longer. Anyway, it is now available for order from ICR!
This book is supposed to contain a summation of the incredible amounts of geological research that has been done by creation geologists over the past half-century, and provide a framework for understanding geology in the context of the Biblical flood.
Thanks to Paul Garner for letting us know!
January 6th was a big day for me. Huge, actually. I had finally gotten a paper published (titled "Irreducible Complexity and Relative Irreducible Complexity: Foundations and Applications") that I had been working on for the last 3.5 years. You might be wondering why it took me so long to come out and announce it. The reason is simple - most people who have read it misunderstood what I was trying to say. Therefore, I wanted to take the time to explain the points I am trying to get at in the paper, and a little personal history on how it came about.
In the 2006 BSG meeting, I was a complete unknown. I knew absolutely no one from before the meeting. I had come to give a presentation on some interesting overlaps between computer metaprogramming and the way that antibody genes rearrange themselves.
At the meeting was a reporter, who was doing a book on the intersection between fundamentalism and science. Between meetings the reporter would ask various people questions about their beliefs, and what followed was usually a stimulating conversation. At one of these conversations, we were talking about evolution, and I (perhaps naively) stated, “it is impossible for new information to be generated by evolution”. One of the other creationists in the conversation quickly retorted, saying that I was absolutely wrong, and strongly implying that I was foolish for even saying so. Those of you who are in creation research could probably guess who this was.
I thought this was odd (both the idea that natural selection might offer a way to create information by itself, and that I was so roughly thrown under the bus by a fellow creationist), and I pondered it in the back of my mind for a while. Did I know for sure that information could not be created? How did I know this? The idea that information could not be created by natural selection seemed correct to the engineer in me, but was it really?
Another event hit upon the same question. I had recently purchased a copy of the 1984 Oxford Union debate between Arthur Anderson, A. E. Wilder-Smith, John Maynard Smith, and Richard Dawkins. I thought that the creation side (Anderson and Wilder-Smith) was well-argued, save for one detail. Dawkins (I think) came up with an example of information being created (I forget what it was), and Wilder-Smith (I think) argued that this was not an instance of information being created, but rather of already-existing information being merely "shuffled around". Dawkins retorted that since there were only four nucleotides available, all information in the genome arose through "shuffling around" of genetic information. While my intuition sided with Wilder-Smith, I realized that his argument hinged on a separation between the creation of information and the rearrangement of information.
Again, I intuitively agreed with Wilder-Smith's assessment. Similar things happen in the rearrangement of antibody gene parts for the creation of novel antibodies: about 90% of the work comes from shuffling well-defined pieces of information around, and about 10% from a series of focused rounds of mutation. It seemed that most of the information already existed and was merely being shuffled around. The problem, however, was that there was no objective way of deciding between something being created and something being rearranged.
This reminded me of an old friend of mine from my days at Wolfram Research, Chris Knight. Chris was a computer genius. I met him when I had just graduated from college and he was just turning 16 -- and he was light-years ahead of me in computer programming skills. One idea that fascinated Chris (as it fascinates a lot of computer science types) is that there is no clean separation between computer code and computer data. Computer data can be treated like code, and computer code can be represented as data for manipulation. There are even some languages, such as Scheme and LISP, which elevate such intertwining of code and data into an art form.
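A small sketch of the code-as-data idea: Scheme and LISP make this native, but Python exposes it through the standard `ast` module. This example is my own illustration, not something from Chris or the paper:

```python
import ast

# A piece of code, held as plain data (a string)...
source = "x * x + 1"

# ...parsed into a data structure we can inspect and rewrite.
tree = ast.parse(source, mode="eval")

class DoubleConstants(ast.NodeTransformer):
    """Manipulate the code-as-data: double every numeric constant."""
    def visit_Constant(self, node):
        if isinstance(node.value, (int, float)):
            return ast.Constant(node.value * 2)
        return node

new_tree = ast.fix_missing_locations(DoubleConstants().visit(tree))

# The manipulated data becomes executable code again: "x * x + 2".
result = eval(compile(new_tree, "<rewritten>", "eval"), {"x": 3})
print(result)  # 3*3 + 2 = 11
```

The same object is data on one line (a tree being rewritten) and code on the next (something being executed) - exactly the intertwining described above.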
The distinction between "information shuffling" and "information creation" is similar to the distinction between "code" and "data". No one doubts that data can be created without intelligence, but can code? The ease with which code and data can be intertwined leads a person to conclude that there is no such divide. And yet the ability to apply this usefully seems limited to cases where the code/data is very simple. Still, I could not yet see the dividing line between the two.
When I was at Wolfram Research, Stephen Wolfram was just about to finish his magnum opus, A New Kind of Science (hereafter referred to as NKS). At the time, I was uninterested. His work in cellular automata did not seem to have any impact on my life and work, so I basically ignored NKS for the time I worked there. But later, I picked up a copy from the library. And in those pages, I found the answer to my dilemma.
For the gory details, you can see my paper. But here’s what I want you to think about. Imagine a computer program. There is a difference between these levels of customizability within a program:

1. A program whose options can simply be switched on or off.
2. A program whose behavior is tuned by adjustable parameters.
3. A program that can be extended with arbitrary user-supplied code.
What you see here is not just a rising level of configurability, but also a rising level of intelligence required to make the configurations. #1 can be configured by an idiot, #2 by trial and error, and #3 only by a professional (or, more exactly, there are aspects of the configuration that only a professional can use).
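The three levels described above can be sketched in a few lines of Python (my own toy example, not from the paper):

```python
# Level 1: a fixed on/off switch -- anyone can flip it.
def greet(shout=False):
    msg = "hello"
    return msg.upper() if shout else msg

# Level 2: tunable parameters -- usable by trial and error.
def repeat_greet(times=3, separator=", "):
    return separator.join(["hello"] * times)

# Level 3: user-supplied code -- open-ended, but it takes a programmer.
def custom_greet(transform):
    return transform("hello")

print(greet(shout=True))                # HELLO
print(repeat_greet(times=2))            # hello, hello
print(custom_greet(lambda s: s[::-1]))  # olleh
```

Only level 3 is open-ended: the caller can make `custom_greet` do anything computable, but only by writing code - which is exactly the jump in required intelligence the list describes.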
It turns out that there is a type of computing system called a Universal computer. What makes a Universal computer interesting is that, given the right program, it can compute any computable function - it is open-ended. Here’s the other interesting insight: Universal computation only arises in chaotic environments. What makes this so interesting is that a chaotic system, in general, does not give gradual output changes in response to gradual changes in its programming. Therefore, for something to “evolve” on a Universal computer, it would by necessity have to make several leaps at once. To get smooth output changes, which natural selection requires, one would have to propose a coordinated system of code changes - something not allowed in naturalistic scenarios, because the changes would have to be coordinated to match the desired gradualistic output.
This provides an answer to my question about information creation versus information shuffling. If the input domain is open-ended - that is, flexible enough to hold the solution to any problem given the right code - then the solution cannot be reached by gradual configurational changes alone, because that is the nature of how Universal computers behave. Now, you can design programming systems where gradual changes to the code lead to gradual changes in the output, and which as such would be open to natural selection. However, these are not Universal computers, and therefore their potential range of results is not open-ended.
Thus, the dichotomy is not necessarily between code and data, but between parameterized programming systems and open-ended programming systems. If the system is parameterized, then change only happens within the specified parameters. There may be genuinely new things happening there, but the parameters for their occurrence were specified in advance. Thus, you can see that the common ID and Creationist claim that “information cannot be created by natural selection” is both true and false. It is true that open-ended information cannot be created, but if the solution domain is appropriately parameterized, then information can arise within those parameters.
Obviously, this is not a rigorous proof, and if you want a more nuanced version, you should refer to the paper. But nonetheless, I think that this should give you an idea of the questions that I was attempting to answer and the approach that I took.
There’s a lot more to say about this, but I think this is enough for now. See the paper for a lot more information, as well as numerous applications. I especially liked how this related to the evolutionary software Avida in section 3.4. In any case, this background should help you make sense of what the paper is about and where I am going with it, should you decide to read it.