Researching Creation

August 13, 2008

Biological Change / The Case for and Against Genetic Entropy

NOTE - this has been updated slightly.

At the ICC, the GENE team introduced a new genetics simulation tool which has an astonishing number of input variables - it seems to be really well thought out.  The results of the simulations performed with it support Sanford's "genetic entropy" thesis - that the genetic load that comes with mutation far outweighs any beneficial mutations that may occur.

So the question is, why does evolution go downward?  Sanford's conclusions are:

  • Near-neutral mutations cannot be removed through selection
  • Mutation rate is way too high
  • Nonheritable noise (I think this was random death instead of selective death)
  • Trait linkage and Muller's ratchet (it was unclear how Muller's ratchet was modelled in the software) 
  • Fixation of deleterious mutations (note to non-geneticists - fixation means that the mutation is in every member of the population - it does not mean that the mutation has been fixed from being deleterious).

Sanford said that selection breaks down at the 0.001 fitness reduction level.  At this point, selection is simply unable to remove the trait from the population.  It is also harder to select away recessive genes.

Their software can also simulate a population bottleneck.  It showed that this actually leads to a dramatic loss of fitness because of the rapid fixation of genetic damage.  So, in the fitness graphs, a genetic bottleneck causes a temporary transition from a downhill slope to a downhill cliff.  When the population recovers, it is back on the downhill slope.
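To make the threshold idea concrete, here is a toy Wright-Fisher-style sketch of my own (this is NOT the GENE team's software, just an illustration under my own simplifying assumptions): each individual is a count of slightly deleterious mutations, parents are picked in proportion to fitness, and each birth may add a mutation.  With a selection coefficient of 0.001 the load climbs almost unchecked, while at 0.1 selection holds it near mutation-selection balance.

```python
import random

def simulate(pop_size=200, generations=300, mut_prob=0.3,
             effect=0.001, bottleneck_at=None, bottleneck_size=10,
             seed=42):
    """Toy mutation-accumulation model.  Each individual is just a
    count of slightly deleterious mutations, each cutting fitness by
    `effect`.  Parents are chosen in proportion to fitness, and each
    birth adds one new mutation with probability `mut_prob`."""
    rng = random.Random(seed)
    pop = [0] * pop_size
    mean_load = []
    for gen in range(generations):
        if gen == bottleneck_at:
            pop = rng.sample(pop, bottleneck_size)  # population crash
        fitness = [max(1e-9, 1.0 - effect * m) for m in pop]
        parents = rng.choices(pop, weights=fitness, k=pop_size)
        pop = [m + (rng.random() < mut_prob) for m in parents]
        mean_load.append(sum(pop) / pop_size)
    return mean_load

near_neutral = simulate(effect=0.001)  # below the selection threshold
strongly_del = simulate(effect=0.1)    # selection can see these
print(round(near_neutral[-1], 1), round(strongly_del[-1], 1))
```

The `bottleneck_at` parameter crashes the population to a handful of survivors at a given generation, which is the quickest way to see the "downhill cliff" effect on the mean load.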

Sanford said that this data should cause the following shifts in evolutionary thinking:

  • Switching from Primary Axiom (mutation + selection = increasing fitness over time) to Genetic Entropy concepts
  • Switching from "forward evolution" to "degeneration"
  • Transition from the idea of "creative selection" to focusing almost entirely on "stabilizing selection"
  • Switching from thinking about extinction as a past event to thinking about extinction as a future event

Now, personally, I wonder if some of this relies too much on Darwinian assumptions.  Here are some basic issues:

  • The mutation rate is not really known, it is inferred
  • I have a paper coming out in the Fall CRSQ on "future fitness" explaining some possible reasons nearly-neutral mutations may be occurring for the benefit of the population
  • The rate of beneficial mutations has not been empirically calculated.  One reason evolutionists assume it is so high is that otherwise they have to think about directed mutagenesis
  • Directed mutagenesis throws kinks into most of the areas, and their effects are not modeled by the software

Okay, so I probably need to explain directed mutagenesis a little more and why it impacts their model.

Historically, evolutionary theory has treated mutations as happening essentially haphazardly - that is, without any particular constraint (except perhaps incidental ones) on which DNA bases get modified.  However, what if that assumption is wrong?  There are several possibilities:

  • Instead of mutations compounding, many of them are instead cycling.
  • This means that the fixation of a deleterious gene isn't necessarily permanent.  If it is cycling, it may later be replaced again with the original version.
  • If the mutations are cycling, then perhaps they are all beneficial in some circumstance.  Therefore, perhaps at the population bottleneck 50% (just to pull a random number out of the hat) of them switch from deleterious to beneficial. 
This seems, to me, to undermine the concept of Genetic Entropy, or at least greatly reduce its effect.  I don't doubt that some mutations are haphazard; I just wonder whether much or most of what we see are instead directed/cyclical mutations.  There certainly is a lot of evidence for this (we've covered this and its mechanisms elsewhere on the blog before - if I get bored I might update this post with links - some examples include SSRs, transposable elements, and Barbara Wright's single-stranded conformation transcriptional mechanism).  Genetic Entropy probably is happening, but probably nowhere near the rate that Sanford's team supposes.
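As a back-of-the-envelope illustration of why cycling matters (my own toy model, not anything presented at the ICC): if each mutated site has some chance of reverting to the original base, the mutational load levels off at an equilibrium instead of compounding toward saturation.

```python
import random

def final_load(back_rate, sites=1000, gens=500, forward=0.01, seed=1):
    """Track binary sites: 0 = original base, 1 = mutated.  Forward
    mutations arrive at `forward` per site per generation; `back_rate`
    is the chance a mutated site cycles back to the original."""
    rng = random.Random(seed)
    state = [0] * sites
    for _ in range(gens):
        for i in range(sites):
            if state[i] == 0:
                if rng.random() < forward:
                    state[i] = 1
            elif rng.random() < back_rate:
                state[i] = 0
    return sum(state)

compounding = final_load(back_rate=0.0)   # damage only accumulates
cycling = final_load(back_rate=0.01)      # mutations can revert
print(compounding, cycling)
```

With no back-mutation nearly every site ends up mutated; with a back rate equal to the forward rate the load settles near 50% of sites, so fixation is no longer a one-way street.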

UPDATE: I just looked at my notes again and realized I left off another source of genetic variation which is at odds with Sanford's model - virus-borne mutations.  These would affect the population in a way which is not birth-dependent.  If viruses are designed to transfer new genes, reconstituted genes, or any other sort of beneficial change, then this could be a source of beneficial mutations which don't rely on the birthing bottleneck.  For those of you who think that viruses are bad, the fact is that we simply don't know what most of them do.  The only ones whose function we really know are the disease-causing ones, and they are a minority.

Anyway, it would really be interesting to see the application reworked with some of these concepts in mind.


August 09, 2008

Geology / Determining the Limits of Baramins through Paleontology


In addition to the ICC conference, the BSG conference was held this week, so I'll be covering some of the talks given there throughout the week.

Kurt Wise gave an excellent presentation on one possible criterion for determining the extent to which mammalian (especially ark-based) baramins (a "baramin" is a Genesis created kind - NOT equivalent to species) have diversified - the Post-Flood Continuity Criterion (PFCC).  It is primarily based on data from the monograph Classification of Mammals, which has abundant data on the geological layers in which different mammalian organisms are found.

Kurt argued that the fossil record at the genus level for mammals is essentially complete, with some specific exceptions.  Therefore, a given baramin should have a fossil record that goes all the way back to the flood; for this study, Kurt used the K/T boundary as the flood/post-flood boundary.  Kurt argued from the data of the fossil record that, although we normally equate the baramin with the family as a first-pass approximation, many of these families do not go all the way back to the flood.  However, if we extend this to the superfamily level, we often find extinct families which do go back to the flood.  Kurt argued that these extinct organisms were the ancestors of the modern families of organisms.

Another interesting thing Kurt noted was that in order to go all the way back to the flood, you had to essentially connect all of Ruminantia to be part of the same baramin.  What's even more interesting is that a friend had previously speculated just this very same thing to me on biological grounds (specifically, the uniqueness of the Ruminant stomach, and the fact that most of the other traits can be had by stretching/deforming basic morphologies).  This would mean that cattle, deer, sheep, goats, and giraffes are all in the same baramin.

In any case, it is important (as Kurt emphasized) to keep in mind that baraminology is holistic, not reductionistic, and therefore no one criterion should be adopted for establishing baraminic continuity and discontinuity.  Some of the issues with the post-flood continuity criterion are:

  • Assumes completeness of the mammalian fossil record at the genus level
  • Assumes a definite flood/post-flood boundary (a friend of mine pointed out that this might not necessarily be a single dividing line)
  • Assumes that the flood/post-flood boundary is the same at all locations in the world
I'm sure there are others, but those are the ones I can think of.

August 06, 2008

General / LiveBlogging ICC - Pt 4


Too tired to post tonight, but it turns out my friend Salvador Cordova has posted some notes from the conference.  If you want his information you can see it here and here.

August 05, 2008

General / LiveBlogging ICC - Pt 3


Instead of giving a step-by-step overview of Steve Austin's presentation, I'll just give the highlights.  My fingers couldn't keep up with the typing last night, and I didn't follow all of the geology concepts.

The subject was the "mudflow revolution".  The primary scientific paper he referenced was On the Accumulation of Mud, which is summarized at both CMI and Creation-Evolution Headlines.

He also referenced the "Bedform Stability Diagram" which shows how different-sized particles behave underwater in different currents.  A form of the diagram is viewable here (on page 8 & 9), though it is much more complicated than the one he showed on his slide.

So, his points were:

  • planar laminae (i.e. layers which are horizontal) in mudrocks are traditionally thought to be the result of particles falling vertically out of water over long periods of time 
  • It was thought that silt and clay-sized particles would always form cross-lamination
  • It turns out that silt and clay-sized particles in water actually join together to form floccules
  • Floccules have the settling properties of sand-sized particles in currents
  • In the bedform stability diagram, at low currents ripples and dunes are produced, but at high currents planar accumulation can be shown
  • These laminae can accumulate at a rate of several millimeters per second
  • Therefore, the massive amounts of planar laminae in mudrocks can be explained through fast-moving currents.  This has the potential to change the interpretation of 70% of the rock record.

He also told an amusing story: as a graduate student, in order to get the laminae concept to work in the lab, they had to take mud, clean it, bleach it, and treat it with special chemicals before they could get it to form laminae by the traditionally conceived method :)

He also made several points about Kelvin–Helmholtz instability which went by too fast for me to understand.

He also suggested that Creationists should set up a racetrack flume for experimentations on this model.

August 04, 2008

General / LiveBlogging ICC - Pt 2


[Again, my own comments are in brackets.  The first talk I went to was John Hartnett's "Starlight, Time, and the New Physics."  I didn't get to use my computer during it because it was standing room only.  It doesn't matter too much, because most of it was beyond my current understanding.  Apparently he is coming out with a new book with Carmeli, but he didn't say the title.  Basically, what he was saying was that Newtonian physics occurred in 3 dimensions, Einstein added a 4th (time), and Carmeli added a 5th (velocity).  His theory can account for the motion of galaxies and the expansion of the universe without dark matter or dark energy.  However, apparently it has a little more trouble accounting for our own galaxy - or perhaps he simply had more trouble explaining how it accounts for our own.  I'll try to read something about it at a later date.]


[The next talk is Russell Humphreys' talk "The Creation of Cosmic Magnetic Fields".]


Russ started by congratulating those of us who went to Hartnett's theory and made it through his equations :)

Ridiculously simple idea: God used water to make magnetic fields in the cosmos

Explains magnetic fields of: stars, galaxies, and planets

Hydrogen nuclei have magnetic fields.  They spin more slowly than electrons, making a field about 1/1000 that of an electron.  But in water the hydrogen nuclei point any which way, so normally water is not magnetic.  It can be magnetic, though, if the nuclei line up.

God formed the earth from created water.  2 Peter 3:5: "The earth was formed out of water and by water."  So what would happen if God, when he made the earth, used water and lined up the proton spins?  (Obviously this was followed by binding the water together into other elements.)

If all H-nuclei are aligned it will have a large magnetic field.  

The field would be 7.9 Gauss at the poles (an MRI is about 10,000 Gauss - we are currently in a 0.5 Gauss field).

Created magnetism depends on mass

Original magnetic moment = planet mass * 0.94 A-m^2/kg

Approximation = Original Magnetic moment is approximately equal to the planet's mass in kilograms

Lines of force = magnetic flux (Faraday)

In ordinary water, molecules collide, disorient spins, and start electric current in the water within seconds.  As each spin got out of alignment, it would induce current.

Current at creation = 130 billion amperes.  Flux would be conserved (he gave the reason; I didn't understand it).

Transforming to solid earth conserves magnetic flux and keeps the same mass (so the current mass would be the same as the mass of the original water).

The current runs down - it starts at 130 billion amperes and decays to 6 billion amperes, with a 2,000-year half-life.  Perhaps other things happened during the flood as well.

Half Life =~ (conductivity) * (radius^2)
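For what it's worth, the decay figures can be checked with a quick calculation (my own arithmetic, using only the numbers from the talk).  A constant 2,000-year half-life over 6,000 years gives about 16 billion amperes rather than the quoted 6 billion - which is presumably why he added that other things may have happened during the flood.

```python
I0 = 130e9          # created current in amperes (figure from the talk)
half_life = 2000.0  # years (figure from the talk)
t = 6000.0          # years elapsed on the young-earth timescale

# Standard exponential decay by half-lives
I_now = I0 * 0.5 ** (t / half_life)
print(f"{I_now / 1e9:.2f} billion amperes")  # 16.25 billion amperes

# The created-moment rule of thumb from the notes:
# original magnetic moment =~ planet mass * 0.94 A-m^2/kg
earth_mass = 5.97e24  # kg
moment0 = 0.94 * earth_mass
print(f"{moment0:.2e} A-m^2")  # 5.61e+24 A-m^2
```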

Created flux decayed fast in smaller planets.  Also depends on material. 

We can deduce from decay rates what the conductivities of the cores are.

Two main groups - gas giants (gas) and terrestrial planets (rocky and iron).

Gas giants have low conductivity and terrestrial planets have high conductivity.  Matches what we know from material science.

Humphreys made several predictions in 1984 in the Creation Research Society Quarterly, "The Creation of Planetary Magnetic Fields."  All of his 1984 predictions are NOW FULFILLED.

  1. Uranus has a strong field (confirmed in 1986 by Voyager 2) (creation theory: order of 10^24 A-m^2; measured: 3x10^24 A-m^2)
  2. Neptune has a strong field (Voyager 2) (estimated: order of 10^24; measured: 1.5x10^24)
  3. Mars has a strong crustal magnetization (not a strong field now, but original strong magnetic field) (2001 Mars global surveyor)
  4. Mercury's field decays fast (4% / 33 yrs) (Mariner 10, 1975 and MESSENGER, July 2008: 4.7 to 3.8 - seems to have dropped even faster than predicted (probably another factor); error bars don't overlap - very good evidence of change; later another probe will be going past many times)

Other things that confirm theory but were not explicitly predicted in 1984:

  1. Io - matches theory - strong field
  2. Ganymede - matches theory - strong field
  3. Asteroids Gaspra and Braille strongly magnetized
  4. Meteorites were magnetized (0.05 to 1 Gauss) - appears to have been formed in a field comparable to the earth
  5. Sun agrees even better with theory.  Sun is complicated.  Fluids carry the lines of force with them.  Every 11 years when there are few sunspots (sunspot minima) the magnetic field from the sun is very similar to the shape of the earth's field (dipolar).  However, sun has differential rotation.  This distorts the lines of force.  This makes sunspots.  Sunspot maxima, fields are very warped.  After that, new fields cancel out old fields, making sprays out of it.  Last sunspot minimum was at 80% of created flux.  Energy loss is about 0.15% per cycle.  Difficult to say if it matches theory exactly.

Solar system data fits Humphreys' theory, but only with Biblical conditions (6,000 years and original water)!

Stars and Magnetic Fields

 Other stars:

  • Ordinary stars =~ 10 Gauss
  • Magnetic stars =~ 1,000 Gauss
  • White dwarfs =~ 10,000,000 Gauss
  • Pulsars =~ 1,000,000,000,000 Gauss
  • Magnetars =~ 100,000,000,000,000 Gauss - magnetic energy equal to rest mass energy - the limit of magnetism that we know of
  • Theory fits star data fairly well - something about flux winding. 

Galaxies and Magnetic Fields

  • Andromeda -
    • 1 to 10 microgauss (measured by twisting of radio waves)
    • Lines follow spiral arms

How did God create galactic magnetic fields?

One scenario -

  • God may have created galaxies as extremely dense water (denser than neutron stars - quark matter). 
  • Field would be 200 Trillion Gauss. 
  • This gives 1 galaxy's worth of flux.  
  • As conducting material expands, it is constrained to expand along lines of flux
  • Plasma jets made straight galaxy arms
  • Jets would have to be 50,000 light-years long to match theory
  • magnetic field embedded in jets
  • God formed the stars out of the plasma
  • Rotation winds up the spiral arms and magnetic field lines.
  • In such a model magnetic fields would wipe themselves out in 1 billion years
  • All spiral galaxies appear to have had 0.3 billion yr of winding (by local clocks of the galaxies themselves - think relativity)
  • Near and far spirals look the same
  • July 17 article in Nature - "Strong magnetic fields in normal galaxies at high redshift" - translation: distant galaxies are young (strength 1 to 10 microgauss), same as near galaxies.
  • Humphreys believes fields are primordial, not developed.

Universe may be God's biggest magnet

[This is an interesting, if EXTREMELY SPECULATIVE area, which I think was pioneered by the dude who did Creation's Tiny Mysteries (forgot the name).  Anyway, I thought it was interesting, but it should be considered several orders of magnitude more speculative than the rest of the presentation.]

  • Shell of waters above - outside the universe, empty space outside, galaxies inside
  • Shell would have to be more than 24 billion light years in diameter, more than 20 times the mass of all galaxies (based on recent paper in Journal of Creation talking about Pioneer Anomaly).
  • Current shell's magnetic field should be 10^-19 Gauss; all we know is that it is less than 10^-12 Gauss


Magnetic fields show God's handiwork in the heavens.


Why can't we measure magnetic fields of planets but we can measure magnetic fields of stars?  Answer - the magnetic fields of stars are large enough that they produce spectral effects.

Early equation - is there a theoretical basis for the number?  Answer - it's in the book.  Based on lining up protons.

On waters above - would it be directional or detectable?  What would you look to find?  Answer - I would expect some direction, and there's some intriguing astrophysical data that might point to that field, but we can't be sure.  [mentioned some things about favored axes of radio wave spins and other things, but I wasn't paying enough attention - but ultimately the evidence is small]

Didn't hear question - magnetization of meteorites seems to imply that they were part of a larger body which was about earth-sized.

Plasma cosmology question - didn't know much about plasma cosmology

Hartnett - Whole class of stars called "strange stars" - range from stars made of diamond (pure carbon) right up to quark stars, could probably be added to the graph of dots and it would probably line up.

Hartnett - something about rapidly spinning objects and event horizons

Hartnett -  universe-sized magnetic moment - what about just treating galaxies as single-spin systems, and then add up total amount of galaxies - Audience comment - Harold Aston has done just this thing, but audience member did not know what the conclusion was.

We are not seeing galaxies at creation - we are seeing them after about 300 million years of winding (using their local clocks). 

August 04, 2008

General / LiveBlogging ICC - Pt 1


[Since the conference room does not have WiFi, I'll have to just "pseudo-liveblog" this thing, and then post it when I get back to my room :)  ]

[The first session I'm going to is Kevin Anderson's "A Creationist Perspective of Beneficial Mutations in Bacteria".  All my own comments are in brackets]

Advantages of studying bacteria:

  • Rapid generation time - as quick as 10 minutes
  • One chromosome
  • Can have a high enough mutation rate
  • Easy to manipulate and study - can deal with them easily in a lab - especially E. coli
  • "simple" phenotype selection - can usually get 100% selection
  • Asexual - daughters are clones - uses binary fission

Significant features of bacterial genome:

  • Uses reverse transcriptase to make RNA/DNA hybrid (msDNA)
  • Contains intron and exons
  • Can splice proteins
  • Has multiple layers of regulation
  • Uses antisense RNA for regulation
  • Communicate with other cells

Many of these features were thought to only exist in higher organisms, but all have been found in at least E. coli.

Mutation maintains diversity.  Bacteria have the ability to intentionally mutate their genome.


  • Horizontal gene transfer
  • Mutation and hyper-mutation
  • Insertion Sequence (IS) elements
  • Adaptive Mutations (random?) [no, of course not]

"Beneficial" Mutations

Wild Type + mutant => (a) more fit, (b) less fit, (c) neutral - can be any one of these

Study of E. coli after 20,000 generations (Lenski)

Lenski 1999 - mutant strains possessed 50% greater "relative" fitness compared to the parent (for the given environment).

Schneider et al 2000 and Cooper et al 2001 - the beneficial mutants were the result of genetic disruptions (knockouts) - i.e. they were all degenerative

  • Lost different catabolic systems which were not used for prolonged periods of time (Cooper and Lenski 2000)
  • Loss of some of the flagellar genes (interesting, because the wild type didn't have a flagellum to begin with) (Cooper et al 2003, PNAS)
  • Gene disruption via IS element activity (Schneider et al 2000, Genetics 156:477) 
  • Called this Antagonistic Pleiotropy - a sacrifice of a particular existing system that is not essential in a specific environment, if that sacrifice increases adaptation to the specific environment.  Normally temporary and transient.

IS Element activates promoter to provide expression of a gene.

IS Element might also have an active repressor which disrupts it.

spoT mutants -> decreased ppGpp -> increased tRNA and rRNA -> increases protein synthesis - starts with a disruption or reduction of the cell's control mechanism

Mutants were less fit in other environments, such as different temperatures.  

Conclusions of Lenski's long-term adaptation study:

  • Bacteria readily adapt to consistent environment
  • Bacteria eliminate unused genes and systems
  • Mutations reducing regulatory control can be "beneficial" in a constant environment.
  • Genomic truncation can benefit in a constant environment
  • Adjustment of the environment from the original selection condition can render mutants "less fit".

Stress Survival

Increase the temperature of E. coli - get lots of mutants with gene duplications and deletions - the genes involved in coping with higher temperature are the ones duplicated!

(Riehle et al 2001, PNAS; Riehle et al 2003, Physiol. Genom.; Riehle et al 2005, Physiol. Biochem. Zool.)

Other studies show that when you return the organism to normal temperatures, the duplications are removed [other studies not specified]

Antibiotic/antimicrobial resistance Mechanisms:

  • Horizontal Gene Transfer
  • Spontaneous mutations

Common mechanisms

MarA/B System

MarR - represses the promoter so that marA and marB are not expressed
When MarR is mutated, repression is lost: marA is expressed and activates the promoter region, which ramps up the system and produces both MarA and MarB (the marAB operon).

Metronidazole activation - [could not follow this one quickly]

Erythromycin resistance in E. coli - loss of an 11 bp segment of 23S RNA

Kanamycin resistance in E. coli [slide up too short to complete]

Anderson 2005 CRSQ has a list of phenotype resistance and genotypes, and shows the degenerative nature of genotype systems.

 Bacterial Response to Starvation

Glucose-limited adaptation - two mutant organisms that work together: [these are two different mutants in different cells I think]

  1. Truncate glucose feedback regulation - increases glucose uptake and acetate production
  2. Reduced regulation of Acetyl CoA synthetase, to survive on increase in acetate 

Hypermutations - impaired repair mechanisms - increases chance of "beneficial" mutations under stress conditions.

[PROBLEM - keeps on banging the "beneficial but degenerative" drum]

Adaptive Mutation

Directed or random? - to be talked about more by Georgia on Wednesday

Cultivation of Lac- in medium with lactose as sole catabolite. Frameshift reversal occurs at a much higher than random rate.

Possible mechanisms:

  • Recombination-dependent
  • Amplification-dependent
  • Hypermutation

Nylon Degradation - Nylon-degrading bacteria identified in the 1980s.  Assumed to be the evolution of a new metabolic pathway.  Most commonly studied - Arthrobacter sp. K172.  EI (NylA), EII (NylB), EIII (NylC) - on a plasmid

Carboxyesterase - the original version will not metabolize nylon.  EII has an active site with broadened specificity that can process nylon.  Broadening the specificity of enzymes is a degenerative process.  Prediction - EI and EIII will be found to be degenerative (broadening-specificity) mutations.  Same prediction for the opp protein in his next example on transport proteins.

Several mutations at once, but all degenerative.

Citrate evolution after 30,000 generations.  E. coli in aerobic conditions cannot process citrate.  Lenski has found E. coli that can process citrate.  NOTE - the genetics of this has not been studied - only the phenotype.

Perhaps all Enterobacteria are the same created kind. [Interesting!]

citT (citrate transporter) is expressed anaerobically.  If citT is cloned into a shuttle vector (Martinus et al. 1998 - J. Bacteriol), E. coli can utilize citrate aerobically.  The only thing that needs to be done is activating or derepressing the gene.  The only mutation may be the loss of citT regulation! [Super interesting!]

Antagonistic Pleiotropy - Analogy of "beneficial" mutations to constructing a house - removing non-supporting walls to create a larger dining room.  Lose a room, but gain a function.  Doesn't explain how the house is constructed.

Creation Model - Rigid Flexibility - flexibility that goes only so far. 

  • fully-formed system
  • trade-off systems
  • bacteria need to adapt rapidly
  • Using fully-formed systems in a trade-off approach

Bacteria can often get back to wild type by recovering systems.

Q&A Time

Penicillinase - the only possible example of antibiotic resistance that has a chance of not being degenerative

Reversion - either genotypically - specific mutation reverts back, or phenotypically - a suppressor and then a repressor mutant.

Why is losing specificity a loss of ability, especially if the V(max) of the enzyme is not affected?  Metabolism is managed by having very specific, narrow metabolic pathways; decreases in specificity will cause long-term problems if compounded, because metabolism requires tight specificity.

In debate, we need to force evolutionists to show why the mechanisms they have examples of can contribute to large-scale evolution.  If the mechanism is deregulation, then it can't be the source of novelty.

No current research on limits of baramins but there probably needs to be.

Why is it called "antagonistic pleiotropy" - seems to not be using "pleiotropy" in the strictest sense, but that's what the evolutionists have called it.

Isn't the reversion an increase in specificity?  Couldn't other mutations increase specificity?  There's no example of it occurring.  [Isn't SHM in immunoglobulins an example of increasing specificity?]

[I think it was a good presentation, but I think he beat the degenerative drum WAY too much.  My BSG presentation should indicate some ways in which organisms could theoretically produce increased specificity non-degeneratively (though there is always a tradeoff somewhere).]

[The second session I'm going to is Bob Hill's "The Tectonics of Venus and Creation"]

Venus as a prototype of plate tectonics - there are arguments for and against catastrophic plate tectonics on Venus; this talk is a review of just the "pro" side of the secular literature, in order to get the information into the creationary literature.


  • Venus is a terrestrial planet. 
  • For a long time we could only see the atmosphere, not the ground
  • Venera 7 (1970) first successful probe to land on another planet - stopped working after 23 minutes on the surface
  • Sulfuric acid clouds, surface temperature like an oven, rocks at the surface of the lander.  Perhaps sand
  • Venera 13 (1982) got color photographs - sky is orange
  • Rock: basalt like, much drier
  • Stiffer rheology

From radar, Venus looks a little like the earth, but drained of its oceans

Magellan - radar-mapped 98% of surface

  • Crust thickness ~70 km
  • Compared gravity anomalies with topography

Geologic Structures on Venus

  • Guinevere Plains - volcanic related 
  • Corona - filled up with magma, then drained and dropped down
  • Volcanic Domes
  • Arachnoids - no one knows what these are
  • Craters (about 900 craters on Venus)
    • Small-diameter craters are not common (probably because of atmosphere - anything smaller than a certain size would burn up in the atmosphere)
    • Craters up to 280 km in diameter
    • 84% do not show any signs of modification
    • Craters are randomly (near-perfect distribution!) distributed on surface
    • As a comparison, Corona is only partially matching the random distribution
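For the curious, "randomly distributed" can actually be quantified.  A common approach (my own sketch, not anything from the talk) is a nearest-neighbor test: uniformly random points on a sphere have a characteristic mean nearest-neighbor spacing, while clustered points fall well below it.

```python
import math
import random

def sphere_point(rng, cap=None):
    """Uniform random point on the unit sphere.  With `cap`, z is
    restricted to [cap, 1], mimicking craters clumped near one pole."""
    z = rng.uniform(cap if cap is not None else -1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(theta), r * math.sin(theta), z)

def mean_nearest_neighbor_angle(points):
    """Average angular distance from each point to its nearest
    neighbor (brute force - fine for a few hundred points)."""
    total = 0.0
    for i, p in enumerate(points):
        best = math.pi
        for j, q in enumerate(points):
            if i != j:
                dot = sum(a * b for a, b in zip(p, q))
                best = min(best, math.acos(max(-1.0, min(1.0, dot))))
        total += best
    return total / len(points)

rng = random.Random(0)
uniform = [sphere_point(rng) for _ in range(150)]
clumped = [sphere_point(rng, cap=0.8) for _ in range(150)]
u_angle = mean_nearest_neighbor_angle(uniform)
c_angle = mean_nearest_neighbor_angle(clumped)
print(round(u_angle, 3), round(c_angle, 3))
```

Comparing a real crater catalog's mean nearest-neighbor spacing against simulated uniform catalogs like this is one way the "near-perfect" randomness claim could be checked.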

Possible Implications

  • All parts of the surface are the same age
  • Low crater count implies relatively young surface 

Venusian interior seems to have core, mantle, and crust.

Tectonic Model

  • Lid Tectonics, not plate tectonics - one giant plate covering the whole surface 
  • A lid transfers heat to the surface more slowly than plates do, which increases the interior temperature
  • Increasing temperature lowers viscosity
  • Makes large-scale mantle convection easier
  • Lid underplating
  • Same as initial conditions assumed by Catastrophic Plate Tectonics (Austin et al, 1994)
  • Subduction starts when plate gets 250km thick.
    • Rupture develops somewhere
    • Surface rapidly subducts inside
    • (Secular model says this is 10-20 million years, but is relatively short for planetary assumptions)

Creationary Implications

  • Crater randomness is a problem (since it rotates VERY slowly - months) for:
    • Asteroid swarm model for the flood
    • Exploding planet between Mars and Jupiter [haven't heard of this one!]
  • Consistent with RATE results of rapid radioactive decay with the flood as a possible mechanism to start the flood
    • Ideal for future modeling by TERRA


What are coronae?  They're not found elsewhere.  Possible explanation (based only on photographs): two layers of rock; magma is injected between the two layers, lifting one layer up.  It later drains and drops back down, leaving circular faulting.

Are there any structures on Venus which are analogous to terrestrial structures like subduction zones?  A few have been proposed, all hotly debated.  The ones found don't look that much like subduction zones.  Need more space probes!

Why would Venus have lid tectonics and the earth plate tectonics?  Really unknown.  

Did Earth have impacts at the same time as Venus?  Probably, but we have weather and Venus doesn't.

Is there evidence from a young-earth timescale of a resurfacing event?  Not much except it matches Catastrophic Plate Tectonics.

Faulkner has suggested two cratering events.

The surface temperature of Venus is fairly uniform throughout.  The atmosphere keeps it that way.

Uniform randomness is unique to Venus.  What about Mercury?  Currently looks nonrandom, but not enough mapping done.  If Mercury is nonrandom, it indicates that there may in fact have been a recent (during/post-flood) resurfacing event on Venus but not elsewhere. [Very interesting!]

[Very interesting stuff!!! Certainly at the beginning of understanding, but it looks promising]

July 29, 2008

General / Schedule for BSG Posted


The schedule for the BSG conference has been posted.  As you can see, I'll be talking about what Wolfram's complexity classes can tell us about Irreducible Complexity.  I'm sure that Stephen Wolfram would cringe if he knew that's what his research was being used for.  That makes me both happy and sad - happy because it was somewhat providential that I came to know about A New Kind of Science, and sad because I wish that I could repay Stephen for the good he and his company have done my family, and I don't think that he would take too kindly to the direction that I'm taking his work.

July 27, 2008

Information Theory / Metaprogramming as a model for VDJ Rearrangements


Two years ago I gave a talk to the BSG on how the genome acts as a metaprogramming system during VDJ recombination (see R14).  I had wanted to get someone to do additional testing on this, or at least write it up as a proper paper before presenting it on the blog, but since I haven't had the time or resources for either, I figure I'll just go ahead and post it.

A Quick Introduction to Metaprogramming

All computer programmers are inherently lazy - that's why the only thing we are good at is getting computers to do things for us.  In fact, we're so lazy that if we can write a program for the computer to generate code for us, we will.  Such programs - programs that generate programs - are called metaprograms.  If you're interested in the computer programming aspect of metaprograms, I wrote a three-part tutorial series on it (Part 1 | Part 2 | Part 3).

But the key parts of metaprogramming are these:

  • The programmer specifies solutions in a domain-specific manner - that is, in a format that is specialized to the task at hand
  • The metaprogramming system then rearranges, rewrites, and otherwise remakes the domain-specific program into a program in the standard language which is to be translated
  • The metaprogramming system is responsible for making the parts of the metaprogram interact correctly

Metaprogramming systems are used to abstract away three things that make programming difficult:

  • Redundant specifications
  • Interaction issues between pieces
  • Mismatches between the problem domain and the solution domain

Now, because metaprogramming systems are doing all of this, it necessarily means that metaprogramming systems are narrow in scope.
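To make the idea concrete, here is a minimal sketch of a metaprogram (my own illustration, not from the original talk): a tiny domain-specific spec is expanded into ordinary Python source code, which is then compiled and run like any hand-written program.

```python
# A minimal metaprogramming sketch: a domain-specific spec (just field names
# for a record type) is rewritten into a standard-language program.  The
# generator handles the redundant parts (the constructor boilerplate) so the
# programmer doesn't have to.

# The domain-specific spec - specialized to the task at hand.
spec = {"name": "Point", "fields": ["x", "y"]}

def generate_class(spec):
    """The metaprogram: rewrites the spec into standard Python source."""
    fields = spec["fields"]
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return (
        f"class {spec['name']}:\n"
        f"    def __init__(self, {args}):\n"
        f"{assigns}\n"
    )

source = generate_class(spec)
namespace = {}
exec(source, namespace)      # translate the generated program
Point = namespace["Point"]

p = Point(3, 4)
print(p.x, p.y)              # prints: 3 4
```

Note how narrow the generator's scope is: it can only produce one kind of class, but within that scope it removes all the redundant specification.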

A Quick Introduction to V(D)J Recombination

The cell can generate millions or billions of antibodies out of a relatively small number of genes.  It does this by splitting antibodies into four regions - the variable region (V), the diversity region (D), the joining region (J), and the constant region (C).  Each of these regions has multiple genes associated with it, separated by Recombination Signal Sequences (RSSs) and spacers.  Heavy-chain antibodies use all four types of regions, while light-chain antibodies use just V, J, and C regions.  The constant regions are (surprise!) constant within an antibody class.

So, when B-cells mature, they pick a single gene from each of the V, D, and J regions, assemble them together, and join them to the constant region of the antibody.  The VDJ regions specify the affinity of the gene towards an antigen (which is why it needs so much diversity), while the C region specifies the attachment to the cell (which is why it remains constant).  

However, when VDJ regions are recombined, an interesting thing happens - a series of non-templated (N) and/or palindromic (P) elements are inserted between these regions.   Efforts so far (as of the 2006 paper - I haven't kept up since then) have classified these as "random" insertions.  What can the Creation model offer?
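As a toy illustration of the process just described (the gene counts and sequences below are made up for the example, not real data), a heavy chain is assembled by picking one gene from each bucket, inserting non-templated nucleotides at the junctions, and appending the constant region:

```python
import random

# Toy buckets of gene segments (illustrative counts, not real loci).
V = [f"V{i}" for i in range(50)]   # variable-region gene segments
D = [f"D{i}" for i in range(25)]   # diversity-region gene segments
J = [f"J{i}" for i in range(6)]    # joining-region gene segments
C = "C-mu"                         # constant region, fixed per antibody class

def n_insert(max_len=4):
    """Random non-templated (N) nucleotides added at a junction."""
    return "".join(random.choice("ACGT") for _ in range(random.randint(0, max_len)))

def recombine():
    """Pick one V, D, and J gene, add N elements between them, attach C."""
    return "-".join([random.choice(V), n_insert(),
                     random.choice(D), n_insert(),
                     random.choice(J), C])

# Combinatorial joining alone gives 50 * 25 * 6 combinations;
# junctional (N/P) diversity multiplies that figure enormously.
print(len(V) * len(D) * len(J))   # prints: 7500
print(recombine())                # e.g. V12-AG-D3-TACG-J5-C-mu
```

Even with these small made-up buckets, the junctional insertions push the number of distinct outcomes far beyond the raw gene count, which is the point of the mechanism.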

Note - This paper is a good starting point if you want to know more about V(D)J Recombination in general.

V(D)J Recombination as a Metaprogramming System

So, in V(D)J recombination, you have

  • A series of code rearrangements
  • V, D, and J segments all being pulled from a bucket of similar genes and stitched together
  • The whole thing being attached to a constant region
  • All of these parts contributing to a narrow biological focus
  • The rearrangement system adding in non-templated code between the joined segments

So, in the metaprogramming model, what is the role of the rearranging system?  It is not only to help the parts of the code go into their correct places and add in the non-redundant parts, but also to manage the interactions of the parts, so that they recombine properly.

Therefore, if V(D)J recombination is acting as a metaprogramming system, then the probable reason for the addition of N and P elements is to manage the interaction of the parts.  This lets the V, D, and J components evolve more freely without having to worry about how they will interact with the other parts of the recombination system.  The recombination system worries about how the parts will interact.

So, is there any evidence that this is what is going on?  Actually there is.  In certain mouse antibodies, arginine is required at position 96 in order for the antibody to have proper affinity.  Interestingly, this was always generated during recombination in the cases where it was required, even if it wasn't coded for by either of the joined segments!  It appears as though the V(D)J recombination system knows that the arginine is required for affinity, and therefore is able to generate it when necessary.

Now, there are two possible pieces of counter-evidence which I am aware of:

  • Some antibodies recombine in multiple ways using the same template pieces. 
  • Some recombinations are in fact unproductive

However, the first one could simply be because either (a) the recombination system is directional but non-deterministic (i.e. it biases outcomes that are probably workable, but doesn't limit the outcome to a single possibility), (b) there are additional elements at play, or (c) both (a) and (b).

The second one could be the result of a non-deterministic system - that it only biases good results but doesn't guarantee them.
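The "directional but non-deterministic" idea can be sketched in code (my own illustration, with hypothetical junction sequences and weights): the system heavily biases junctions toward a workable insertion, here one encoding the needed arginine codon, without guaranteeing it - which would account for both the multiple outcomes and the occasional unproductive recombinations.

```python
import random

# Hypothetical junction insertions: two encode arginine (CGT, CGA), two don't.
JUNCTION_OPTIONS = ["CGT", "CGA", "GGG", "TTT"]
# Heavy bias toward the productive options, but the unproductive ones
# remain possible - a bias, not a guarantee.
WEIGHTS = [0.45, 0.45, 0.05, 0.05]

def pick_junction():
    """Weighted (biased) but non-deterministic choice of junction sequence."""
    return random.choices(JUNCTION_OPTIONS, weights=WEIGHTS, k=1)[0]

samples = [pick_junction() for _ in range(10_000)]
productive = sum(s in ("CGT", "CGA") for s in samples) / len(samples)
print(f"productive fraction ~ {productive:.2f}")  # typically about 0.90
```

Under this model, the mouse arginine-96 observation and the unproductive recombinations are two sides of the same weighted coin.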

Obviously, this needs experimental confirmation before it is taken as fact, but I think the evidence currently points in this direction.

Other Metaprogramming Possibilities

If the V(D)J recombination system is actually a metaprogramming system, there are some other possibilities worth looking into:

  • There is currently an "unused" region of nucleotides that is a "spacer" between the RSS and the unrecombined genes.  Could that possibly contain metadata about the frequencies and occasions on which that gene should be used?
  • Non-homologous end-joining uses a similar recombination method to V(D)J recombination.  Perhaps it also contains heuristics about how the affinities of genes work and how they can be recombined.

Enterprise Metaprogramming and Biology

The V(D)J Recombination system is a fairly standard metaprogramming system.  However, in Computer Science we have another type of metaprogramming facility, called an "enterprise" metaprogram.  In these, the specifications are actually specifications for multiple different subsystems.  That is, a single template is run through multiple metaprogramming systems (one for each subsystem), and it can generate a unified, interacting system.

So, in biology, we would be looking for a system that recombined one way in one tissue, and recombined another way in another tissue, in such a way that variations in those genes would cause the two tissue types to change in coordination with each other.  Alternatively, we might be more likely to find, instead of a recombination system, a mechanism of alternative splicing, so that one gene is spliced in different ways, depending on the tissue, but spliced in such a way that changes within the gene through evolution would cause coordinating changes in the protein products in multiple tissues.  
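The "enterprise" idea - one spec feeding multiple generators that stay coordinated - can be sketched as follows (an illustration of mine, not a real system; the entity and field names are made up). A change to the spec changes both generated outputs together, analogous to one gene being spliced differently in two tissues while keeping the tissues in sync:

```python
# One shared spec feeds two different generators (two "subsystems").
spec = {"entity": "Order", "fields": ["id", "total"]}

def generate_sql(spec):
    """Subsystem 1: generate a database table definition from the spec."""
    cols = ", ".join(f"{f} TEXT" for f in spec["fields"])
    return f"CREATE TABLE {spec['entity']} ({cols});"

def generate_python(spec):
    """Subsystem 2: generate a constructor function from the same spec."""
    args = ", ".join(spec["fields"])
    return (f"def make_{spec['entity'].lower()}({args}): "
            f"return dict(zip({spec['fields']!r}, ({args})))")

# Add a field to the spec, and the table and the constructor change
# together - the two subsystems cannot drift out of coordination.
print(generate_sql(spec))
print(generate_python(spec))
```

The biological analogue would be a shared template whose variations propagate coordinated changes into multiple tissue-specific products.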

NOTE - there are several claims here that are unreferenced.  If you are interested in them, mention it in the comments, and I will try to look it up for you.  As I said, it's been two years since I did this research, and it's been pretty much sitting on a shelf since then, so it may take me a few days to find.

July 09, 2008

General / Convergent Evolution - the Platypus


I found this awfully funny.

July 07, 2008

General / ICC and BSG Conference Schedules Online


The schedule for this year's Creation Biology Study Group meeting was just posted.  Register by July 11th to get the early bird discount!

The International Conference on Creationism also has its schedule posted.

Note that both conferences are in the same location on the same week.  ICC is on the first half of the week, and BSG is on the second half.