I hope you'll agree that I was right to suggest a thread specifically for your front loading hypothesis. You may not like all the reactions, but you're certainly getting some!
The genetic code is highly optimized for error minimization (Freeland et al., 2000). This optimal genetic code is nearly universal across all taxa.
There's an interesting thing about this. While our standard code has certainly undergone selection for error minimization, and seems to be close to its local peak on the fitness landscape, there are plenty of higher fitness peaks elsewhere which nature could have hit if it had happened across a different random code to start with.
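The error-minimization measure behind claims like Freeland's can be sketched roughly as follows: score a code by the average change in some amino-acid property (e.g. hydrophobicity or polar requirement) across all single-nucleotide substitutions, then compare the standard code's score against random alternatives. The property values and the purely random codon assignments below are invented for illustration; the real studies use measured amino-acid properties and random codes constrained to the standard code's synonym-block structure.

```python
import itertools
import random

BASES = "UCAG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]  # all 64 codons

# Toy amino-acid "property" values (hypothetical, for illustration only;
# real studies use measured polar requirement or hydrophobicity).
AMINO_ACIDS = list(range(20))
PROPERTY = {aa: float(aa) for aa in AMINO_ACIDS}

def random_code():
    """Assign each codon a random amino acid (ignoring real block structure)."""
    return {codon: random.choice(AMINO_ACIDS) for codon in CODONS}

def neighbors(codon):
    """All codons reachable by a single-nucleotide substitution."""
    for i in range(3):
        for b in BASES:
            if b != codon[i]:
                yield codon[:i] + b + codon[i + 1:]

def error_cost(code):
    """Mean squared property change over all single-point mutations."""
    diffs = [(PROPERTY[code[c]] - PROPERTY[code[n]]) ** 2
             for c in CODONS for n in neighbors(c)]
    return sum(diffs) / len(diffs)

random.seed(1)
costs = [error_cost(random_code()) for _ in range(200)]
print("min / mean / max cost over 200 random codes:")
print(min(costs), sum(costs) / len(costs), max(costs))
```

The spread of scores among random codes is what lets one ask where the standard code falls in the distribution, and whether higher (lower-cost) peaks exist elsewhere on the landscape.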
Looking at this from the I.D. perspective, it seems like bad news. Our standard code seems a very unlikely choice for your rationally designing front-loaders if they were aiming at error minimization.
As for your main point, I see no reason why any very early, more error-prone versions of the code should have survived alongside a prokaryote LUCA, and I don't really see the point of your analogy with non-flagellar functional homologies, as those have nothing to do with sub-optimal flagella.
To use front-loading as a working hypothesis, it is assumed that multicellularity, along with the origin of animals and plants, was an objective of the front-loading designers.
Is the suggestion that the front-loaders might be able to predict something like the chance endosymbiotic event that seems to have enabled the evolution of eukaryotes? How could they front-load that? And wouldn't they have to have a very clear idea of the future orbit and physical evolution of the planet itself, not to mention the behaviour of the local star, which could radically affect things?
I can see many other problems as well, but we've got to start somewhere!
Congratulations on setting out the basic front-loading idea very clearly in your O.P.
I found an interesting comment from Ken Miller on this. He suggests that if front-loading were real, we would see the exact opposite of what Genomicus suggests: enormous mutation rates in any front-loaded sequences. They would be inactive and thus unchecked by natural selection. Those portions of the genome would be subject to runaway mutation.
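Miller's point can be put in rough quantitative terms: a sequence with no current function is invisible to selection, so every site mutates freely, and the probability that a gene of L sites survives t generations untouched is (1 − μ)^(L·t). The numbers below (mutation rate, gene length, timescales) are illustrative assumptions, not measured values:

```python
# Probability that an unselected ("front-loaded") sequence of `length` sites
# accumulates zero mutations over `generations` generations, given a
# per-site, per-generation mutation rate `mu`. All numbers are illustrative.
def prob_intact(mu, length, generations):
    return (1.0 - mu) ** (length * generations)

mu = 1e-9       # assumed per-site mutation rate per generation
length = 1000   # a gene-sized mutational target
for generations in (10**6, 10**8, 10**9):
    p = prob_intact(mu, length, generations)
    print(f"{generations:.0e} generations: P(intact) = {p:.3e}")
```

With these assumed numbers, after a billion generations the survival probability is on the order of e^(−1000), astronomically small: the "runaway mutation" Miller describes.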
Genomicus is aware of that problem, which is why he describes the genes in prokaryotes intended for later eukaryotes as having function in the prokaryotes and being conserved.
Yes, but the actual results of that would be indistinguishable from a sequence that was simply highly conserved because it was useful. Even where the system served different functions in eukaryotes and prokaryotes, it would still be indistinguishable from conservation with exaptation. We're just left with another situation where the predictions of front-loading are exactly the same as what we would expect from regular evolution.
Of course. That's why I said on the thread (silly design) where I suggested this one that I'd be interested in seeing predictions that weren't either the same as those of evolutionary theory, or in keeping with it.
I'll take a look at that paper, but there are some things that need emphasizing. That the standard genetic code is not at a global optimum for error minimization really isn't bad news from an ID perspective. This is because there are other functions aside from error minimization that would be optimized, and this is indeed the case in the genetic code, as highlighted by Bollenbach et al., 2007:
Yes, that's the answer I was expecting, but it was optimization for error minimization that you mentioned in the O.P. In fact, the paper suggests that the standard code is frozen some way off even its local fitness peak so far as error correction is concerned. However, the other, better error-minimizing peaks are so numerous that it seems unlikely that none of them would achieve a better all-round balance of functions (or a more rational design) than the one we've got. The designers had a lot of choice.
Full optimization of one function may significantly reduce the optimization of another. Thus, a balance would have to be made between various functions.
The genetic code is, nevertheless, at a local optimum for error minimization.
More likely at a local balanced optimum because of the other functions you mentioned, and frozen there (with the few known exceptions) because of the difficulty of traversing valleys to reach other high points (which the front-loaders could have done).
And the absence of a phylogenetic tree like I describe in my essay, and the fact that this highly optimized genetic code is nearly universal, points to front-loading.
It fits the scenario of an initial random functional code evolving to become optimized, hitting a local peak on the fitness landscape fairly quickly, and then getting frozen. The paper suggests that random codes hit their local optima easily and quickly.
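That "quick and easy" convergence can be illustrated with a toy hill-climber: greedy adaptive walks on a random fitness landscape reach a local peak in only a handful of steps, and different random starting points freeze on different peaks. The landscape below (random fitness over bit-string genotypes) is a deliberately crude stand-in for a landscape of genetic codes, chosen only to show the qualitative behaviour:

```python
import random

# Toy landscape: N-bit genotypes, each assigned an independent random fitness
# (an uncorrelated "house of cards" landscape). Everything here is illustrative.
random.seed(0)
N = 12
fitness = {}  # genotype (as int) -> fitness, filled lazily

def f(g):
    if g not in fitness:
        fitness[g] = random.random()
    return fitness[g]

def greedy_walk(start):
    """Climb via single-bit flips until no neighbour is fitter."""
    g, steps = start, 0
    while True:
        best = max((g ^ (1 << i) for i in range(N)), key=f)
        if f(best) <= f(g):
            return g, steps          # no uphill neighbour: a local optimum
        g, steps = best, steps + 1

walks = [greedy_walk(random.randrange(2 ** N)) for _ in range(100)]
peaks = {g for g, _ in walks}
avg_steps = sum(s for _, s in walks) / len(walks)
print(f"distinct local optima reached: {len(peaks)}")
print(f"average steps to an optimum:  {avg_steps:.1f}")
```

The walks are short, and which peak a walk freezes on depends on where it happened to start: the same picture as an early random code rapidly locking in to whatever local optimum was nearest.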
The prokaryote LUCA could easily have had a thoroughly sub-optimal genetic code. This would, in turn, evolve and be fine-tuned, but many detours and by-ways would be explored, with some less optimal genetic codes branching off, producing a phylogenetic tree of genetic codes. There is no reason why this should not have occurred, under the non-telic model.
The quick and easy arrival at a local balanced optimum would be one very plausible reason, don't you think?
If I'm getting it right, the variations that we do see from the standard code should be rare examples of it traversing slight valleys on the fitness landscape, and hitting other local peaks. They may not be sub-optimal for their circumstances in the balanced sense, even if they are for error correction or other single functions (just like the standard).
Their survival suggests that the seemingly rational option of seeding the planet with organisms carrying lots of different codes could have left a number of them with much more radical differences: different, non-local fitness peaks. But we don't see this.
There's one thing that I will say about our standard code. It will be exactly right for doing what it has done on this particular planet regardless of its objective efficiency, and that's a prediction of both a non-telic view of evolution and a pure front-loading telic view. "Side-loading" I.D. views do not predict this: their claim is that what we see happening at the molecular level is inadequate to produce the variety and complexity of life that we see around us, and that the constant intervention of intelligent design is therefore required.
There's quite a gulf between your view and that of the side-loaders.
The point about the flagellum is that it is predicted by Darwinian evolution that we should find functional pre-cursors, if it did indeed evolve.
What's predicted is that they should have existed, not that they will necessarily still be around. Like semi-aquatic whale ancestors, for example, or apes with brain sizes half-way between ours and the other extant species, or elephant ancestors with noses three feet long.
The same logic holds for the genetic code: we should find sub-optimal pre-cursors.
Not should, but hypothetically could. But if the pathway to the local fitness peak is easy, and getting off it hard, it seems unlikely.
Well, for starters, that's assuming that the endosymbiotic event that gave rise to the eukaryotes wasn't planned. The question "how could they front-load that?" is a valid one, but keep in mind that the human race has very little experience in the field of front-loading biological states. I think this question could be answered if we really thought about it. My personal opinion, of course.
I'll try thinking, but it beats me at the moment. It would seem to be part of the general plan, because you described eukaryotes, plants and animals as the objectives. With the plants, there's another such event required as well.
Well, the way I see it is that these front-loaders would have seeded many planets with these life forms. On some planets, these life forms may have gone extinct. Also, convergent evolution at the molecular level seems to indicate that it wouldn't be that terribly difficult to front-load future biological states - the behavior of our local star notwithstanding.
Don't you think that varying the genetic code around lots of different fitness peaks would help increase the chances of life taking hold? Also, if they wanted eukaryotic cells, they could start off with them in the mix, which would seem much more like rational design than relying on an endosymbiotic event.
As Bluejay predicted, I am effectively getting swamped with responses. Is there anyone who would especially like me to respond to their points? I do owe bluegenes a response, though, so responding to his post is a priority.
No hurry for me Genomicus, and I know I'm busy for the next couple of days anyway. The general point that I made on the other thread and that others are making here might be a good priority, as Taq suggests above. That is, the predictions that would differ from those of evolutionary theory. As you know, I don't think the standard code is any use to you at all in this respect, but I expect you'll disagree, and I'm sure you have other claims as well.
"The extreme optimization of the genetic code therefore strongly supports the idea that the genetic code evolved from a communal state of life prior to the last universal common ancestor."
Butler seems to agree with the (non-telic) hypothesis put forward by Carl Woese. That is that the standard code may have evolved in an early epoch of communal life prior to the LUCA in which HGT played an important role. That's certainly possible. What we can say at the moment is that we know very little about how it evolved. That, I'm sure you understand, is not a reason to stick unparsimonious intelligent designers into the gaps in our knowledge.
An extremely optimized genetic code, like the standard genetic code, wouldn't seem to be terribly advantageous to unicellular organisms, in contrast to less optimized genetic codes. Radical substitutions would be far more likely to be non-deleterious in unicellular organisms than in complex, multi-cellular organisms. In fact, a less optimized code might be more advantageous for unicellular organisms, in one sense: it would accelerate the rate of protein evolution.
That's rather vague. Why do unicellular organisms need to evolve at a faster rate? To what or whose objective? It would only be our speculative front-loaders who might have an objective. If so, why didn't they give prokaryotes "sub-optimal" codes? Why not give them lots of different ones in order to increase their chances of reaching their objective rapidly?
Also, I don't understand these front-loaders. If the objective of the seeding of a planet is plants and animals, then it would make sense to start off with eukaryotes (or some equivalent) somewhere in the original mix, as I've suggested before. It would make sense to engineer something like the mitochondria into some cells, because such a thing would be required to "power" the more complex multi-cellular life forms.
Seeding a planet only with cells that do not have such a thing means relying on a chance endosymbiotic event to produce it. If plants and animals were the objective, why not maximise the probability of getting them?