Topic: How do "novel" features evolve?

zaius137 Member (Idle past 3658 days) Posts: 407
quote: Well how broad? I probably couldn't think of more than a dozen IDist/creationists at most putting forward distinct definitions of information. I would suggest that RAZD's criticisms would cover a substantial proportion of them, as would the similar issues I raised. Lee Spetner, Werner Gitt, William Dembski, Durston/Abel/Trevors (I'm putting them together since they published papers on this together), Doug Axe, John Sanford, Royal Truman and maybe Ann Gauger. There may be some overlap amongst them as well; 'Complex Specified Information' is pervasive, I haven't done an exhaustive comparison.

As you and I both know, an exhaustive review is very difficult; I will go along with you on that.
quote: I don't think it does, you claim 'explaining the persistence of new information is problematical'. In what way does the paper you cited support this? The paper itself concludes that in the artificial system they study new information could arise rapidly, contrary to the predictions of creationists/IDists.

The mathematics is good to a degree, but I disagree with this author's conclusion, as I show in making my point. I do not support the spontaneous introduction of new information. I maintain that very low probabilities do not happen when they exceed any reasonable probability bound.
quote: Because it is one of the metrics which Meyer discussed in 'Signature in the Cell', and which he rejected in favour of 'Complex Specified Information', though I'm not sure if he means quite the same thing by this as Dembski.

Here is where you confuse me a bit. I have attended presentations by Meyer and do not see a conflict in his exposition of Specified Complexity. In fact, Dembski's formulation is actually based on Kolmogorov complexity; Meyer quoted it often. Of course, I could be wrong about Meyer's complete support on this, so for the sake of accuracy please cite the passage you refer to.

quote: Hence the question about why you mention Kolmogorov complexity.

As soon as I figure out how to serve some of these images from maybe a free web server I will start to include the actual formula.
PaulK Member Posts: 17906 Member Rating: 7.2
quote: I maintain that very low probabilities do not happen when they exceed any reasonable probability bound.

Then you fail to understand even the role of specification in Dembski's argument. Low probability events happen all the time. And where in the paper does it require any event below your probability bound?
quote: In fact, Dembski's formulation is actually based on Kolmogorov complexity; Meyer quoted it often.

No, it isn't. In fact the lower the Kolmogorov complexity, the better the specification.
zaius137 Member (Idle past 3658 days) Posts: 407
Hi PaulK,
quote: Then you fail to understand even the role of specification in Dembski's argument. Low probability events happen all the time. And where in the paper does it require any event below your probability bound?

About your first part... some parts, maybe. Show me where specified events of low probability happen all the time. I believe very small probabilities are not comprehended because they are never encountered in our everyday lives. Here is some simple perspective. What follows may be an oversimplified example (I know it is, so don't bother), but the scale is valid.

Take an enormous number of atoms, say 1.0 × 10^415. Place them in a Trader Joe's bag, if that were possible, and mark a single atom that is placed with the others. You have a special set of tweezers with which you may pick out a single atom from anywhere in that bag. This puts the chance of a single correct selection at 1 in 10^415. But suppose you are allowed to make selections from the bag once every second for 10^25 seconds (many millions of times longer than the age of the universe since the big bang). This still leaves you with a last choice from a pool of about 10^415 atoms (the choices were not significantly reduced). This does not seem likely to me, given that there are an estimated 10^80 atoms in the known universe: the next selection must take place from a pool whose exponent is more than four times that of the number of atoms in the entire universe. In a single universe, where is that atom to be found? Maybe it is one of the silicon atoms in the screen in front of you, or maybe a hydrogen atom in the Crab Nebula?

By the way, I am not a big fan of Dembski. However, here are his premises:

Premise 1: LIFE has occurred.
Premise 2: LIFE is specified.
Premise 3: If LIFE is due to chance, then LIFE has small probability.
Premise 4: Specified events of small probability do not occur by chance.
Premise 5: LIFE is not due to regularity.
Premise 6: LIFE is due to regularity, chance, or design.
Conclusion: LIFE is due to design.

From the Wikipedia article:

quote: Dembski's proposed test is based on the Kolmogorov complexity of a pattern T that is exhibited by an event E that has occurred. Mathematically, E is a subset of Ω, the pattern T specifies a set of outcomes in Ω and E is a subset of T. Quoting Dembski [16]
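A minimal sketch of the arithmetic in the bag-of-atoms example above, using zaius137's figures (the pool size and trial count are the post's assumptions, not measured values); log-space arithmetic keeps the astronomically large quantities manageable:

code:
import math

log10_pool = 415    # the bag holds 10^415 atoms, exactly one of them marked
log10_trials = 25   # one draw per second for 10^25 seconds

# Probability that at least one draw finds the marked atom:
# for p = 10^-415 and n = 10^25 independent tries, 1 - (1 - p)^n ~= n * p.
log10_p_success = log10_trials - log10_pool
print(f"P(success over all draws) ~ 10^{log10_p_success}")   # 10^-390

# Pool remaining after discarding every drawn atom: 10^415 - 10^25.
# In log space the reduction is invisible, as the example says.
log10_remaining = log10_pool + math.log10(1 - 10.0 ** (log10_trials - log10_pool))
print(f"remaining pool ~ 10^{log10_remaining:.0f} atoms")    # still 10^415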
quote: No, it isn't. In fact the lower the Kolmogorov complexity, the better the specification.

Entropy only specifies the number of bits required to encode information (Shannon entropy). Hence a lower number (lower entropy) implies more organization: fewer bits are required, thus lower entropy. Entropy 101.
jar Member Posts: 34140 From: Texas!!
Premise one has some support.
Premise two may be true, but so far Dembski has not shown the support for it. Then there is premise three, and there it all begins to fall apart. And premise four is simply false.

Anyone so limited that they can only spell a word one way is severely handicapped!
PaulK Member Posts: 17906 Member Rating: 7.2
quote: Show me where specified events of low probability happen all the time.

That's a nice bit of equivocation there. As I said, low probability events happen all the time - especially when you consider sequences of smaller events (as Dembski does). It's the specification which is vitally important and you cannot ignore it. Now, how about your reason for rejecting the conclusions of the paper? You claim that it requires an event with a probability below your probability bound. Presumably a specified event. What is it? What is your probability bound? And how do you know that it is below your probability bound?
quote: Entropy only specifies the number of bits required to encode information (Shannon entropy).

Kolmogorov complexity isn't the same as entropy, but it is a closely related concept. The more random a sequence, the greater the Kolmogorov complexity.
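A common back-of-the-envelope illustration of this point: compressed length is a rough upper-bound proxy for Kolmogorov complexity, so a random string needs a long description while a patterned one does not. A minimal sketch using Python's standard zlib module (the sequences are made up for illustration):

code:
import random
import zlib

random.seed(0)

patterned = "ACGT" * 250  # highly regular: describable as "repeat ACGT 250 times"
random_seq = "".join(random.choice("ACGT") for _ in range(1000))  # no short description

for name, seq in (("patterned", patterned), ("random", random_seq)):
    # Compressed size only upper-bounds Kolmogorov complexity (which is
    # uncomputable); the contrast between the two sequences is the point.
    size = len(zlib.compress(seq.encode()))
    print(f"{name}: {len(seq)} chars -> {size} bytes compressed")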
Panda Member (Idle past 3961 days) Posts: 2688 From: UK
And to add to jar's critique:
zaius137 writes:

quote: Premise 5: LIFE is not due to regularity.
Premise 6: LIFE is due to regularity, chance, or design.

Premises 5 and 6 are contradictory.

zaius137 writes:

quote: Premise 6: LIFE is due to regularity, chance, or design.
Conclusion: LIFE is due to design.

The conclusion is invalidated just by premise 6. At best it would be "Conclusion: some life is due to design."

Tradition and heritage are all dead people's baggage. Stop carrying it!
caffeine Member (Idle past 1273 days) Posts: 1800 From: Prague, Czech Republic
quote: Premises 5 and 6 are contradictory.

The premises are not really contradictory - you should be reading the 'or' as an 'either-or' proposition. It's not meant in the sense 'sometimes it's this, sometimes it's that'. It's like the following, valid argument:

A: x is either an even or an odd number.
B: x is not an even number.
Conclusion: x is an odd number.

Premise 6 is stating that the only three possibilities are regularity, chance and design. Premise 5 rules out regularity (I'm unsure why we're supposed to take this as given, or even what 'regularity' is supposed to mean in this context). Premises 3 and 4 are supposed to rule out chance (though Premise 3 is uncertain, and 4 is clearly false; unless I don't understand what 'specified' is supposed to mean). Thus, the only possibility remaining is design. The argument's perfectly valid, it's just based on false premises.

----

Expanding on the falseness, can anyone clarify what a 'specified event' is supposed to be in this argument? I'm not sure I understand the descriptions put forward. Events of staggeringly small probability happen with great regularity, simply because the sum of many probable events put together is a highly improbable event. The idea of specificity I get from the Wikipedia page is a bit vague, but it seems to be that there are not many other, also highly improbable events that could have happened instead. If we look at the final configuration of the Premier League, for example, it's staggeringly unlikely - but every other possible configuration would also be staggeringly unlikely, so it's not specified.

But, if I've understood this correctly, then life is surely not specified. The number of different configurations in which life can arrange itself successfully can be seen by looking at the world, and the staggering number of ways it already does arrange itself. Different animals use a huge variety of different proteins to do similar or identical tasks, for example. What's specified about that?
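The point that improbable events happen constantly can be made concrete with coin flips (an illustrative example, not caffeine's): every particular sequence of 100 fair flips has probability 2^-100, yet one such staggeringly improbable sequence occurs every time you flip 100 coins.

code:
import random

random.seed(42)  # any seed will do; the point holds for every outcome

flips = "".join(random.choice("HT") for _ in range(100))
p = 0.5 ** 100  # probability of this exact sequence, had it been specified in advance

print(flips)
print(f"P(this exact sequence) = {p:.2e}")  # ~7.89e-31, yet it just happened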
RAZD Member (Idle past 1653 days) Posts: 20714 From: the other end of the sidewalk
Hi zaius137, and welcome to the fray.
quote: I believe that a good measure of innate information can be represented by the inference of entropy (Shannon Entropy) ...

Are you measuring information or entropy? Does a change in entropy mean a change in information or vice versa? If there is no direct link one to the other then talking about a metric for entropy is not talking about a metric for information ... in which case it is irrelevant to the issue of information, yes? From your link introduction:
quote:

This appears to be Shannon information, yes? From your link, in the abstract:
quote:

In other words, creating information is easy, yes?
quote: ... and the problem of the definition in a biological system of that entropy can be overcome to some degree by the principle of Maximum entropy.

The principle of maximum entropy works when little is known about the information in a system. So with "little is known about the information" in the original system or in the altered system, then you have not shown any change in information, one way or the other, by using entropy, yes?
quote: ... in a biological system of that entropy can be overcome to some degree by the principle of Maximum entropy.

Are we talking about entropy as used in physics, or are we talking about a different use of the word, and if so what is the definition for it? The (classic physics) entropy in a biological organism can obviously increase and decrease as the organism grows or dies. Does this mean that information also increases and decreases? From your link, again in the abstract:
quote:

Because biological organisms are not closed systems, the Second Law of Thermodynamics just does not - cannot - apply. Plants get increased energy from the sun, herbivores from eating plants and carnivores from eating herbivores. Curiously, a growing child violates the Second Law of Thermodynamics only if you ignore this simple fact that pertains to all living organisms.
quote: Note the reference to the Punctuated Equilibrium in the abstract, which to me leaves a dubious source of that information ...

Why is that? It is fairly obvious that new beneficial, neutral and deleterious mutations occur in localized populations rather than in the whole species, especially in small isolates that involve the founder effect. Curiously, it is even more obvious that improved variations of a species can spread into, take over and dominate the main population when introduced back into the main ecology. See Differential Dispersal Of Introduced Species - An Aspect of Punctuated Equilibrium for some historical examples of just how fast new species can take over the ecologies. From your link, again in the abstract:
quote:

This could also just be an artifact of the selection process used in the simulation, condensing the time-line artificially as compared to the effects of selection in the biological systems.
quote: ... and the accompanying low probability of very large changes in an organism (new novel features). I chose this paper that favors the low probability opinion versus the opposing view of Spetner. When calculating the probability of forming DNA segments from a string of deoxynucleotides, it becomes apparent that explaining the persistence of new information is problematical (pointed out by most Creationists).

As pointed out by Wounded King in Message 223, calculations of probability as normally done by creationists are based more on misrepresenting how biological systems work than on math and logic. See the old improbable probability problem for a discussion on probability calculations of this kind.
quote: Now tell me why you think that the information in a genome is not well defined by creationists like Meyer when it is clearly in his arguments?

You haven't quoted Meyer nor his definition/s, nor is he listed in the references for the paper you cited, making this rather a non-sequitur question. What I see is that when a metric is used the results can be an increase, a decrease or no net change, and I also do not see that this necessarily relates to the evolution of biological systems: biological systems have been observed to form beneficial, neutral and deleterious mutations, and selection has been observed to the extent that the formation of new traits is not inhibited in any real way. The fact that the evolution of new traits is not inhibited is to me proof that information is either easy to increase or irrelevant.

Enjoy.

by our ability to understand ... to learn ... to think ... to live ... to laugh ... to share.
Rebel American Zen Deist
Join the effort to solve medical problems, AIDS/HIV, Cancer and more with Team EvC! (click)
RAZD Member (Idle past 1653 days) Posts: 20714 From: the other end of the sidewalk
Hi again zaius137,
quote: Show me where specified events of low probability happen all the time. I believe very small probabilities are not comprehended because they are never encountered in our everyday lives.

Any specific mutation, from a beforehand prediction standpoint, has extremely low probability. That mutations occur has an extremely high probability. This is like comparing the probability of a specific ticket winning the lottery versus the probability that the lottery will be won by one of the tickets issued. This is the main problem of looking at probability after the fact -- it's a post hoc ergo propter hoc fallacy. Mutations occur, some are beneficial, some are neutral and some are deleterious, and then they go through the selection process. That a use is found for a mutation is not a reason for it to have any specific probability, as there are potentially thousands of possibilities for use of any one happenstance mutation. The proper question is what is the probability that any specific mutation will have a use, and that is undeterminable.
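A minimal sketch of the two probabilities contrasted here, with made-up lottery figures (the odds and ticket counts below are illustrative assumptions, not numbers from the post):

code:
# Illustrative figures only: 10 million possible numbers, 8 million tickets sold
odds = 10_000_000
tickets_sold = 8_000_000

p_named_ticket = 1 / odds                           # one specific ticket wins
p_lottery_won = 1 - (1 - 1 / odds) ** tickets_sold  # at least one ticket wins

print(f"P(your specific ticket wins) = {p_named_ticket:.7f}")  # 0.0000001
print(f"P(the lottery is won at all) = {p_lottery_won:.2f}")   # ~0.55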
quote: By the way, I am not a big fan of Dembski. However, here are his premises:
Premise 1: LIFE has occurred.
Premise 2: LIFE is specified.
Premise 3: If LIFE is due to chance, then LIFE has small probability.
Premise 4: Specified events of small probability do not occur by chance.
Premise 5: LIFE is not due to regularity.
Premise 6: LIFE is due to regularity, chance, or design.
Conclusion: LIFE is due to design.

Premise 1 is supported by objective empirical evidence; the rest are asserted assumptions that have not been validated. Premise 4 is invalidated, imho, so the conclusion fails. Premise 3 is debatable, if not just wrong: see Panspermic Pre-Biotic Molecules - Life's Building Blocks (Part I) and Self-Replicating Molecules - Life's Building Blocks (Part II) for objective empirical evidence that indicates to me that the probability of life forming is high.
quote: Entropy only specifies the number of bits required to encode information (Shannon entropy). Hence a lower number (lower entropy) implies more organization: fewer bits are required, thus lower entropy. Entropy 101.

Which is irrelevant, as biological systems of reproduction and growth do not need to be efficiently organized, they just need to work, and curiously, this is what we see in the biological world.

Unfortunately, this thread is NOT about information or entropy, but about how novel features evolve. Anything not directly related to how novel features evolve is off-topic. You are free to start a topic on these issues if you want. Go to Proposed New Topics to post new topics. This forum tries to limit posts to the specific topic of the opening post, and also cuts threads off at ~300 posts, when summary posts are then submitted to close the topic (hence the main impetus to keep on topic).

Enjoy.

by our ability to understand ... to learn ... to think ... to live ... to laugh ... to share.
Rebel American Zen Deist
Join the effort to solve medical problems, AIDS/HIV, Cancer and more with Team EvC! (click)
Panda Member (Idle past 3961 days) Posts: 2688 From: UK
caffeine writes:

quote: A: x is either an even or an odd number.
B: x is not an even number.
Conclusion: x is an odd number.
Premise 6 is stating that the only three possibilities are regularity, chance and design. Premise 5 rules out regularity

Then Premise 6 contradicts ("rules out") Premise 5? To say (e.g.) that "This number is either odd or even, but it is not even." seems contradictory to me. Surely it should just be: "This number is odd." or "LIFE is due to chance or design".

caffeine writes:

quote: ....it's just based on false premises.

This seems to be common ground between us.

caffeine writes:

quote: But, if I've understood this correctly, then life is surely not specified. The number of different configurations in which life can arrange itself successfully can be seen by looking at the world, and the staggering number of ways it already does arrange itself. Different animals use a huge variety of different proteins to do similar or identical tasks, for example. What's specified about that?

I don't think that 'specificity' has anything to do with 'specifics'. The word was taken away from its linguistic roots and used to lend support to an argument simply because it 'sounded good'. From what I remember, specificity is the (innate?) ability to know something when you see it. How that relates to 'specified events' is not known to me.

Tradition and heritage are all dead people's baggage. Stop carrying it!
zaius137 Member (Idle past 3658 days) Posts: 407
Hi RAZD,
I like your detailed replies and I see a challenge in addressing your thoughtful points. There is a lot to catch up on here, and I hope the other participants understand why I cannot get right to the arguments they present, although they are just as challenging. I would like to start with one of the posts you presented and cited, namely the six points you made.
quote: 1. The calculation is a mathematical model of reality and not the reality itself. When a model fails to replicate reality it is not reality that is at fault but the mathematical model. When a hurricane prediction program crashes because it can't model the first hurricane in the South Atlantic on record, the meteorologists don't go out to the hurricane and say "you can't be here, our model does not allow you to be here" ... they fix the model by looking for and taking out the failed assumptions (ie - that all hurricanes are north of the equator). When a model fails to model reality it is a good indication that some aspect of reality has been missed in the model.

The very mathematical models that scientists use to uphold evolution are built on the very same principles that creationists use. If you claim failure of mathematical models in a general sense, you remove the argument from science.
quote: 2. The calculation fails to account for the known pre-existing molecules used in the formation of life that are found throughout the universe, and this failure means the calculation with creation-all-at-once including these molecules is unnecessarily extended downward, starting with too much simplicity.

Not all the necessary molecules are present; for instance, cytosine is not found in meteorites. The sugar that bonds to the four bases to form the ribonucleotides is very short-lived in nature. Many problems exist with the RNA-world view and the SRPs; I hope we can cover them fully. Science has never demonstrated empirically that anything but an all-at-once approach is possible.
quote: 3. The calculation fails to account for the fact that the first life need not be as complicated as a modern cell, that the minimum configuration is much simpler as shown by the LUCA studies. This failure means that the calculation is unnecessarily extended upward, ending with too much complexity.

To date the idea of a LUCA has proven an intractable problem in biology. I have just read a paper on statistical verification of the LUCA by Theobald, based on the Markovian substitution model. Theobald's claim that a LUCA is statistically proven is criticized amongst scientists (not many of whom are creationists). I have my own unanswered questions about that paper.
quote: 4. The calculation fails to account for combinations of groups of such molecules in smorgasbord fashion instead of in assembly line fashion all at once all from nothing. And further, that all those "failed" experiments are still available to be cut and reassembled into new experiments without having to go through all the preliminaries. It fails to account for the actual combination process as used in natural assembly of large organic compounds. Amino acids are assembled into larger molecules like peptides and not from extending amino acids by adding atoms. This failure means that all the ways to reach the final necessary combination are not included and thus it unnecessarily excludes possible combination methods.

Can a failed experiment be available in a new experiment? I think this statement speculates about the stability of the product. I cannot deny that if there is an intention to preserve some organic molecules from degradation, then yes, the experiment can continue. However, natural chemistry has shown no intent to do so. In fact, equilibrium rules the day in natural chemistry. As far as the spontaneous assembly of amino acids is concerned, Miller's experiments demonstrate a chirality problem.
quote: 5. The probability of winning a lottery by any one ticket is extremely low, but the probability that the lottery will be won is extremely high. How do you reconcile these two very disparate probabilities? By knowing that any one of the millions of tickets is a valid winner if picked.

Well, in more modest ranges of probability I would agree with you, say 1 in 10^6 or 1 in 10^15. However, probabilities in the range of 1 in 10^1000 are not possible given the acceptance that our universe is limited (I refer to a universal bound of possibilities). Acceptance of limits, say in calculus, is necessary in producing an outcome, even in physics (Planck length, Planck time, etc.). I suggest that Dembski's limit would be acceptable in biology.
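For reference, the limit invoked here is Dembski's universal probability bound of 1 in 10^150, which he derives by multiplying an estimate of the number of elementary particles in the observable universe by the number of Planck-time intervals per second and a generous upper bound on the universe's lifetime in seconds; a sketch of that arithmetic:

code:
# The three factors in Dembski's published derivation of the 10^-150 bound
particles = 10 ** 80               # elementary particles in the observable universe
transitions_per_second = 10 ** 45  # Planck-time intervals per second
seconds = 10 ** 25                 # generous upper bound on the universe's lifetime

max_specified_events = particles * transitions_per_second * seconds
exponent = len(str(max_specified_events)) - 1
print(f"maximum possible specified events: 10^{exponent}")  # 10^150
# The bound is the reciprocal: on Dembski's argument, a specified event with
# probability below 1 in 10^150 should never be credited to chance.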
quote: 6. Finally, the improbability of a thing occurring is not proof of impossibility of it occurring.

I can refer you to my objection in point 5, but I think you might benefit by some perspective on the matter. Please comment on my Message 228. Please excuse my lack of forum knowledge; I am still a Newbe.
Panda Member (Idle past 3961 days) Posts: 2688 From: UK
zaius137 writes:

quote: There is a lot to catch up on here and I hope the other participants understand why I cannot get right to the arguments they present although they are just as challenging.

1 good post is worth 20 crap ones. Remember, there is no rush. Even if it takes weeks, there is nothing (other than your own preference) preventing you from answering every post. Forums can run at a slower pace than (e.g.) chat rooms.

Tradition and heritage are all dead people's baggage. Stop carrying it!
RAZD Member (Idle past 1653 days) Posts: 20714 From: the other end of the sidewalk
Hi again zaius137,
quote: I would like to start with one of the posts you presented and cited, namely the six points you made.

And the place to make that reply is to Message 1 on the old improbable probability problem thread. Again, one of the aspects of this forum that makes it a great place to debate these issues is that keeping on topic is very strongly encouraged, to the point that you can get reprimanded for going off topic if it is a persistent behavior. The topic of this thread is How do "novel" features evolve?. I have copied your post to the old improbable probability problem with my reply in Message 45 ...

quote: Please comment on my Message 228. Please excuse my lack of forum knowledge; I am still a Newbe.

That's okay, I'm sure you can get the hang of it fairly soon, as you already have a good handle on how the reply and quote functions work, and it is not a difficult learning curve that every 'Newbe' goes through.

Enjoy.

by our ability to understand ... to learn ... to think ... to live ... to laugh ... to share.
Rebel American Zen Deist
Join the effort to solve medical problems, AIDS/HIV, Cancer and more with Team EvC! (click)
zaius137 Member (Idle past 3658 days) Posts: 407
RAZD my friend
quote: Are you measuring information or entropy? Does a change in entropy mean a change in information or vice versa? If there is no direct link one to the other then talking about a metric for entropy is not talking about a metric for information ... in which case it is irrelevant to the issue of information, yes?

Not at all. Here is the Wiki demonstrating the relationship between entropy and information:

quote: In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits.

Entropy (information theory) - Wikipedia
quote: In other words, creating information is easy, yes?

Can you walk me through how exactly you can draw a conclusion about the ease of creating information from that citation?
So with "little is known about the information" in the original system or in the altered system then you have not shown any change in information, one way or the other, by using entropy, yes? I think you are missing an important point here. By quantifying the entropy of DNA (by the principle of maximum entropy), you are exposing it to statistical methodology without the necessity of particular knowledge of its function. Correct me if I am wrong but Delta entropy does not enter into the evaluation. This paper might be able to clarify what is going on here (it helped me) Here we focus on estimating entropy from small-sample data, with applicationsin genomics and gene network inference in mind (Margolin et al., 2006; Meyer et al.,2007). http://arxiv.org/pdf/0811.3579.pdf
quote: Are we talking about entropy as used in physics, or are we talking about a different use of the word, and if so what is the definition for it? The (classic physics) entropy in a biological organism can obviously increase and decrease as the organism grows or dies. Does this mean that information also increases and decreases?

From that question, I believe you might be on the wrong track. I mean specifically entropy as defined by Shannon in information theory; for a binary source:

H = -P(0) log2 P(0) - P(1) log2 P(1)
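A minimal sketch of that formula, generalised from the binary case to an arbitrary symbol alphabet and applied to short made-up strings (the sequences are hypothetical, chosen only to show the two extremes):

code:
import math
from collections import Counter

def shannon_entropy(sequence: str) -> float:
    """Shannon entropy in bits per symbol: H = sum(p * log2(1/p)) over symbol frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("AAAAAAAAAA"))        # 0.0 -> fully predictable, no uncertainty
print(shannon_entropy("ACGTACGTACGTACGT"))  # 2.0 -> the maximum for a 4-letter alphabet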
quote: ... The transition is rapid, demonstrating that information gain can occur by punctuated equilibrium.

OK

quote: This could also just be an artifact of the selection process used in the simulation, condensing the time-line artificially as compared to the effects of selection in the biological systems. ... The fact that the evolution of new traits is not inhibited is to me proof that information is either easy to increase or irrelevant.
A question: have we observed new traits evolve? Have new unique gene sequences ever been observed to arise spontaneously in genomes? Are changes in an organism only because of adaptation (microevolution)? What is the molecular mechanism for evolution?

I would like to review two claims about new innovative functions that supposedly evolved, one in E. coli and the other in a strain of Flavobacterium (nylon-eating bacteria). I maintain that these in no way indicate anything but microevolution.

E. coli

In the case of E. coli adapting to metabolize citrate, that function has been present all along in E. coli and is not innovative. Under certain conditions (low oxygen) E. coli can utilize citrate. Lenski's 20-year experiment with E. coli is only demonstrating an adaptation by E. coli.

quote: Previous research has shown that wild-type E. coli can utilize citrate when oxygen levels are low. Under these conditions, citrate is taken into the cell and used in a fermentation pathway. The gene (citT) in E. coli is believed to encode a citrate transporter (a protein which transports citrate into the cell). Klaas Pos, et al., The Escherichia coli Citrate Carrier CitT: A Member of a Novel Eubacterial Transporter Family Related to the 2-oxoglutarate/malate Translocator from Spinach Chloroplasts, Journal of Bacteriology 180 no. 16 (1998): 4160-4165. Thus, wild-type E. coli already have the ability to transport citrate into the cell and utilize it; so much for the idea of a major innovation and evolution.
A Poke in the Eye? | Answers in Genesis
Nylonase

Nylon-eating bacteria are just another case of programmed adaptation. There are six possible open reading frames in DNA that can code for proteins. The proposed mechanism was a single point mutation and a supposed gene duplication event that triggered an open reading frame shift. The entire process was restricted to the very mechanisms that allow adaptation in bacteria for different food sources.

quote: This discovery led geneticist Susumu Ohno to speculate that the gene for one of the enzymes, 6-aminohexanoic acid hydrolase, had come about from the combination of a gene duplication event with a frame shift mutation. Ohno suggested that many unique new genes have evolved this way.

Nylon-eating bacteria - Wikipedia

quote: Thus, contrary to Miller, the nylonase enzyme seems pre-designed in the sense that the original DNA sequence was preadapted for frame-shift mutations to occur without destroying the protein-coding potential of the original gene. Indeed, this protein sequence seems designed to be specifically adaptable to novel functions.

Why Scientists Should NOT Dismiss Intelligent Design - Uncommon Descent

In past research, there is supporting evidence for the suggestion that these open reading frame segments and existing gene duplication events are the main mechanisms for new functionality.

quote: The mechanism of gene duplication as the means to acquire new genes with previously nonexistent functions is inherently self-limiting in that the function possessed by a new protein, in reality, is but a mere variation of the preexisted theme.

Birth of a unique enzyme from an alternative reading frame of the preexisted, internally repetitious coding sequence. - PMC
http://www.ncbi.nlm.nih.gov/...345072/pdf/pnas00609-0153.pdf
Dr. Jim Shapiro, Chicago, Natural Genetic Engineering -- the Toolbox for Evolution: Prokaryotes

My question to the evolutionist: if no new spontaneous segments of genes arise in genomes, how are species gaining unique sequences of DNA? By unique I am not referring to genome duplications.

quote: Evolution's mutation mechanism does not explain how growth of a genome is possible. How can point mutations create new chromosomes or lengthen a strand of DNA? It is interesting to note that, in all of the selective breeding in dogs, there has been no change to the basic dog genome. All breeds of dog can still mate with one another. People have not seen any increase in dog's DNA, but have simply selected different genes from the existing dog gene pool to create the different breeds.

Question 1: How Does Evolution Add Information? - How Evolution Works | HowStuffWorks
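The frame-shift mechanism Ohno proposed is easy to illustrate (the sequence below is hypothetical, not the actual nylB gene): inserting a single base shifts every downstream codon boundary, so the same stretch of DNA is read as an entirely different series of codons.

code:
def codons(seq: str, offset: int = 0) -> list[str]:
    """Split a DNA string into codons, starting at the given reading-frame offset."""
    return [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]

original = "ATGGCACGATTAGCC"  # hypothetical coding sequence, for illustration only
shifted = "T" + original     # a single-base insertion at the 5' end

print(codons(original))  # ['ATG', 'GCA', 'CGA', 'TTA', 'GCC']
print(codons(shifted))   # ['TAT', 'GGC', 'ACG', 'ATT', 'AGC'] -- every codon differs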
Dr Adequate Member Posts: 16113
quote: Not at all. Here is the Wiki demonstrating the relationship between entropy and information. [...] Can you walk me through how exactly you can draw a conclusion about the ease of creating information from that citation?

Well, if your choice is Shannon entropy, then creating information is easy. Any insertion would do it, since the insertion increases the number of bits in the genome, and since the content of these bits is not completely predictable from their context. So, for example, consider a "toy genome" (real genomes are of course longer) of the form GTACT_ACTCTA, where the _ represents a base that has just been added by insertion, the identity of which I am not going to tell you. Can you deduce with complete certainty what base is represented by the _, based only on the knowledge that it is preceded by GTACT and followed by ACTCTA? Of course not. Therefore, it makes a non-zero contribution to the total Shannon entropy of the genome. See, it is easy.
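A sketch of the toy-genome argument in code, using a simple frequency-based estimate (the choice of inserted base is arbitrary, as in the post): total Shannon information, entropy per symbol times sequence length, goes up when a base that is not fully predictable from the rest is inserted.

code:
import math
from collections import Counter

def total_entropy_bits(seq: str) -> float:
    """Estimate total Shannon information: per-symbol entropy times sequence length."""
    counts = Counter(seq)
    n = len(seq)
    per_symbol = sum((c / n) * math.log2(n / c) for c in counts.values())
    return per_symbol * n

toy = "GTACTACTCTA"           # the toy genome without the insertion
with_insert = "GTACTGACTCTA"  # same genome with a 'G' inserted (identity arbitrary)

print(f"{total_entropy_bits(toy):.1f} bits")          # ~20.5 bits
print(f"{total_entropy_bits(with_insert):.1f} bits")  # ~23.5 bits: information increased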
quote: Are changes in an organism only because of adaptation (microevolution)?

Sentences like this usually betoken a fundamental misunderstanding of either the theory of evolution, the meaning of the word "adaptation", or both --- but you do not make the error explicit, you just take it as read. Perhaps you could explain why in the name of all that is asinine you wrote "adaptation (microevolution)", and then when we've cleared that up we could go on to discuss something else, such as E. coli and nylonase. But this hardly seems worthwhile if you're still confused about really basic concepts.