Topic: Irreducible Complexity and TalkOrigins
subbie Member (Idle past 1285 days) Posts: 3509 Joined: |
You know, the funny thing is that this is perhaps the one cdesign proponentist argument I've ever heard that actually has a germ of a real idea behind it. One way that the ToE can be falsified (perhaps the only way) is to show that an organism, or a feature of an organism, could not have arisen through a series of slight modifications. IC is a cdesign proponentist's idea of how to find such an organism or feature.
The failure of IC as a scientific theory is not that it's trying to disprove the ToE; trying to disprove a scientific theory is the goal of science. No, the problem is that in the description and application of IC, Behe (probably knowingly) ignores well-established and well-understood mechanisms of evolution.

The much more interesting question is this: if someone is going to try to falsify the ToE by establishing the inability of evolution to produce a given organism or feature, how can they ever get around the charge that the attempt is an argument from ignorance? Imagine that a real scientist has devised a scheme for determining that an organism could not have arisen from a series of slight modifications of previous organisms. How could such a theory ever be supported by what Ray might call "positive evidence," rather than being, in concept anyway, simply a position that points out a lack of understanding of how something could have evolved?

At least at first blush, it occurs to me that the ToE might be susceptible to a charge of non-falsifiability if any legitimate attempt to falsify it can be dismissed as nothing more substantial than an argument from ignorance.

Those who would sacrifice an essential liberty for a temporary security will lose both, and deserve neither. -- Benjamin Franklin

We see monsters where science shows us windmills. -- Phat
Wounded King Member Posts: 4149 From: Cincinnati, Ohio, USA Joined: |
The closest I can see to your definition is Hartley, who formulates the information in a message as log(S^N), i.e. N·log S, where S is the number of possible symbols and N the number of symbols in the message; that sounds similar to what Dr. A was calculating. As with Spetner's description, this seems to have more to do with the information capacity of the system than with the information in a particular message/sequence, so again there would be no change between the two sequences, since they are of the same length.
Perhaps what Percy was talking about was Shannon information entropy, i.e. the degree of uncertainty about the next letter in a message, which should certainly be maximal in a truly random sequence. The paper must be behind a paywall that I have institutional access through; sorry about that. There is a more recent paper which I think is available without a subscription and which describes the same measurements of Shannon information/divergence (Chen et al., 2005).

The authors link increasing information in the message to decreasing uncertainty. Perhaps the reason this is the converse of Percy's description in the other thread is that Percy was describing the transmission of the message over a channel, as Shannon did, and consequently you get more information per bit about a random message than about one with built-in redundancy, because the redundancy means you receive some of the same information twice.

Dr. A's calculation of 2 bits of information per letter is true when each letter is equally likely, but when the probabilities are uneven it may take less. Shannon's original paper (p. 18) discusses a case where a sequence consists of 4 different symbols with differing probabilities and calculates that the bits required to encode a message of length N would be (7/4)·N. So if we apply Shannon's probabilities to the sequence length you gave us, positing A as the most frequent symbol, the message could be encoded in only 28 bits on average. In fact, using the sort of frequencies Shannon posits and his subsequent encoding, you would be able to transmit the 16(A) message using only 16 bits, and potentially the 15(A)1(T) message with 17 bits. Using the observed frequencies of bases, we could work out a theoretical average bit requirement for transmitting an arbitrary length of genetic sequence (by "we" I mean someone better at maths than I am).
Similarly, this approach means there are clearly some possible messages which may require considerably more than the average number of bits to convey, i.e. a sequence made up of low-frequency bases. What Chen et al. do, rather than look at base frequencies, is look at the frequencies of groups of bases, or 'words', of varying length, as Shannon described in his section on artificial languages.

TTFN, WK

P.S. Please bear in mind that I am not any sort of mathematician, so my comprehension of these things is informed not by a deep understanding of the maths involved but principally by the discursive text of the papers.
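The arithmetic above can be checked directly. A minimal sketch (my own illustration, not from the papers under discussion): Shannon's per-symbol entropy for his four-symbol example with probabilities 1/2, 1/4, 1/8, 1/8, compared with the equiprobable case.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Shannon's worked example: four symbols with uneven probabilities.
uneven = [1/2, 1/4, 1/8, 1/8]
# Equiprobable case, as in Dr. A's 2-bits-per-letter calculation.
even = [1/4] * 4

h_uneven = entropy_bits(uneven)  # 7/4 bits per symbol
h_even = entropy_bits(even)      # 2 bits per symbol

print(h_uneven)       # 1.75
print(h_uneven * 16)  # 28.0 bits on average for a 16-symbol message
print(h_even * 16)    # 32.0 bits when all symbols are equally likely
```

This reproduces the 28-bit average for a 16-base message versus 32 bits for the equiprobable case.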
NosyNed Member Posts: 9004 From: Canada Joined: |
and potentially the 15(A)1(T) message with 17 bits.

Why only 1 more bit? The position of the T is important, not just its presence anywhere in the string.
Wounded King Member Posts: 4149 From: Cincinnati, Ohio, USA Joined: |
In the Shannon example the encodings for 4 different symbols with frequencies of (1/2,1/4,1/8,1/8) respectively were 0,10,110,111.
If we let A=0 and T=10, then your initial sequence can be encoded as 0000000000000000 and the second as 00000001000000000, which incorporates the position of the T. Conversely, a sequence of 16(G) would take 48 bits to convey with this encoding. If the symbols are all equiprobable, the relevant encodings would be 00, 01, 10, 11, and it would require 32 bits whatever the sequence was.

TTFN, WK

Edited by Wounded King, : No reason given.
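A quick sketch of the prefix code described above, to verify the bit counts. (The assignment of C and G to the two 3-bit codewords is my assumption; only A and T are specified in the post.)

```python
# Prefix code for symbols with frequencies 1/2, 1/4, 1/8, 1/8.
# No codeword is a prefix of another, so the bit stream decodes uniquely
# and the position of each T is preserved.
CODE = {"A": "0", "T": "10", "C": "110", "G": "111"}

def encode(seq):
    return "".join(CODE[base] for base in seq)

print(len(encode("A" * 16)))              # 16 bits for 16(A)
print(len(encode("AAAAAAAT" + "A" * 8)))  # 17 bits for 15(A)1(T)
print(len(encode("G" * 16)))              # 48 bits for 16(G)
print(encode("AAAAAAAT" + "A" * 8))       # 00000001000000000
```

Note that the 17-bit string is exactly the encoding given in the post: the single "1" marks where the T sits.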
Dr Adequate Member (Idle past 315 days) Posts: 16113 Joined: |
I believe you have to take the minimum number of bits required to transmit the stream, not a redundant set of bits. But that would be relative to a particular description language, and we'd be back to Kolmogorov complexity.
Percy Member Posts: 22508 From: New Hampshire Joined: Member Rating: 5.4 |
Hi Nosy,
I think WK or DA have provided correct information (except I can't comment on the papers WK referenced because I haven't looked at them), but I'm wondering if they're actually answering your question. What you asked back in Message 20 was this:
NosyNed in Message 20 writes:

So is there or is there not a specification for the Shannon info in AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA?

This question can be answered by providing a little more context, and I'll get to that in a minute. Beforehand I should note that WK is correct to say that I tend to think of Shannon information in terms of the number of bits required to communicate information. This is the same way you're trying to think about it, and what's more, in a prior message you actually provided example binary encodings for the specific DNA base sequences AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA.

Fundamental to the concept of Shannon information is that both sender and receiver must agree upon the set of messages to be sent. WK and DA are defining the message set to be {C,A,G,T}. The size of the message set is 4, and assuming equal probability for each base type, the number of bits needed to communicate a single message from the set is log2(4), which is 2 bits. So communicating 16 messages (i.e., 16 bases) of 2 bits each requires a total of 32 bits. This means both your example sequences, AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA, require 32 bits of information to communicate from sender to receiver.

But what if those sequences are the only messages in your message set? In other words, what if your message set were {AAAAAAAAAAAAAAAA, AAAAAAATAAAAAAAA}? The size of your message set is 2, and so the number of bits required to send one message from this set is log2(2), which is 1 bit.

Of course, as I think both WK and DA have explained, in the general case there aren't really only two messages in your message set. If any of the 16 bases can be any of {C,A,G,T}, then the size of your message set is actually 4^16 = 2^32, and of course log2(2^32) is 32 bits. But I don't think the general case was what you were asking about.
What I think you were asking is, "What if a gene, say AAAAAAAAAAAAAAAA, were to experience a single point mutation and change to AAAAAAATAAAAAAAA?" To answer this we have to ask whether AAAAAAAAAAAAAAAA was the only allele of this gene. In other words, was the message set of this gene just {AAAAAAAAAAAAAAAA}? If so, then the amount of Shannon information necessary to communicate this information from one generation to the next is log2(1), which is 0 bits. Incredible but true, and it's because with Shannon information it is assumed that sender and receiver know the message set.

But let's assume a larger message set, with a smaller number of bases so I don't have to type as much. Say our message set is {AAAA, AAAC, AAAG}. Its size is 3, so it takes log2(3) ≈ 1.585 bits to communicate one message from a three-message set. (Shannon information only provides the smallest number of bits necessary; it doesn't tell you what the encoding is, which is a whole other problem.) Now say a new allele joins the other three, expanding the message set to {AAAA, AAAC, AAAG, AAAT}. The size of our message set is now 4, so it takes log2(4) = 2 bits to communicate one of these messages to the next generation. The amount of information in this gene for our population's genome has just risen from 1.585 to 2 bits, an increase of 0.415 bits.

--Percy
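The message-set arithmetic above fits in a few lines. A minimal sketch (the allele strings are just the examples from the post):

```python
import math

def bits_per_message(message_set):
    """Bits needed to pick out one message: log2 of the set's size."""
    return math.log2(len(message_set))

# Three known alleles of the gene.
before = {"AAAA", "AAAC", "AAAG"}
# A new allele arises, expanding the message set.
after = before | {"AAAT"}

gain = bits_per_message(after) - bits_per_message(before)
print(round(bits_per_message(before), 3))  # 1.585
print(round(bits_per_message(after), 3))   # 2.0
print(round(gain, 3))                      # 0.415
```

The same function also gives the edge cases from the post: a one-member set costs log2(1) = 0 bits, and the full 4^16 set of 16-base sequences costs 32 bits.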
NosyNed Member Posts: 9004 From: Canada Joined: |
Thanks Percy. I finally think I get it. It is more complex (pardon the intended pun) than I realized.
Now can someone try again to explain how a point mutation always reduces the information? Edited by NosyNed, : asked for more
Dr Adequate Member (Idle past 315 days) Posts: 16113 Joined: |
Now can someone try again to explain how a point mutation always reduces the information?

Well, as your tone suggests, it doesn't. However you define and quantify information, if some point mutation reduced the information, then the opposite point mutation would of course increase it.

Every mutation has its opposite. An insertion can be undone by a deletion, a point mutation by a point mutation, an inversion by an inversion, a frame shift by a frame shift, and so on. And however you choose to measure information, if one mutation reduces the information in the genome, the opposite mutation must increase it. Otherwise you could have two identical DNA sequences containing different amounts of information, which would be nonsense.

Edited by Dr Adequate, : No reason given.
Edited by Dr Adequate, : No reason given.
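The symmetry argument above can be made concrete. A small illustration of my own (not from the post): applying a point mutation and then its opposite restores the original sequence exactly, so any consistent information measure that says one direction loses information must say the reverse gains it.

```python
def point_mutation(seq, pos, base):
    """Return seq with the base at position pos replaced."""
    return seq[:pos] + base + seq[pos + 1:]

original = "AAAAAAAAAAAAAAAA"
mutated = point_mutation(original, 7, "T")   # A -> T at position 7
reverted = point_mutation(mutated, 7, "A")   # the opposite mutation

print(mutated)               # AAAAAAATAAAAAAAA
print(reverted == original)  # True: identical sequences, so any measure
                             # must assign them identical information
```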
molbiogirl Member (Idle past 2672 days) Posts: 1909 From: MO Joined: |
Wow, Dr. A.
I've never heard it put that way before. Bravissimo!
Dr Adequate Member (Idle past 315 days) Posts: 16113 Joined: |
I've never heard it put that way before. That's because I thought up this line of argument myself. * looks smug *
Antioch's Fire Junior Member (Idle past 5994 days) Posts: 12 Joined: |
"We have observed this in relatively few generations."
I scanned the article you cited and checked out some of its references but didn't come up with anything. In fact, it seems to me that it is exactly the opposite. Correct me if I'm wrong, but I believe we have bred the heck out of dogs. These dogs still have the ability to breed together, don't they? It may be awkward between a Chihuahua and a Great Dane, but their genetics still match up okay...
Wounded King Member Posts: 4149 From: Cincinnati, Ohio, USA Joined: |
I'm not sure why you think dogs are particularly relevant counter-evidence to the idea that reproductive isolation can arise over the course of 'relatively few' generations.

No one has suggested that simply being bred, or even selectively bred, for several generations should give rise to a new species. Now, if there had been a concerted effort to breed dogs which were reproductively isolated, then you might have a point, in the same way that all the people who go on about how 100 years of Drosophila mutational experiments have failed to produce a new species would have a point if that were ever what those experiments were intended to do. I'm not sure what the failure of reproductive isolation to become established as an unintentional side effect of human breeding of domestic dogs from grey wolves is supposed to demonstrate. In terms of genetic compatibility, grey wolves and dogs are still the same species.

TTFN, WK
bluescat48 Member (Idle past 4220 days) Posts: 2347 From: United States Joined: |
I'm not sure what the failure for reproductive isolation to be established as an unintentional side effect of human breeding of domestic dogs from grey wolves is supposed to demonstrate. In terms of genetic incompatibility grey wolves and dogs are still the same species.

That might also go for coyotes and dingos, which seem to be able to breed with domestic dogs. That would simply mean they are varieties of the same species, Canis lupus.
Fosdick  Suspended Member (Idle past 5531 days) Posts: 1793 From: Upper Slobovia Joined: |
WK, thanks for directing me to this thread. Good discussion here; it answers some of my questions, although I'm not sure that all of the Shannon principles have been addressed, such as capacity, average mutual information, ascendency, and redundancy.

One case for redundancy would be that a codon can specify an amino acid in any of several configurations (e.g., GCU, GCC, GCA, and GCG all specify the same amino acid, alanine). A SNP mutation in the third nucleotide shouldn't matter to the resulting protein. Is this because the protective redundancy of genetic information has been adapted to buffer against SNPs and moderate change?
--HM
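The alanine example above can be checked against the standard genetic code. A small sketch of my own (RNA alphabet; only the alanine codons are listed, so this is not a full codon table):

```python
# The four alanine codons in the standard genetic code differ only at
# the third position, so any single-nucleotide change there is
# synonymous: the protein is unaffected.
ALANINE_CODONS = {"GCU", "GCC", "GCA", "GCG"}

def third_position_variants(codon):
    """All codons obtained by substituting the third nucleotide."""
    return {codon[:2] + base for base in "UCAG"}

variants = third_position_variants("GCU")
print(variants == ALANINE_CODONS)  # True: every third-position SNP of GCU
                                   # still codes for alanine
```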
TheWay Junior Member (Idle past 5875 days) Posts: 27 From: Oklahoma City, Ok Joined: |
Hi Everyone, Great discussion.
However you define and quantify information, if some point mutation reduced the information, then the opposite point mutation would of course increase it.

What would constitute an opposite point mutation? Also, by this logic could we assume that no information would ultimately be removed or added, and that the genome operates in an equilibrium-type state? Or is there another type of mutation that can add information? And is adding information even relevant to the ToE?

Spetner's main idea, IMO, is that the complexity we see in various organisms requires information that the ToE cannot supply through mutations. Complexity requires complex information, which has not yet been shown to accumulate through natural processes, as the NDT had imagined.
And however you choose to measure information, if one mutation reduces the information in the genome, the opposite mutation must increase it. Otherwise you could have two identical DNA sequences containing different amounts of information, which would be nonsense.

Could you elaborate? Or supply some material? I can't find anything.

"Sometimes one pays most for the things one gets for nothing." --Albert Einstein
Copyright 2001-2023 by EvC Forum, All Rights Reserved