Author Topic:   How do "novel" features evolve?
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 222 of 314 (661570)
05-08-2012 3:14 AM
Reply to: Message 7 by RAZD
03-09-2012 9:03 PM


Re: creating "information" is either easy or irrelevant
Hi RAZD
I apologize for the late entry on this, but I would like to disagree with one of the assertions you made early in this post.
Your quote:
Now the problem with the creationist/IDologist claim about information is that they don't define what the concept means or, even more importantly, how it can be measured. There is, however, some evidence that we can look at which shows that the concept "nature cannot create the information" is either falsified or irrelevant.
I believe that a good measure of innate information can be represented by the inference of entropy (Shannon entropy), and the problem of defining that entropy in a biological system can be overcome to some degree by the principle of maximum entropy. The principle of maximum entropy works when little is known about the information in a system. Here is a paper describing a gain in information. Note the reference to punctuated equilibrium in the abstract, which to me leaves a dubious source of that information, given the accompanying low probability of very large changes in an organism (new novel features). I chose this paper because it favors the low-probability view versus the opposing view of Spetner. When calculating the probability of forming DNA segments from a string of deoxynucleotides, it becomes apparent that explaining the persistence of new information is problematical (a point made by most creationists).
The paper on information in DNA: http://www-lmmb.ncifcrf.gov/~toms/paper/ev/ev.pdf
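For concreteness, here is a small Python sketch (my own illustration, not from the cited paper) of the kind of measure I mean: it estimates per-base Shannon entropy from observed nucleotide frequencies and compares it with the maximum-entropy value of log2(4) = 2 bits per base; the sequence is made up for the example.

# Minimal sketch (not from the cited paper): per-base Shannon entropy of a DNA string.
from math import log2
from collections import Counter

def shannon_entropy(seq):
    # Entropy in bits per symbol, estimated from observed symbol frequencies.
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * log2(c / n) for c in counts.values())

seq = "ATGCGATACCGTTAGC"        # made-up example sequence
print(shannon_entropy(seq))     # empirical estimate, at most 2 bits per base
print(log2(4))                  # maximum-entropy (uniform) value: 2.0 bits per base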
Now tell me why you think that the information in a genome is not well defined by creationists like Myers when it clearly is in his arguments.
Edited by zaius137, : No reason given.

This message is a reply to:
 Message 7 by RAZD, posted 03-09-2012 9:03 PM RAZD has replied

Replies to this message:
 Message 223 by Wounded King, posted 05-08-2012 5:01 AM zaius137 has replied
 Message 233 by RAZD, posted 05-09-2012 8:24 AM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 224 of 314 (661601)
05-08-2012 1:31 PM
Reply to: Message 223 by Wounded King
05-08-2012 5:01 AM


Re: creating "information" is either easy or irrelevant
Hi Wounded King
quote:
RAZD is by no means saying that there aren't usable metrics for measuring information in the genome; what he is saying is that creationists/IDists don't use these metrics but instead prefer their own peculiar variations which are rarely if ever actually usable. In the few examples where a metric can be applied, such as Spetner's metrics or Durston et al.'s functional information, to get a value there is little way to relate it meaningfully to any actual biological function or system in order to look at changes in that system.
Is it necessary for you to define what RAZD implied? I found his statement rather broad in scope. You seem to know a great deal about the creationist arguments. Do you care to name a metric or variation that you conclude is rarely, if ever, actually usable?
I know I cited a non-creationist paper, but it does provide mathematical validity to my assertion. My point was that by using the principle of maximum entropy it is not necessary to demonstrate specific functionality in the genome. Assuredly, the utility of such information is relevant, but not to the basic assertion that DNA is an information storage system. I am eventually going in that direction.
quote:
I'm not entirely clear who you are talking about here, do you mean Stephen Meyer? If so then I'd ask what that clear definition is, because all the ones I have seen are pretty much a mess and he seems to dismiss commonly used ones like Shannon entropy or Kolmogorov complexity.
This is actually getting ahead of the point a bit, but I might go to Dembski's specified complexity argument, where he mathematically quantifies specified complexity in the genome using a chance hypothesis. Specifically expressed:
Working on it...
Interesting that you mentioned Kolmogorov complexity in your example; can you explain why?
Edited by zaius137, : A Newbe
Edited by zaius137, : No reason given.
Edited by zaius137, : double Newbe...

This message is a reply to:
 Message 223 by Wounded King, posted 05-08-2012 5:01 AM Wounded King has replied

Replies to this message:
 Message 225 by Wounded King, posted 05-08-2012 3:04 PM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 226 of 314 (661605)
05-08-2012 4:19 PM
Reply to: Message 225 by Wounded King
05-08-2012 3:04 PM


Re: creating "information" is either easy or irrelevant
quote:
Well how broad? I probably couldn't think of more than a dozen IDist/creationists at most putting forward distinct definitions of information. I would suggest that RAZD's criticisms would cover a substantial proportion of them, as would the similar issues I raised. Lee Spetner, Werner Gitt, William Dembski, Durston/Able/Trevors (I'm putting them together since they published papers on this together), Doug Axe, John Sanford, Royal Truman and maybe Ann Gauger. There may be some overlap amongst them as well; 'Complex Specified Information' is pervasive, but I haven't done an exhaustive comparison.
As you and I both know, an exhaustive review is very difficult; I will go along with you on that.
quote:
I don't think it does. You claim 'explaining the persistence of new information is problematical'. In what way does the paper you cited support this? The paper itself concludes that in the artificial system they study new information could arise rapidly, contrary to the predictions of creationists/IDists.
The mathematics is good to a degree, but I do disagree with the author's conclusion, as I show in making my point. I do not support the spontaneous introduction of new information. I maintain that events of very low probability do not happen when they exceed any reasonable probability bound.
quote:
Because it is one of the metrics which Meyer discussed in 'Signature in the Cell', and which he rejected in favour of 'Complex Specified Information', though I'm not sure if he means quite the same thing by this as Dembski.
Here is where you confuse me a bit. I have attended presentations by Meyer and do not see a conflict in his exposition of specified complexity. In fact, Dembski's formulation is actually based on Kolmogorov complexity, and Meyer quoted it often. Of course, I could be wrong about Meyer's complete support on this, so for the sake of accuracy please cite the passage you refer to.
quote:
Hence the question about why you mention Kolmogorov complexity.
As soon as I figure out how to serve some of these images, maybe from a free web server, I will start to include the actual formula.
Edited by zaius137, : No reason given.
Edited by zaius137, : No reason given.

This message is a reply to:
 Message 225 by Wounded King, posted 05-08-2012 3:04 PM Wounded King has seen this message but not replied

Replies to this message:
 Message 227 by PaulK, posted 05-08-2012 4:34 PM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 228 of 314 (661609)
05-08-2012 5:24 PM
Reply to: Message 227 by PaulK
05-08-2012 4:34 PM


Re: creating "information" is either easy or irrelevant
Hi Paulk,
quote:
Then you fail to understand even the role of specification in Dembski's argument. Low probability events happen all the time. And where in the paper does it require any event below your probability bound?
About your first part... some parts maybe.
Show me where specified events of low probability happen all the time. I believe very small probabilities are not comprehended because they are never encountered in our everyday lives.
Here is some simple perspective:
What follows may be an oversimplified example (I know it is, so don't bother), but the scale is valid. Take an enormous number of atoms, say 1.0 × 10^415. Place them in a Trader Joe's bag, if that were possible, and mark a single atom that is placed with the others. You have a special set of tweezers with which you may pick out a single atom from anywhere in that bag. This puts the chance of a correct selection on a single try at 1 in 10^415. But suppose you are allowed to make selections from the bag once every second for 10^25 seconds (a billion times longer than the age of the universe since the big bang). This still leaves you making your last choice from a pool of about 10^415 atoms; the choices were not significantly reduced. To me this does not seem likely, given that there are an estimated 10^80 atoms in the known universe. The next selection must be made from a pool many orders of magnitude larger than the number of atoms in the entire universe. In a single universe, where is that atom to be found? Maybe it is one of the silicon atoms in the screen in front of you, or maybe a hydrogen atom in the Crab Nebula.
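The scale can be checked with a few lines of arithmetic; this back-of-the-envelope Python sketch just takes the numbers quoted above (10^415 atoms, 10^25 draws, 10^80 atoms in the universe) at face value and works in powers of ten.

# Back-of-the-envelope check of the numbers above, working in powers of ten.
log_pool = 415           # log10 of the number of atoms in the bag
log_draws = 25           # log10 of the number of one-per-second draws
log_universe_atoms = 80  # log10 of the estimated atoms in the known universe

# For tiny per-draw odds, the chance that any draw hits the marked atom is roughly draws/pool:
print(log_draws - log_pool)             # -390: overall success chance is about 1 in 10^390
print(log_pool - log_universe_atoms)    # 335: the pool exceeds the universe's atom count by about 10^335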
By the way, I am not a big fan of Dembski. However, here are his premises:
Premise 1: LIFE has occurred.
Premise 2: LIFE is specified.
Premise 3: If LIFE is due to chance, then LIFE has small probability.
Premise 4: Specified events of small probability do not occur by chance.
Premise 5: LIFE is not due to regularity.
Premise 6: LIFE is due to regularity, chance, or design.
Conclusion: LIFE is due to design.
"Dembski's proposed test is based on the Kolmogorov complexity of a pattern T that is exhibited by an event E that has occurred. Mathematically, E is a subset of Ω, the pattern T specifies a set of outcomes in Ω and E is a subset of T. Quoting Dembski[16]"
The Wiki
quote:
No, it isn't. In fact the lower the Kolmogorov complexity, the better the specification.
Entropy only specifies the number of bits required to encode information (Shannon entropy). Hence, a lower number (lower entropy) implies more organization; fewer bits are required, thus lower entropy. Entropy 101.
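To make the contrast with PaulK's point concrete, here is a rough Python sketch of my own that uses zlib output size as a computable stand-in for Kolmogorov complexity (which itself cannot be computed exactly): a highly patterned string and a random one can score about the same on a frequency-based entropy estimate, yet compress very differently.

# Rough sketch: zlib output size as a stand-in for Kolmogorov complexity, versus a
# frequency-based Shannon entropy estimate.
import random
import zlib
from math import log2
from collections import Counter

def entropy_per_char(text):
    # Per-character entropy estimated from character frequencies only (order is ignored).
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

random.seed(0)
patterned = "AB" * 500                                           # highly ordered, low Kolmogorov complexity
scrambled = "".join(random.choice("AB") for _ in range(1000))    # random, high Kolmogorov complexity

print(entropy_per_char(patterned), entropy_per_char(scrambled))  # both close to 1 bit per character
print(len(zlib.compress(patterned.encode())), len(zlib.compress(scrambled.encode())))
# The patterned string compresses to far fewer bytes even though the entropy estimates match.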
Edited by zaius137, : Newbe has not mastered basic skill set.

This message is a reply to:
 Message 227 by PaulK, posted 05-08-2012 4:34 PM PaulK has replied

Replies to this message:
 Message 229 by jar, posted 05-08-2012 5:36 PM zaius137 has not replied
 Message 230 by PaulK, posted 05-08-2012 5:40 PM zaius137 has not replied
 Message 231 by Panda, posted 05-08-2012 9:48 PM zaius137 has not replied
 Message 234 by RAZD, posted 05-09-2012 10:57 AM zaius137 has not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 236 of 314 (661723)
05-09-2012 4:39 PM
Reply to: Message 233 by RAZD
05-09-2012 8:24 AM


Re: creating "information" is either easy or irrelevant
Hi RAZD,
I like your detailed replies and I see a challenge in addressing your thoughtful points. There is a lot to catch up on here, and I hope the other participants understand why I cannot get right to the arguments they present, although they are just as challenging.
I would like to start with one of the posts you presented and cited, namely the six points you made.
quote:
1. The calculation is a mathematical model of reality and not the reality itself. When a model fails to replicate reality it is not reality that is at fault but the mathematical model. When a hurricane prediction program crashes because it can't model the first hurricane in the South Atlantic on record, the meteorologists don't go out to the hurricane and say "you can't be here, our model does not allow you to be here" ... they fix the model by looking for and taking out the failed assumptions (ie - that all hurricanes are north of the equator). When a model fails to model reality it is a good indication that some aspect of reality has been missed in the model.
The mathematical models that scientists use to uphold evolution rest on the very same principles. If you claim that mathematical models fail in a general sense, you remove the argument from science.
quote:
2. The calculation fails to account for the known pre-existing molecules used in the formation of life that are found throughout the universe, and this failure means the calculation with creation-all-at-once including these molecules is unnecessarily extended downward, starting with too much simplicity.
Not all the necessary molecules are present; for instance, cytosine is not found in meteorites. The sugar that bonds to the four bases to form the ribonucleotides is very short-lived in nature. Many problems exist with the RNA world view and the SRPs; I hope we can cover them fully. Science has never demonstrated empirically that anything but an all-at-once approach is possible.
quote:
3. The calculation fails to account for the fact that the first life need not be as complicated as a modern cell, that the minimum configuration is much simpler as shown by the LUCA studies. This failure means that the calculation is unnecessarily extended upward, ending with too much complexity.
To date, the idea of a LUCA has proven an intractable problem in biology. I have just read a paper by Theobald offering statistical verification of the LUCA based on a Markovian substitution model. Theobald's claim that a LUCA is statistically proven has been criticized by scientists (few of whom are creationists). I have my own unanswered questions about that paper.
quote:
4. The calculation fails to account for combinations of groups of such molecules in smorgasbord fashion instead of in assembly line fashion all at once all from nothing. And further, that all those "failed" experiments are still available to be cut and reassembled into new experiments without having to go through all the preliminaries. It fails to account for the actual combination process as used in natural assembly of large organic compounds. Amino acids are assembled into larger molecules like peptides and not from extending amino acids by adding atoms. This failure means that all the ways to reach the final necessary combination are not included and thus it unnecessarily excludes possible combination methods.
Can a failed experiment be available in a new experiment? I think this statement speculates about the stability of the product. I cannot deny that if there is an intention to preserve some organic molecules from degradation, then yes, the experiment can continue. However, natural chemistry has shown no intent to do so. In fact, equilibrium rules the day in natural chemistry. As far as the spontaneous assembly of amino acids is concerned, Miller's experiments demonstrate a chirality problem.
quote:
5. The probability of winning a lottery by any one ticket is extremely low, but the probability that the lottery will be won is extremely high. How do you reconcile these two very disparate probabilities? By knowing that any one of the millions of tickets is a valid winner if picked.
Well, in the larger ranges of probability I would agree with you, say 1 in 10^6 or 1 in 10^15. However, probabilities in the range of 1 in 10^1000 are not possible, given the acceptance that our universe is limited (I refer to a universal bound of possibilities). Acceptance of limits, as in calculus, is necessary in producing an outcome, even in physics (Planck length, Planck time, etc.). I suggest that Dembski's limit would be acceptable in biology.
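To keep RAZD's two probabilities separate from the bound I am proposing, here is a small Python sketch with made-up lottery numbers; the 10^-150 figure is the universal probability bound usually attributed to Dembski and is included only for comparison.

# Sketch of the two different probabilities in the lottery point (made-up numbers).
from math import log10

p_single = 1 / 10**8          # odds that one particular ticket wins (assumed for illustration)
tickets_sold = 5 * 10**7      # number of tickets sold (assumed for illustration)

p_someone_wins = 1 - (1 - p_single) ** tickets_sold
print(p_single)               # tiny for any one ticket
print(p_someone_wins)         # about 0.39: it is quite likely that someone wins

# Very small probabilities are easier to compare on a log scale:
print(log10(1e-15), log10(1e-150))   # -15 versus -150: the latter is 135 orders of magnitude smaller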
quote:
6. Finally, the improbability of a thing occurring is not proof of impossibility of it occurring.
I can refer you to my objection in point 5, but I think you might benefit from some perspective on the matter. Please comment on my Message 228. Please excuse my lack of forum knowledge; I am still a newbie.

This message is a reply to:
 Message 233 by RAZD, posted 05-09-2012 8:24 AM RAZD has replied

Replies to this message:
 Message 237 by Panda, posted 05-09-2012 6:36 PM zaius137 has not replied
 Message 238 by RAZD, posted 05-09-2012 6:39 PM zaius137 has not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 239 of 314 (661784)
05-10-2012 3:28 AM
Reply to: Message 233 by RAZD
05-09-2012 8:24 AM


Re: creating "information" is either easy or irrelevant
RAZD my friend
quote:
Are you measuring information or entropy? Does a change in entropy mean a change in information or vice versa? If there is no direct link one to the other then talking about a metric for entropy is not talking about a metric for information ... in which case it is irrelevant to the issue of information, yes?
Not at all. Here is the Wikipedia definition demonstrating the relationship between entropy and information:
quote:
In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits.
Entropy (information theory) - Wikipedia
quote:
In other words, creating information is easy, yes?
Can you walk me through how exactly you draw a conclusion about the ease of creating information from that citation?
quote:
So with "little is known about the information" in the original system or in the altered system then you have not shown any change in information, one way or the other, by using entropy, yes?
I think you are missing an important point here. By quantifying the entropy of DNA (by the principle of maximum entropy), you are exposing it to statistical methodology without needing particular knowledge of its function. Correct me if I am wrong, but delta entropy does not enter into the evaluation. This paper might be able to clarify what is going on here (it helped me):
quote:
Here we focus on estimating entropy from small-sample data, with applications in genomics and gene network inference in mind (Margolin et al., 2006; Meyer et al., 2007).
http://arxiv.org/pdf/0811.3579.pdf
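As a generic illustration of what "estimating entropy from small-sample data" involves (my own sketch, not the estimator developed in the linked paper), here is a plug-in estimate from counts plus a simple pseudocount-smoothed variant:

# Generic sketch: entropy estimated from small-sample counts (not the linked paper's estimator).
from math import log2

def plugin_entropy(counts):
    # Maximum-likelihood ("plug-in") entropy in bits, computed from observed counts.
    n = sum(counts)
    return -sum((c / n) * log2(c / n) for c in counts if c > 0)

def smoothed_entropy(counts, pseudocount=0.5):
    # Same estimate after adding a pseudocount to each category: a crude shrinkage toward uniform.
    return plugin_entropy([c + pseudocount for c in counts])

counts = [12, 3, 1, 0]              # made-up small-sample counts over a 4-letter alphabet
print(plugin_entropy(counts))       # plug-in estimates tend to be biased low for small samples
print(smoothed_entropy(counts))     # pseudocounts pull the estimate toward log2(4) = 2 bits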
quote:
Are we talking about entropy as used in physics, or are we talking about a different use of the word, and if so what is the definition for it?
quote:
The (classic physics) entropy in a biological organism can obviously increase and decrease as the organism grows or dies. Does this mean that information also increases and decreases?
From that question, I believe you might be on the wrong track.
Specifically, entropy as defined by Shannon in information theory:
H = -P(0) log2 P(0) - P(1) log2 P(1)
quote:
... The transition is rapid, demonstrating that information gain can occur by punctuated equilibrium.
quote:
This could also just be an artifact of the selection process used in the simulation, condensing the time-line artificially as compared to the effects of selection in the biological systems.
OK
quote:
The fact that evolution of new traits is not inhibited to me is proof that information is either easy to increase or irrelevant.
A question: have we observed new traits evolve?
Have new unique gene sequences ever been observed to arise spontaneously in genomes? Are changes in an organism only because of adaptation (microevolution)? What is the molecular mechanism for evolution?
I would like to review two claims about new innovative functions that supposedly evolved, one in E. coli and the other in a strain of Flavobacterium (the nylon-eating bacteria). I maintain that these in no way indicate anything but microevolution.
E. coli
In the case of E. coli adapting to metabolize citrate, that function has been present all along in E. coli and is not innovative. Under certain (low-oxygen) conditions, E. coli can utilize citrate. Lenski's 20-year experiment with E. coli only demonstrates an adaptation by E. coli.
quote:
Previous research has shown that wild-type E. coli can utilize citrate when oxygen levels are low. Under these conditions, citrate is taken into the cell and used in a fermentation pathway. The gene (citT) in E. coli is believed to encode a citrate transporter (a protein which transports citrate into the cell).
Klaas Pos, et al., The Escherichia coli Citrate Carrier CitT: A Member of a Novel Eubacterial Transporter Family Related to the 2-oxoglutarate/malate Translocator from Spinach Chloroplasts, Journal of Bacteriology 180 no. 16 (1998): 4160-4165.
Thus, wild-type E. coli already have the ability to transport citrate into the cell and utilize it; so much for the idea of a major innovation and evolution.
A Poke in the Eye? | Answers in Genesis
Nylonase
Nylon-eating bacteria are just another case of programmed adaptation. There are six open reading frames in the DNA that code for proteins. The proposed mechanism was a single point mutation and a supposed gene duplication event that triggered an open reading frame shift. The entire process was restricted to the very mechanisms that allow adaptation in bacteria to different food sources.
quote:
This discovery led geneticist Susumu Ohno to speculate that the gene for one of the enzymes, 6-aminohexanoic acid hydrolase, had come about from the combination of a gene duplication event with a frame shift mutation. Ohno suggested that many unique new genes have evolved this way.
Nylon-eating bacteria - Wikipedia
quote:
Thus, contrary to Miller, the nylonase enzyme seems pre-designed in the sense that the original DNA sequence was preadapted for frame-shift mutations to occur without destroying the protein-coding potential of the original gene. Indeed, this protein sequence seems designed to be specifically adaptable to novel functions.
Why Scientists Should NOT Dismiss Intelligent Design – Uncommon Descent
Past research provides supporting evidence for the suggestion that these open reading frame segments and existing gene duplication events are the main mechanisms for new functionality.
quote:
The mechanism of gene duplication as the means to acquire new genes with previously nonexistent functions is inherently self limiting in that the function possessed by a new protein, in reality, is but a mere variation of the preexisted theme.
Birth of a unique enzyme from an alternative reading frame of the preexisted, internally repetitious coding sequence. - PMC
http://www.ncbi.nlm.nih.gov/...345072/pdf/pnas00609-0153.pdf
Dr. Jim Shapiro, Chicago, Natural Genetic Engineering -- the Toolbox for Evolution: Prokaryotes
My question to the evolutionist: if no new spontaneous segments of genes arise in genomes, how are species gaining unique sequences of DNA? By unique, I am not referring to genome duplications.
quote:
Evolution's mutation mechanism does not explain how growth of a genome is possible. How can point mutations create new chromosomes or lengthen a strand of DNA? It is interesting to note that, in all of the selective breeding in dogs, there has been no change to the basic dog genome. All breeds of dog can still mate with one another. People have not seen any increase in dogs' DNA, but have simply selected different genes from the existing dog gene pool to create the different breeds.
Question 1: How Does Evolution Add Information? - How Evolution Works | HowStuffWorks

This message is a reply to:
 Message 233 by RAZD, posted 05-09-2012 8:24 AM RAZD has replied

Replies to this message:
 Message 240 by Dr Adequate, posted 05-10-2012 4:40 AM zaius137 has replied
 Message 245 by RAZD, posted 05-10-2012 5:30 PM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 247 of 314 (661875)
05-10-2012 6:36 PM
Reply to: Message 240 by Dr Adequate
05-10-2012 4:40 AM


Re: Information
quote:
Well, if your choice is Shannon entropy, then creating information is easy. Any insertion would do it, since the insertion increases the number of bits in the genome, and since the content of these bits is not completely predictable from their context.
You are really going to have to go into the math here. I do not see where increasing the size of the genome varies the probability estimation of that genome. Remember, if Shannon entropy is increased, the number of bits that express the uncertainty will increase, but the implied information content decreases. Uncertainty goes up; implied information goes down.
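Just to pin down the quantity we are both talking about, here is a neutral Python sketch that computes total Shannon bits for a sequence under a uniform, independent-base model before and after a random insertion; it only shows the arithmetic, not which interpretation of "information" is right.

# Total Shannon bits of a sequence under a uniform, independent-base model,
# before and after a single random insertion.
import random
from math import log2

def total_bits(seq, alphabet="ACGT"):
    # Length times per-symbol entropy, assuming each base is uniform over the alphabet.
    return len(seq) * log2(len(alphabet))

original = "ATGCGATACCGT"                                   # made-up sequence
pos = random.randrange(len(original) + 1)
inserted = original[:pos] + random.choice("ACGT") + original[pos:]

print(total_bits(original))    # 12 bases * 2 bits = 24 bits
print(total_bits(inserted))    # 13 bases * 2 bits = 26 bits: the insertion adds bits under this model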

This message is a reply to:
 Message 240 by Dr Adequate, posted 05-10-2012 4:40 AM Dr Adequate has replied

Replies to this message:
 Message 250 by Dr Adequate, posted 05-10-2012 7:36 PM zaius137 has not replied
 Message 251 by Percy, posted 05-11-2012 6:52 AM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 248 of 314 (661878)
05-10-2012 6:40 PM
Reply to: Message 245 by RAZD
05-10-2012 5:30 PM


Re: STILL OFF TOPIC
I apologize RAZD...
I thought I could bring all this together but I see the subject is extremely diffuse.

This message is a reply to:
 Message 245 by RAZD, posted 05-10-2012 5:30 PM RAZD has seen this message but not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 249 of 314 (661880)
05-10-2012 6:44 PM
Reply to: Message 241 by caffeine
05-10-2012 5:38 AM


Re: Logic
Thank you...

This message is a reply to:
 Message 241 by caffeine, posted 05-10-2012 5:38 AM caffeine has not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 252 of 314 (662072)
05-12-2012 2:57 AM
Reply to: Message 251 by Percy
05-11-2012 6:52 AM


Re: Information
quote:
When uncertainty is greatest concerning the state of the next bit to be communicated is when the most information is exchanged.
My last point on this, and I believe it may tie into this thread.
The uncertainties or probabilities directly produce the resulting entropy (Shannon entropy). For instance, a fair coin toss has an entropy of one bit (to transmit the outcome of a fair coin toss you need one bit). An unfair coin toss (say 70% heads and 30% tails) has an entropy of about 0.88 bits. The entropy is less because the outcome is more certain. A perfectly predictable outcome has the lowest entropy. As in cybernetics, information can reduce entropy. Consequently, I am implying an inverse relationship between information and entropy.
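The two numbers above can be checked directly; a minimal Python sketch:

# Quick check of the fair-coin and 70/30-coin figures quoted above.
from math import log2

def coin_entropy(p_heads):
    # Shannon entropy in bits of a coin with the given probability of heads (0 < p < 1).
    p, q = p_heads, 1 - p_heads
    return -(p * log2(p) + q * log2(q))

print(coin_entropy(0.5))   # 1.0 bit for a fair coin
print(coin_entropy(0.7))   # about 0.88 bits for a 70/30 coin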
cybernetics
The science or study of communication in organisms, organic processes, and mechanical or electronic systems; the replication or imitation of biological control systems with the use of technology.
quote:
As Wiener (1954) explains, just as entropy is a measure of disorganization, the information carried by a set of messages is a measure of organization (p. 17). In other words, information can reduce entropy.
Communication | College of Media, Communication and Information | University of Colorado Boulder
This relationship, as I encountered it, is presented in a book by A.E. Wilder-Smith and is offered as an ultimate test for an intelligent designer.
http://www.wildersmith.org/library.htm
All appreciation to those holding PhDs in the field of mathematics, but I would really like a citation relating to their point.
Edited by zaius137, : Spelling

This message is a reply to:
 Message 251 by Percy, posted 05-11-2012 6:52 AM Percy has replied

Replies to this message:
 Message 253 by PaulK, posted 05-12-2012 5:55 AM zaius137 has not replied
 Message 254 by Percy, posted 05-12-2012 7:38 AM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 255 of 314 (662116)
05-12-2012 1:22 PM
Reply to: Message 254 by Percy
05-12-2012 7:38 AM


Re: Information
Remember where Shannon entropy is most appropriate. It gives insight into how many bits are needed to convey a variable over a communications channel. As the randomness of that variable increases (less innate information), the number of bits needed to convey it increases (more bits to convey that randomness).
Information decreases (innate information of the variable); entropy increases (bits to express the variable).
This relationship holds in communications and in biology.
quote:
The basic concept of entropy in information theory has to do with how much randomness is in a signal or in a random event. An alternative way to look at this is to talk about how much information is carried by the signal.
As an example consider some English text, encoded as a string of letters, spaces and punctuation (so our signal is a string of characters). Since some characters are not very likely (e.g. 'z') while others are very common (e.g. 'e') the string of characters is not really as random as it might be. On the other hand, since we cannot predict what the next character will be, it does have some 'randomness'. Entropy is a measure of this randomness, suggested by Claude E. Shannon in his 1949 paper A Mathematical Theory of Communication.
http://www.wordiq.com/definition/Shannon_entropy
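The quoted point can be illustrated with a short Python sketch (the sample text is made up and stands in for real English): uneven character frequencies pull the estimated entropy below the uniform limit of log2(27) bits for 26 letters plus a space.

# Rough illustration: English-like text is less random per character than uniform text.
from math import log2
from collections import Counter

def entropy_per_char(text):
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 20   # made-up stand-in for English text
print(entropy_per_char(sample))   # below the uniform limit because frequencies are uneven
print(log2(27))                   # about 4.75 bits: 26 letters plus space, all equally likely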
Edited by zaius137, : No reason given.

This message is a reply to:
 Message 254 by Percy, posted 05-12-2012 7:38 AM Percy has replied

Replies to this message:
 Message 256 by Percy, posted 05-13-2012 7:57 AM zaius137 has replied
 Message 260 by Dr Adequate, posted 05-14-2012 2:31 AM zaius137 has not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 257 of 314 (662238)
05-14-2012 12:59 AM
Reply to: Message 256 by Percy
05-13-2012 7:57 AM


Re: Information
quote:
As my odds of guessing the next letter have risen from 1/9 to 1/6 to 1/1, in other words as the randomness has declined, the entropy of each next letter and the information communicated has also declined. It is a direct relationship.
I think we are viewing the same elephant from different angles. When you say the randomness of the system declined, I say the innate information of the system increased. Yes, the number of bits needed to quantify the system would then decrease. Information of the system increases as entropy decreases (it is an inverse relationship from that perspective).
I believe I can say we agree here.
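For reference, the arithmetic behind those odds, assuming each guess is uniform over the remaining candidates, looks like this:

# Entropy of the next letter when it is uniform over 9, 6, or 1 remaining candidates.
from math import log2

for candidates in (9, 6, 1):
    print(candidates, log2(candidates))
# 9 -> about 3.17 bits, 6 -> about 2.58 bits, 1 -> 0 bits: fewer candidates, less entropy.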

This message is a reply to:
 Message 256 by Percy, posted 05-13-2012 7:57 AM Percy has replied

Replies to this message:
 Message 258 by PaulK, posted 05-14-2012 2:23 AM zaius137 has not replied
 Message 259 by Dr Adequate, posted 05-14-2012 2:25 AM zaius137 has not replied
 Message 261 by Percy, posted 05-14-2012 9:00 AM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 262 of 314 (662306)
05-14-2012 3:20 PM
Reply to: Message 261 by Percy
05-14-2012 9:00 AM


Re: Information
Percy my friend, we are making headway.
quote:
Here's an example of how you're thinking about information. We have a book on our computer that contains information. We run the book through a program that randomly scrambles all the characters. You think the book now has less information, and that's where you've gone wrong.
The fact of the matter is that the book now has more information than it had before because we're less able to predict the next character. For example, if I saw the letter "q" in the original book I would know that the next letter was "u". When I find out that the next letter is "u" I haven't learned anything. No information has been communicated.
But if I saw the letter "q" in the scrambled book I would have no idea what the next letter could be. When I find out the next letter is "f" I have learned something I could not possibly have known. Information has definitely been communicated.
Your original point was that "creationists like Myers" have defined "information in the genome", but you have as yet offered no evidence whatsoever of this, and the fact that you yourself misunderstand information underscores this point.
quote:
Although entropy is often used as a characterization of the information content of a data source, this information content is not absolute: it depends crucially on the probabilistic model.
http://turing.une.edu.au/~cwatson7/I/ConditionalEntropy.html.
I still think that you are confusing the information in the data source with the method and result of Shannon entropy (the amount of information needed to transmit that information). Remember, I said the entire exercise of using Shannon entropy was to expose a system containing innate information to the power of statistics. I also mentioned that the principal use of maximum entropy was to avoid the problem of not knowing what exactly that information is (contained in the source data).
I don't think you are arguing that Shannon entropy cannot be used to infer information, but you seem rather unsure about what the probabilistic model might be assessing.
The entire validity of using Shannon entropy rests on how you define the probability.
In your book example, scrambling the letters of the entire book will not change the entropy if your probability model is broad enough. You might ask where the actual information in that book is. The information in that book is conveyed by the order of the characters or letters forming the words and sentences. Therefore, if you set the probability model at the level of order within the words, then your entropy would certainly change. Please read the following citation and you will see that order is a consideration in the genome.
http://pnylab.com/pny/papers/cdna/cdna/index.html
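Here is a small Python sketch of my own making that illustrates exactly this dependence on the probability model: scrambling a made-up "book" leaves the single-character (frequency-only) entropy unchanged but raises the entropy of each character given the one before it.

# Scrambling leaves single-character entropy unchanged but raises the conditional
# (next-character-given-previous) entropy, because it destroys the ordering.
import random
from math import log2
from collections import Counter

def entropy(counts):
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

def unigram_entropy(text):
    # Depends only on character frequencies, not on their order.
    return entropy(Counter(text))

def conditional_bigram_entropy(text):
    # Average entropy of the next character given the previous character.
    pairs = Counter(zip(text, text[1:]))
    prev = Counter(text[:-1])
    n = sum(pairs.values())
    return -sum((c / n) * log2(c / prev[a]) for (a, b), c in pairs.items())

book = "the cat sat on the mat and the dog sat on the log " * 50   # made-up stand-in text
scrambled = "".join(random.sample(book, len(book)))                # same characters, random order

print(unigram_entropy(book), unigram_entropy(scrambled))            # identical: same character counts
print(conditional_bigram_entropy(book), conditional_bigram_entropy(scrambled))  # scrambling raises this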

This message is a reply to:
 Message 261 by Percy, posted 05-14-2012 9:00 AM Percy has replied

Replies to this message:
 Message 263 by PaulK, posted 05-14-2012 3:42 PM zaius137 has not replied
 Message 264 by Percy, posted 05-14-2012 4:36 PM zaius137 has replied
 Message 265 by Dr Adequate, posted 05-14-2012 6:59 PM zaius137 has not replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 266 of 314 (662364)
05-15-2012 3:13 AM
Reply to: Message 264 by Percy
05-14-2012 4:36 PM


Re: Information
Percy my friend,
quote:
For example, specifying the outcome of a fair coin flip (two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (six equally likely outcomes).
The comparison here is between a die throw and a coin flip. The coin flip needs only a one-bit transmission to convey the message, whereas the die roll takes about 2.6 bits of transmission to convey the message. The "provides less information" only refers to the transmission information.
Remember, I acknowledge that the information in the message is independent of the amount of information that is required to transmit the message.
I am directly implying that less uncertainty in the message implies more information in the message; if you like, greater negentropy.
"Negentropy" is a term coined by Erwin Schrödinger in his popular-science book "What is Life?" (1943).
quote:
Schrödinger introduced that term when explaining that a living system exports entropy in order to maintain its own entropy at a low level. By using the term "negentropy", he could express this fact in a more "positive" way: a living system imports negentropy and stores it.
Edited by zaius137, : No reason given.
Edited by zaius137, : No reason given.
Edited by zaius137, : No reason given.
Edited by zaius137, : Edit is not taking.
Edited by zaius137, : No reason given.

This message is a reply to:
 Message 264 by Percy, posted 05-14-2012 4:36 PM Percy has replied

Replies to this message:
 Message 267 by PaulK, posted 05-15-2012 4:40 AM zaius137 has not replied
 Message 268 by Dr Adequate, posted 05-15-2012 6:00 AM zaius137 has not replied
 Message 269 by Percy, posted 05-15-2012 9:12 AM zaius137 has replied

  
zaius137
Member (Idle past 3409 days)
Posts: 407
Joined: 05-08-2012


Message 270 of 314 (662417)
05-15-2012 1:48 PM
Reply to: Message 269 by Percy
05-15-2012 9:12 AM


Re: Information
Percy my friend,
Great, but what about the unfair coin? The message set is still {0, 1}, but the entropy for, say, a probability of 70% heads and 30% tails (more predictable, higher negentropy) is:
= -(0.7) log2(0.7) - (0.3) log2(0.3)
= -(0.7)(-0.515) - (0.3)(-1.737)
= 0.36 + 0.52
= 0.88 bits
This demonstrates the minimum number of bits needed to transmit the message and gives the theoretical limit of compression. As for your hard drive data, it can be processed by a number of compression algorithms and transmitted with fewer bits (my guess).
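As a rough sanity check of the compression claim (my own sketch; the exact figure depends on the compressor), the following packs a long run of simulated 70/30 coin flips into bytes and compares zlib's output size per flip against the 0.88-bit Shannon limit, which, in the long run, no lossless compressor can beat:

# Compare zlib's bits-per-flip on simulated 70/30 coin flips with the Shannon limit.
import random
import zlib
from math import log2

random.seed(0)
n_flips = 100000
flips = "".join("1" if random.random() < 0.7 else "0" for _ in range(n_flips))

packed = int(flips, 2).to_bytes(n_flips // 8 + 1, "big")   # pack the 0/1 string into raw bytes
compressed = zlib.compress(packed, 9)

print(8 * len(compressed) / n_flips)               # bits per flip actually used after compression
print(-(0.7 * log2(0.7) + 0.3 * log2(0.3)))        # the Shannon lower bound: about 0.88 bits per flip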

This message is a reply to:
 Message 269 by Percy, posted 05-15-2012 9:12 AM Percy has replied

Replies to this message:
 Message 271 by PaulK, posted 05-15-2012 2:06 PM zaius137 has replied
 Message 272 by Percy, posted 05-15-2012 9:29 PM zaius137 has not replied

  