Author Topic:   How do "novel" features evolve?
PaulK
Member
Posts: 17906
Joined: 01-10-2003
Member Rating: 7.2


Message 271 of 314 (662418)
05-15-2012 2:06 PM
Reply to: Message 270 by zaius137
05-15-2012 1:48 PM


Re: Information
So a biased coin produces less information. And a two-headed coin would produce no information...
Now, are you ever going to actually explain your point?

This message is a reply to:
 Message 270 by zaius137, posted 05-15-2012 1:48 PM zaius137 has replied

Replies to this message:
 Message 274 by zaius137, posted 05-16-2012 12:29 AM PaulK has replied

  
Percy
Member
Posts: 22929
From: New Hampshire
Joined: 12-23-2000
Member Rating: 7.2


Message 272 of 314 (662441)
05-15-2012 9:29 PM
Reply to: Message 270 by zaius137
05-15-2012 1:48 PM


Re: Information
Hi Zaius,
Your position is that entropy and information have an inverse relationship. I showed that for equiprobable messages this is precisely backwards for message sets of any size. How, in your imagination, does your example of a 70/30 two-message set contradict this? You didn't even attempt to draw any comparisons to other message set sizes with non-equiprobable distributions.
If you keep other things roughly equal with similar probability distributions then you'll find that for message sets of any size entropy and information have a direct relationship. Here are some examples:
Probabilities            Information (bits)   Entropy
{.5, .5}                 1                    1
{.25, .5, .25}           1.6                  1.5
{.1, .4, .4, .1}         2                    1.7
{.1, .2, .4, .2, .1}     2.3                  2.1
This table just goes from 1 bit to 1.6 bits to 2 bits to 2.3 bits, and the direct relationship between information and entropy is clear. If I instead went from 1 bit to 10 bits to 100 bits to 1000 bits the relationship would become even more clear, as if that's necessary.
Why don't you put together your own table showing us the incredibly different probability distributions you have to use before you can get a consistently declining entropy with increasing message set size?
AbE: Just for the heck of it, the entropy for an equiprobable message set of size 10 is 3.3, for size 100 is 6.6, for size 1000 is 10.0, for size 10,000 is 13.2, for size 100,000 is 16.6, for 1,000,000 is 20.0, etc., etc., etc. Get the idea?
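To make these numbers easy to check, here is a minimal Python sketch (mine, not part of Percy's post) that computes Shannon entropy, H = -sum(p log2 p), and the information content log2(n) of an equiprobable n-message set; apart from rounding it reproduces both the table and the AbE figures.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p), skipping zero-probability messages."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def info_bits(n):
    """Information per message for an equiprobable set of n messages: log2(n)."""
    return math.log2(n)

# The distributions from the table above
for probs in [(0.5, 0.5), (0.25, 0.5, 0.25), (0.1, 0.4, 0.4, 0.1), (0.1, 0.2, 0.4, 0.2, 0.1)]:
    print(probs, round(info_bits(len(probs)), 1), round(entropy(probs), 1))

# The AbE examples: entropy of equiprobable message sets of increasing size
for n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
    print(n, round(info_bits(n), 1))
```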
--Percy
Edited by Percy, : AbE.

This message is a reply to:
 Message 270 by zaius137, posted 05-15-2012 1:48 PM zaius137 has not replied

  
NoNukes
Inactive Member


(1)
Message 273 of 314 (662453)
05-15-2012 10:37 PM
Reply to: Message 269 by Percy
05-15-2012 9:12 AM


Re: Information
The amount of information required to transmit some information content over a lossless channel is equal to the amount of information content. If you have a 2 megabyte disk file on your computer then it will take 2 megabytes of information to transmit that file over a lossless channel.
The latter sentence cannot be right. Not every 2 megabyte disk file contains 2 megabytes of information. For example, a compressed text file contains the same information in both the compressed and uncompressed format. Yet the rules governing the translation from one format to another might require only a few bytes to transmit.
I'll note that the latter quoted statement does not follow from the former statement above regarding the amount of information required to transmit an amount of information across a channel.
I have a vague recollection that we have discussed this issue before.
Of course, the above should suggest that information and entropy cannot be inversely related, since the entropies of a compressed and an uncompressed file are different.

Under a government which imprisons any unjustly, the true place for a just man is also in prison. Thoreau: Civil Disobedience (1846)
The apathy of the people is enough to make every statue leap from its pedestal and hasten the resurrection of the dead. William Lloyd Garrison

This message is a reply to:
 Message 269 by Percy, posted 05-15-2012 9:12 AM Percy has replied

Replies to this message:
 Message 279 by Percy, posted 05-16-2012 7:41 AM NoNukes has replied

  
zaius137
Member (Idle past 3658 days)
Posts: 407
Joined: 05-08-2012


Message 274 of 314 (662466)
05-16-2012 12:29 AM
Reply to: Message 271 by PaulK
05-15-2012 2:06 PM


Re: Information
PaulK my friend,
I am not a mathematician in any sense of the word, but I believe it is not that the unfair coin possesses less information; my understanding is that the unfair coin is more predictable. Therefore, it takes less information to transmit the result of the toss. I have received some criticism that I am using the term information incorrectly when I refer to the message possessing more information. This criticism may be justified, so I have been using the term negentropy instead.
Anyway, I am trying to express that a more predictable message would possess less entropy, and thus the transmission of that message would require fewer bits.
What is your opinion?

This message is a reply to:
 Message 271 by PaulK, posted 05-15-2012 2:06 PM PaulK has replied

Replies to this message:
 Message 276 by PaulK, posted 05-16-2012 1:37 AM zaius137 has not replied
 Message 278 by Percy, posted 05-16-2012 7:31 AM zaius137 has not replied

  
zaius137
Member (Idle past 3658 days)
Posts: 407
Joined: 05-08-2012


Message 275 of 314 (662470)
05-16-2012 12:52 AM
Reply to: Message 1 by RAZD
03-08-2012 10:47 PM


Re: what is novel?
RAZD my friend,
Entropy is not my favorite subject, so I will try to get back onto the theme of your original post: how do novel features evolve?
I will ask if novel features can evolve at all, given that there are not enough mutations to create them. Let me ask the following question:
Are there enough random mutations to create new innovative features between humans and chimps?
To illustrate my point, I will calculate de novo the time needed for the supposed Pan Human divergence with current findings of mutation rate and divergence between chimp and human genes.
Commonly the date accepted by evolutionists is 5 million years. I will show that 5 million years is impossibly short given the actual evidence.
The formula comes from a paper by Michael W. Nachman and Susan L. Crowell and was originally used to estimate the mutation rate per generation needed to change a common ancestor hominid into a human in 5 million years. I believe the formulation arrived at 175 mutations per generation, but there has been an empirical adjustment of the calculated rate to only 60 mutations per generation (about 1/3). In addition, the mutation divergence of autosomal pseudogenes has been raised slightly, to about 3% between chimps and humans.
The formula is k = 2(u)t + 4Ne(u) (variables defined below).
My 3% figure is from:
quote:
... on closer inspection we differ by 1.2% in the functional genes that code for proteins. And we also differ by about 3% in the non-coding DNA regions, so called "junk DNA" - although this phrase seems to be losing meaning as some of these regions regulate genes and possess as yet unknown functions. So overall we can say that chimp and human DNA is about 96% identical - which is still very close. If you were to lay both genomes out side by side you would see that base for base they are 96% similar.
https://www.brighthub.com/science/genetics/articles/34219/
60 mutations per generation from:
http://www.geneticarchaeology.com/...d_of_human_mutation.asp
The Average Human Has 60 New Genetic Mutations - Slashdot
I will work the equation in reverse, inserting the known mutation rate per generation in humans and the newer pseudogene divergence.
t = 0.5(k/u - 4Ne)
t = number of generations since divergence (generation = 20 years)
k = percentage of sequence divergence (estimated at 3%)
Ne = effective size of population (10^4)
u = mutation rate (9x10^-9, i.e. 60 / 7x10^9)
From Estimate of the Mutation Rate per Nucleotide in Humans | Genetics | Oxford Academic
t in generations is 1.65 million, or 33 million years since the human-chimp divergence.
In other words, new evidence in genetic divergence between chimps and humans and the measured mutation rate in humans has pushed the possible divergence back to 33 million years. You cannot change a monkey into a man.
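For anyone who wants to check the arithmetic, this short Python sketch (mine, not zaius137's) simply plugs the stated inputs into the rearranged formula t = 0.5(k/u - 4Ne); it reproduces the roughly 33-million-year figure from those inputs without saying anything about whether the inputs themselves are appropriate.

```python
# Evaluate t = 0.5 * (k/u - 4*Ne) with the values stated above.
k = 0.03          # sequence divergence, the 3% figure quoted above
u = 60 / 7e9      # mutation rate per site per generation (~8.6e-9; rounded to 9e-9 in the post)
Ne = 1e4          # effective population size
gen_years = 20    # assumed years per generation

t_generations = 0.5 * (k / u - 4 * Ne)
print(f"{t_generations:.2e} generations = {t_generations * gen_years / 1e6:.0f} million years")
# ~1.7e6 generations, i.e. roughly 33-35 million years depending on how u is rounded
```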
Edited by zaius137, : No reason given.

This message is a reply to:
 Message 1 by RAZD, posted 03-08-2012 10:47 PM RAZD has seen this message but not replied

Replies to this message:
 Message 277 by Dr Adequate, posted 05-16-2012 4:23 AM zaius137 has not replied
 Message 281 by Wounded King, posted 05-16-2012 11:13 AM zaius137 has replied

  
PaulK
Member
Posts: 17906
Joined: 01-10-2003
Member Rating: 7.2


Message 276 of 314 (662475)
05-16-2012 1:37 AM
Reply to: Message 274 by zaius137
05-16-2012 12:29 AM


Re: Information
quote:
I am not a mathematician in any sense of the word, but I believe it is not that the unfair coin possesses less information; my understanding is that the unfair coin is more predictable.
Then you are really confused. You are confusing the coin with the act of tossing it, and you are confusing pure chance with communication. And even worse, you are confusing predictability with information. It seems very clear to me that you have an idea in your head but you haven't thought it out - or even considered the points I've raised.
quote:
Therefore, it takes less information to transmit the result of the toss. I have received some criticism that I am using the term information incorrectly when I refer to the message possessing more information. This criticism may be justified, so I have been using the term negentropy instead.
And why is this "negentropy" important? Please tell me what it is about a two-headed coin that makes it more complex or more ordered than a normal coin in any significant way. For instance, if we simply consider the coins themselves, shouldn't a normal coin, with its differing designs on obverse and reverse, be considered as offering more information than the two-headed coin rather than less?

This message is a reply to:
 Message 274 by zaius137, posted 05-16-2012 12:29 AM zaius137 has not replied

  
Dr Adequate
Member
Posts: 16113
Joined: 07-20-2006


Message 277 of 314 (662489)
05-16-2012 4:23 AM
Reply to: Message 275 by zaius137
05-16-2012 12:52 AM


Re: what is novel?
In other words, new evidence in genetic divergence between chimps and humans and the measured mutation rate in humans has pushed the possible divergence back to 33 million years. You cannot change a monkey into a man.
According to your own calculations, you can do so in about 33 million years. Like the woman in the joke, all you're haggling about is the price.
60 mutations per generation from ...
... a study involving precisely two children.
Commonly the date accepted by evolutionists is 5 million years.
No. They do not, in fact, commonly accept a date.
(u)=mutation rate (9x10^-9)
That's not what I got using your own figure for the number of mutations. It should be mutations/base pairs = 1.76 x 10^-8.
This alone halves the number of generations.
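For comparison, here is a small sketch (my illustration, not Dr Adequate's code) that runs both rates through the same rearranged formula; the 1.76 x 10^-8 figure corresponds, for example, to 60 new mutations spread over roughly 3.4 billion base pairs.

```python
# Same formula as above, once with the rate used in the post and once with 1.76e-8.
k = 0.03          # the 3% divergence figure used above
Ne = 1e4          # effective population size
gen_years = 20    # years per generation

for u in (9e-9, 1.76e-8):
    t_gen = 0.5 * (k / u - 4 * Ne)
    print(f"u = {u:.2e}: {t_gen:.2e} generations, {t_gen * gen_years / 1e6:.0f} million years")
# u = 9.00e-09: ~1.65e6 generations, ~33 million years
# u = 1.76e-08: ~8.3e5 generations, ~17 million years -- roughly half
```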
---
But the other thing ... have you thought about what you're trying to achieve here? The figures biologists use to estimate the time of divergence are based precisely on the sort of genetic data you're using.
You seem to be reasoning: "evolutionists say it took five million years, if I can show that it would take 35 million years, then it would never have fit into the evolutionists' time-frame."
But if it could be shown definitively and to everyone's satisfaction that it would take 35 million years, then evolutionists would immediately with one voice say that that was how long it took. Because their estimate for how long it did take is based on their estimate of how long it would take. That's where their time frame comes from in the first place. If we make a new measurement of the mutation rate, that changes how long it would have taken, and it also, simultaneously, changes how long evolutionists think it took, because it's the same thing.

This message is a reply to:
 Message 275 by zaius137, posted 05-16-2012 12:52 AM zaius137 has not replied

  
Percy
Member
Posts: 22929
From: New Hampshire
Joined: 12-23-2000
Member Rating: 7.2


Message 278 of 314 (662497)
05-16-2012 7:31 AM
Reply to: Message 274 by zaius137
05-16-2012 12:29 AM


Re: Information
zaius137 writes:
I am not a mathematician in any sense of the word, but I believe it is not that the unfair coin possesses less information; my understanding is that the unfair coin is more predictable.
This is true. As predictability increases, entropy decreases. When the probabilities for the two sides of the coin become {1, 0} then the predictability is 1 and the entropy is 0. You can choose probabilities for the two sides of a coin that will cause the entropy to range between 0 and 1. You can choose probabilities for a three-sided die that will cause the entropy to range between 0 and 1.6. You can choose probabilities for a four-sided die that will cause the entropy to range between 0 and 2. You can choose probabilities for a five-sided die that will cause the entropy to range between 0 and 2.3. The lowest possible entropy for any message set is always 0. The highest possible entropy for any message set increases with increasing message set size, i.e., increases with increasing information.
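A quick numeric check of those ranges (a sketch of mine, not from the post): entropy is 0 when one outcome is certain, and it peaks at log2(n) when all n outcomes are equally likely.

```python
import math

def entropy(probs):
    # Shannon entropy in bits, ignoring zero-probability outcomes
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1.0, 0.0]))                        # 0.0 -- a fully predictable "coin"
for n in (2, 3, 4, 5):
    print(n, round(entropy([1.0 / n] * n), 1))    # 1.0, 1.6, 2.0, 2.3 -- the maxima quoted above
```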
Therefore, it takes less information to transmit the result of the toss. I have received some criticism that I am using the term information incorrectly, when I refer to the message possessing more information. This criticism may be justified so I have been using the term negentropy instead.
According to Wikipedia, in information theory "Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same variance." Is this what you really mean to talk about?
Anyway, I am trying to express that a more predictable message would possess less entropy, and thus the transmission of that message would require fewer bits.
Yes, this is true: the lower the entropy, the greater the information density can be. How does this support your position about adding information to a genome?
--Percy

This message is a reply to:
 Message 274 by zaius137, posted 05-16-2012 12:29 AM zaius137 has not replied

  
Percy
Member
Posts: 22929
From: New Hampshire
Joined: 12-23-2000
Member Rating: 7.2


Message 279 of 314 (662499)
05-16-2012 7:41 AM
Reply to: Message 273 by NoNukes
05-15-2012 10:37 PM


Re: Information
Hi NoNukes,
I know, I know, I know. You know too much about computers. Forget all the overhead involved in disk storage and data transmission. Maybe I should have stuck with the book example.
Anyway, the point is that measures of information are the same regardless of whether the information is in some static form or is being transmitted. Zaius was trying to argue that information and entropy have a direct relationship in one and an inverse relationship in the other.
Zaius has cheered your post and ignored mine, so he must have interpreted your post as a successful rebuttal of my argument for a direct relationship between information and entropy.
--Percy

This message is a reply to:
 Message 273 by NoNukes, posted 05-15-2012 10:37 PM NoNukes has replied

Replies to this message:
 Message 280 by NoNukes, posted 05-16-2012 11:12 AM Percy has replied
 Message 284 by zaius137, posted 05-17-2012 2:25 PM Percy has replied

  
NoNukes
Inactive Member


Message 280 of 314 (662518)
05-16-2012 11:12 AM
Reply to: Message 279 by Percy
05-16-2012 7:41 AM


Re: Information
Zaius has cheered your post and ignored mine, so he must have interpreted your post as a successful rebuttal of my argument for a direct relationship between information and entropy.
I'm not sure why Zaius would think that. I did include a rebuttal of that argument. Maybe he just likes the idea that you were incorrect about something.
I wasn't referring to overhead and such. I'm referring to files that are compressible. Like 2 Megabyte files containing all zeroes as an extreme example. The information content of such a file is very small and we can transmit that information across a lossless channel in only a few bytes, even if we count the bytes describing the decompression algorithm.
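To put a rough number on that (my sketch using Python's standard zlib module, not anything from the post): an off-the-shelf compressor already shrinks a 2-megabyte file of zeroes by roughly a factor of a thousand, and a purpose-built encoding could get close to the few bytes NoNukes describes.

```python
import zlib

data = b"\x00" * 2_000_000                   # a "2 megabyte file" of all zeroes
compressed = zlib.compress(data, level=9)

print(len(data), "bytes uncompressed")
print(len(compressed), "bytes after zlib")   # on the order of a couple of kilobytes
# A purpose-built encoding ("2,000,000 zero bytes") would need only a few bytes,
# the extreme NoNukes describes; either way the information content is tiny
# compared with the 2 MB of raw storage.
```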
Edited by NoNukes, : No reason given.

Under a government which imprisons any unjustly, the true place for a just man is also in prison. Thoreau: Civil Disobedience (1846)
The apathy of the people is enough to make every statue leap from its pedestal and hasten the resurrection of the dead. William Lloyd Garrison

This message is a reply to:
 Message 279 by Percy, posted 05-16-2012 7:41 AM Percy has replied

Replies to this message:
 Message 283 by Percy, posted 05-16-2012 2:21 PM NoNukes has seen this message but not replied

  
Wounded King
Member (Idle past 281 days)
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


(1)
Message 281 of 314 (662519)
05-16-2012 11:13 AM
Reply to: Message 275 by zaius137
05-16-2012 12:52 AM


Mutation rates
I think there are several issues with the numbers you are choosing to plug into the equation.
For a start you seem to be ascribing the 3% value to autosomal pseudogenes on no basis whatever. I don't see a single mention of pseudogenes, autosomal or otherwise, in the article you took it from or indeed any source in that article to let us know where the author got that particular value from.
Autosomal pseudogenes and 'Junk DNA' are not synonyms; you can't simply declare that what applies to one applies to the other. So you don't have a reliable figure for 'new pseudogene divergence', which makes the rest of the exercise rather pointless.
Can you explain why you think it is reasonable to ascribe this 3% figure to autosomal pseudogenes? The whole point of using the pseudogenes was to have a sequence set for which neutrality was a reasonable assumption. Can you assure us that this is similarly the case for the DNA that the 3% divergence was derived from?
It is also worth bearing in mind that Nachman and Crowell's (2000) figure is only based on a small sample of 12 autosomal pseudogenes and only included length mutations up to 4 bp, while indels can be hundreds of bp in length. While the frequency of longer length mutations is lower, their effect on divergence can be radical. So if a significant proportion of that 3% divergence comes from longer indels, then I'd question how appropriate Nachman and Crowell's figure is, or alternatively how reasonable it is to try to estimate any divergence time from a sequence divergence estimate that includes variably long indels. This is why most divergence estimates rely on single nucleotide substitution rates alone.
In the Conrad paper (2011) they themselves suggest a divergence time of ~7 MYA based on their new mutation rate data, and that is in the light of more up-to-date sequence divergence estimates than Nachman and Crowell or your 2009 article. They also give a range of calculated average mutation rates (1.18 x 10^-8 ± 0.15 x 10^-8 (s.d.)), and you seem to have chosen to recalculate to reach a value below the lower end of that range.
I don't really see how this is much more relevant to the topic, especially since the calculations are based on what are considered neutral regions of the genome, which we wouldn't expect to be where we find adaptive features, novel or otherwise. Without knowing the mutation rate for adaptive substitutions and the divergence based on adaptive sites, you seem to be addressing another question entirely, namely whether there is enough time since the human-chimp divergence to account for the genetic divergence we see based on current estimates of mutation rates. I don't see where novel features come into it, especially if we accept that novel features, including protein-coding genes, can arise de novo from single-step mutations, a phenomenon for which there is now considerable evidence in many species, including 60 such putative genes in humans (Wu et al., 2011).
TTFN,
WK

This message is a reply to:
 Message 275 by zaius137, posted 05-16-2012 12:52 AM zaius137 has replied

Replies to this message:
 Message 289 by zaius137, posted 05-17-2012 6:06 PM Wounded King has replied

  
Wounded King
Member (Idle past 281 days)
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


(1)
Message 282 of 314 (662524)
05-16-2012 12:09 PM
Reply to: Message 1 by RAZD
03-08-2012 10:47 PM


The origin of novel genes, a related issue.
While responding to zaius's latest posts I came across an interesting review article that discusses many of the molecular bases for novel genes and their phenotypic impact.
The paper is "Origins, evolution, and phenotypic impact of new genes" by Henrik Kaessmann (2010).
It covers coding and non-coding sequences, complete de novo origination, chimaeric origination, segmental duplication, the action of transposable elements and many more mechanisms. It isn't exactly a 'For beginners guide' but there is a lot of good stuff.
Directly relating to the opening post, Kaessmann has this to say on a specific phenotypic effect of a gene duplication ...
Kaessmann, 2010 writes:
Another intriguing recent case of new retrogene formation illustrates the far-reaching and immediate phenotypic consequences a retroduplication event may have. Parker et al. (2009) found that a retrocopy derived from a growth factor gene (fgf4) is solely responsible for the short-legged phenotype characteristic of several common dog breeds. Remarkably, the phenotypic impact of the fgf4 retrogene seems to be a rather direct consequence of the gene dosage change associated with its emergence (i.e., increased FGF4 expression during bone development), given that its coding sequence is identical to that of its parental gene. The analysis of fgf4 in dogs thus strikingly illustrates that gene duplication can immediately lead to phenotypic innovation (in this case a new morphological trait) merely through gene dosage alterations.
TTFN,
WK

This message is a reply to:
 Message 1 by RAZD, posted 03-08-2012 10:47 PM RAZD has seen this message but not replied

Replies to this message:
 Message 285 by zaius137, posted 05-17-2012 3:01 PM Wounded King has replied

  
Percy
Member
Posts: 22929
From: New Hampshire
Joined: 12-23-2000
Member Rating: 7.2


Message 283 of 314 (662535)
05-16-2012 2:21 PM
Reply to: Message 280 by NoNukes
05-16-2012 11:12 AM


Re: Information
NoNukes writes:
I wasn't referring to overhead and such. I'm referring to files that are compressible. Like 2 Megabyte files containing all zeroes as an extreme example. The information content of such a file is very small and we can transmit that information across a lossless channel in only a few bytes, even if we count the bytes describing the decompression algorithm.
The example was just to illustrate the simple principle that increasing information and increasing entropy go hand in hand regardless of whether the information is static or in the process of being communicated. If it helps, assume a pre-compressed 2 MB file.
Further fault can be found with the example. A book or file is not really a message but a sequence of many messages where each message is a single character. Compression in this context is not a question of representing each individual message (character) more efficiently (although that possibility could be explored, too) but of encoding the original messages into a new message set so that the resulting sequence of messages is smaller in terms of the number of total bits.
But the topic was the relationship between information and entropy, which is not inverse but direct. If your equiprobable message set size is 4 then the amount of information in each message is 2 bits. If you introduce redundancy so that it takes 100 bits to transmit a single message, the amount of information transmitted is still 2 bits, and the entropy is still 2.
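To illustrate that last point with numbers (my sketch, not Percy's): four equiprobable messages carry 2 bits each, and padding the transmission out to 100 bits adds redundancy but changes neither the information nor the entropy.

```python
import math

n_messages = 4
p = 1.0 / n_messages

information_bits = math.log2(n_messages)                      # 2.0 bits per message
entropy = -sum(p * math.log2(p) for _ in range(n_messages))   # also 2.0

# Pad one 2-bit codeword out to 100 transmitted bits (crude repetition coding):
codeword = "10"
transmitted = codeword * 50                                   # 100 channel bits

print(information_bits, entropy, len(transmitted))
# 2.0 2.0 100 -- the channel carries 100 bits, but still only 2 bits of information
```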
--Percy

This message is a reply to:
 Message 280 by NoNukes, posted 05-16-2012 11:12 AM NoNukes has seen this message but not replied

  
zaius137
Member (Idle past 3658 days)
Posts: 407
Joined: 05-08-2012


Message 284 of 314 (662635)
05-17-2012 2:25 PM
Reply to: Message 279 by Percy
05-16-2012 7:41 AM


Re: Information
I would rather not dwell on entropy, but I cannot let you go on thinking I have no point.
My point is that lower entropy implies that the message has more information. I can see now that my original argument posed no direct explanation of a correlation. Let me make another attempt at that point by appealing to two thought experiments (Maxwell’s demon and Szilard’s engine).
These two thought experiments relate how information can lower entropy.
quote:
Szilard's engine
A neat physical thought-experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell's demon scenario.
Let me go a bit further with this thought and mention a direct mathematical correlation between entropy and information by Brillouin. It is a refinement on the negentropy/information idea.
quote:
I = K ln P
where I denotes information, K is a constant, and P is the probability of the outcome. Brillouin reasons that with these types of probability arguments, it enables one to solve the problem of Maxwell’s demon and to show a very direct connection between information and entropy and that the thermodynamic entropy measures the lack of information about a physical system. Moreover, according to Brillouin, whenever an experiment is performed in the laboratory, it is paid for by an increase in entropy, and a generalized Carnot principle states that the price paid in increase of entropy must always be larger than the amount of information gained. Information, according to Brillouin, corresponds to negative entropy or negentropy, a term he coined.
So tell me again how information does not correlate inversely to entropy? Probability of the message increases, information in the message increases, entropy decreases.

This message is a reply to:
 Message 279 by Percy, posted 05-16-2012 7:41 AM Percy has replied

Replies to this message:
 Message 286 by Dr Adequate, posted 05-17-2012 3:06 PM zaius137 has replied
 Message 294 by Percy, posted 05-17-2012 8:18 PM zaius137 has replied

  
zaius137
Member (Idle past 3658 days)
Posts: 407
Joined: 05-08-2012


Message 285 of 314 (662641)
05-17-2012 3:01 PM
Reply to: Message 282 by Wounded King
05-16-2012 12:09 PM


Re: The origin of novel genes, a related issue.
Hi again Wounded King,
I found this paper interesting in that it relies implicitly on the pre-existence of the genetic code. So how did the duplicated gene sequence occur in the first place, given no original gene sequence?
quote:
Based on cytological observations of chromosomal duplications, Haldane (1933) and Muller (1935) already hypothesized in the 1930s that new gene functions may emerge from refashioned copies of old genes, highlighting for the first time the potential importance of gene duplication for the process of new gene origination.
Origins, evolution, and phenotypic impact of new genes - PMC
Remember, Haldane's fixation of new functional genes only happens once every 300 generations. This would provide only about 1700 new beneficial functions fixed in humans since the Pan Human divergence! Are there only about 1700 functional mutations between humans and chimps? This is known as Haldane's dilemma.
This paper only speaks from the assumption that common descent is unquestionably correct. Nothing is further from the truth.
I found this paper rather thin in explanations. However, I thank you for the citation.

This message is a reply to:
 Message 282 by Wounded King, posted 05-16-2012 12:09 PM Wounded King has replied

Replies to this message:
 Message 287 by Dr Adequate, posted 05-17-2012 3:12 PM zaius137 has replied
 Message 288 by Wounded King, posted 05-17-2012 5:49 PM zaius137 has replied

  