Topic: Give your one best shot - against evolution
Percy Member Posts: 22392 From: New Hampshire Joined: Member Rating: 5.2 |
Hi, Fred! Welcome back!
Fred Williams writes: If an acceptable definition of evolution is change in allele frequency in a population over time, then isn't Joe's example of the interplay between malaria and sickle-cell anemia independent of whether your point is correct? --Percy
This is a valid point for this thread; I was only pointing out that it didn't bear on the point Joe was making. Malaria and sickle-cell anemia fulfilled Philip's request for an example of human evolution involving illnesses. Whether the change involved a gain or loss of information is irrelevant; it's still evolution.
--Percy
Fred Williams writes: When Joe says, "'information' is such a nebulous term" he doesn't mean "information is nebulous", but that he's not sure how you're defining it. Can't have a discussion if you don't agree on terminology. --Percy
Hi Fred!
I remember we had a discussion on information theory as evidence against evolution a few years ago, although I wasn't one of the primary participants. You argued that random mutation cannot create new information, and that since new phenotypes can only come into existence through the addition of information to the genome, evolution is therefore impossible.

One of the problems with discussing your underlying premise that random mutation cannot create new information is that the discussion quickly bogs down in arguments about the correct definition of information. In no time at all no one knows what anyone else is talking about (pretty ironic for a discussion about information).

An alternative approach would be to see if we can find examples of randomness creating new phenotypes. Such an example would lead us to suspect problems with the underlying premise, regardless of how anyone defines information. We can actually create our own example to test your premise by building a simple model of the evolutionary process, drawing upon your own field of computer engineering.

Imagine we have a state machine with only two bits of information, and the desired behavior is a ring counter, ie, 0->1->2->3->0...etc. The next state of each bit in our state machine is a function of the current states of the two bits:

f(i) = ((p(i,0,0)*s0 || n(i,0,0)*!s0) && (p(i,0,1)*s1 || n(i,0,1)*!s1)) ||
       ((p(i,1,0)*s0 || n(i,1,0)*!s0) && (p(i,1,1)*s1 || n(i,1,1)*!s1))

where:
  f(i) is the next state of bit i
  s0 and s1 are the current states of the two bits
  p(i,t,j) and n(i,t,j) are coefficients enabling the true and complemented forms of input bit j in product term t

This implements a simple PLA. In this evolutionary model the bits are the organism while the coefficients are the genome. Reproduction occurs when ten copies are made of our "organism", and mutation is represented by modifying a single coefficient in each "offspring". The impact of the environment on the organisms is modeled by a checker which measures how well each offspring performs the ring counter function by clocking each of them four times and using a weighted measure to assess how well it performs the count.
The best offspring is selected to become the parent of ten offspring in the next generation, and the rest are discarded. The evaluation function is:

e = Σ(n=0..3) abs( ((S(n+1) − S(n)) mod 4) − 1 )

where:
  Σ(n=0..3) is summation, varying n from 0 to 3
  S(n) is the state of the counter after n clocks

A perfect ring count scores e = 0. I've written a simple C++ program to do this: Ring Counter Evolution

Here's some sample output. The five numbers are the count sequence produced in each generation:

Best in generation 1: 0-0-0-0-0

The interesting thing is that since the changes to the coefficients are random, each time you run the program you get a different result. It once achieved a ring count in only 16 generations, and the longest run took over 200 generations. And the ring function can be realized with more than one set of coefficients. For example, here are all the different ways the program's evolution implemented a ring counter in terms of the coefficients:

Ring count function achieved: 0001-0111 0110-1001

In other words, just like in real-world evolution, there is more than one way to accomplish the same goal. This evolutionary model demonstrates that random mutation can create new phenotypes, and if you believe that new phenotypes require new information, it falsifies the original premise that random mutation cannot create new information. --Percy
Fred Williams writes: An example of a new algorithm developing from random mutation was provided in Message 142. --Percy
Mark writes: I think you've reduced the key contradiction to its crux. Fred says Gitt-information rules out information gain, but if function loss == information loss, then by necessity function gain == information gain. Since we can demonstrate function gain, Gitt-information theory is falsified. --Percy
Fred Williams writes: It's a model of evolution doing precisely what you said Gitt-information says is impossible, namely developing a new algorithm from random mutation. Your specific objections appear to have little to do with information theory, which I thought was the basis of your objections to evolution, but I'll address them anyway:
First, this objection based upon Shannon misunderstands Shannon, whose work dealt with the communication of information over channels and did not address the issue of new information. He *did* address the issue of what constitutes communicating information, along the lines of saying that you can't tell someone something he already knows. Second, the C++ program is just a model. Like any other model of natural processes, you can modify and improve it to better model reality. If you'd like the model to have a different probability of success then simply change it. One easy way is to reduce the number of terms for the next state of each bit from two to one.
First, if this had any validity it would rule out all modeling, from weather to flight paths of spacecraft to nuclear particle physics. Second, it's just a model. The desired pattern can also be generated randomly, removing your irrelevant objection that it is predetermined. Third, evolution also has a predetermined goal determined by the environment. Fourth, no intelligence produced the coefficients, which are the equivalent of information in the model. They were generated randomly. The program has no idea what the right coefficients are, and certainly I have none.
Then just change the model. It wouldn't affect the outcome other than to require more generations.
You already said this in point 2.
Thanks for the help, Fred. Always appreciated! The bottom line is that random mutation combined with environmental, sexual and other types of selection are sufficient to generate new algorithms, and you can easily model this in computer simulations just like you can model scores of other natural processes. Obviously your interpretation of information theory has somewhere gone astray. --Percy [This message has been edited by Percipient, 07-09-2002]
Fred Williams writes: This is obviously false since it leads to contradictory conclusions, for instance that a new algorithm inserted by humans through gene splicing is information, while the identical algorithm added through random mutation is not information.
This is the same as roasting the organism over a fire to give it the new function of food, and is way outside the framework of the discussion, which was within a genomic context. Obviously Gitt cannot rationally argue that subtracting information causes function loss but that adding information cannot cause function gain. That makes no sense. After all, if you subtract information thereby removing its corresponding function, then restoring the information must restore the corresponding function. Add to this the above mentioned contradictory conclusion of Gitt-information concerning what constitutes information and there's not much left. You seem to have many restrictions having nothing to do with information theory. For example, the function must be useful. What could information theory possibly know or care about whether information is useful? Or that an algorithm is not a new algorithm if it represents a modification to a pre-existing algorithm instead of coming into being all at once like some form of immaculate conception.
It's okay to argue for your point of view, but let's keep the representations of science straight. This is your own evangelical view, and certainly nowhere remotely close to any accepted view within science.
You say information science says this is impossible, but your views on information theory have been shown to lead to contradictions.
Join the big crowd of creationists who are right there with you in denial. Brushing aside the problem does not make it go away. Creationism is a fairytale, folks! (this one's for you, Fred). --Percy
Fred Williams writes: It certainly *is* an algorithm, and I expressed it mathematically in Message 142. But more importantly, there is nothing in information theory that says information can only be algorithms. That is Fred theory.
An intelligent sender is, I think, your requirement. First, there is nothing in information theory that requires the sender be intelligent. Defining intelligence all by itself would be an insurmountable problem, and it is not addressed within information theory. Our space probes send us plenty of information, and they're not intelligent. Second, the origination of new information does not require intelligence either, for the same reason. Shannon's approach depends upon random generation of information. Random mutation fits perfectly within Shannon's model.
I'm just the guy running the experiment, the observer watching the show. I'm not part of the proceedings. The information is not being communicated to me but to the organism.
You said this last time, and I already answered. If you'd like a lower probability of success to more accurately reflect the real world then simply improve the model. As I already said, the easiest way is simply to reduce the number of terms from two to one.
I merely set up the experiment by defining an analog for the environment in the form of a required sequence, and even the sequence could have been randomly defined. I have no way of determining or predicting what information the program will produce, and it didn't come from me.
This is Fred science, not info science. Information theory has no requirement that senders be intelligent. As mentioned earlier, the problem of defining intelligence is a thorny and difficult one, certainly not reducible at this time to the mathematical rigor of information theory. A definition of intelligence is not part of information theory.
"Begin Paste" from what source? Excerpt ignored pending identification of source.
My C++ model of random mutation creating a new algorithm clearly falsifies the claim that random mutation cannot create new information. If it can happen in a computer model it most certainly can happen in nature.
It's easy to make claims which are difficult to falsify. I claim there are invisible ethereal aliens among us, prove me wrong. Your misinterpretation of information theory is itself unsupported by evidence and is already contradicted by simple models.
Then what possible difference could it make whether a new function is added by gene splicing or mutation? If humans add a gene to produce new function then it's information gain, but if random mutation adds an identical gene then it's not information gain? This is a serious contradiction. Percy writes: Fred replies: Subjective concepts like "useful" and "good" are not part of information theory. For information to be transmitted it is only necessary that it be unknown to the receiver.
I never said this - have you checked your own head lately? Shannon developed information theory in order to characterize with mathematical rigor the maximum amount of information that could be communicated over a channel taking into account losses due to noise. Obviously information loss is part of information theory.
The original algorithm isn't at the same locus in the offspring, only the parent.
This dictionary analogy doesn't describe evolution. To do that you have to postulate an evolving language (which we have) where dictionary publishers strive to stay current with the latest usages. One way, a slow way, a publisher could try to stay abreast is by allowing random errors to creep into his dictionary (mutation), with reviews by knowledgeable linguists to discard dictionaries with less useful definitions (selection). Don't get too detailed in criticizing this improved form of your analogy, I'm just trying to work with the raw material you provided. I wouldn't myself have attempted a dictionary analogy for evolution.
But I do! Just not today with you.
Glad you understand that science has not "shown overwhelmingly that genomes are deteriorating." Scientists are unlikely to believe something that has no positive evidence with plenty of evidence for the converse. --Percy
Fred Williams writes: The point itself isn't important, but it is important in another way, because it *does* indicate you don't understand the model you're critiquing. The coefficients are the analog of genes and are the product of random mutation, not the product of the algorithm. What the algorithm produces is the expression of the organism within the environment in the form of an integer sequence, precisely analogous to biological organisms.

The reason this is significant is independent of any confusions you may be suffering about information theory. The model illustrates random changes effecting improvement in the organism through selection, precisely what you claim information theory says is impossible. Therefore you misunderstand information theory. This is clear on its face when you make ridiculous claims such as that intelligence is part of information theory, which it most certainly is not. We can't define intelligence with any mathematical rigor today, and we couldn't define it back in the 1950s when Shannon did his seminal work. You've gotten way out in left field.

Randomness and uncertainty are at the core of information theory. Since you can't communicate information to anyone who already possesses that information, the process is essentially the ability to predict the next bit in a stream. To the extent that the next bit is predictable, it is not information. The greater the degree of unpredictability, the greater its potential information content. Where you discuss Shannon's paper, I think you've confused the definition of information with approaches for reducing error introduced by noise when communicating information across a channel.
Lucky hit? How could there be a lucky hit if information theory really rendered it impossible? A bit of equivocation, Fred? About information loss, I don't understand why you're pressing me about it as if I thought it couldn't happen. My focus is on your erroneous assertion that information theory rules out beneficial changes stemming from random mutation. --Percy
Fred Williams writes: We may have a terminology problem here. When information is transmitted, that information is old from the point of view of the transmitter (eg, "My birthday is..."), but it would be new information from the point of view of the receiver. We should probably be careful how we use these terms. I suggest "new information" for information received that we didn't already possess, and "creation of information" for information that is original. Using this terminology, the bacteria receiving the gene now possesses new information that it did not have before. Gene transfer is a type of mutation. This mutation has added information to the bacteria's genome that was not previously there, and this bacteria may possibly have a new function that it did not previously possess. This mutation may presumably also arise through random mutation rather than through gene transfer, in which case we have creation of information. For example, keeping things simple, say the new gene is the nucleotide sequence AGCT, and it gets added to the bacterial genome through gene transfer from a different but related bacterial species and provides it a new and beneficial function. This information is new to the bacteria. The gene could also have been added through random reproduction errors over time, which would be creation of information.
I think this is another terminology issue. A slight modification to an algorithm that allows it to count to 11 instead of 10 is different, or it's new, or it's modified, pick your term, but it is not the same algorithm. In the real world a slight modification to a gene in a cheetah may enable it to run at 61 mph instead of just 60. Is the change something new? Something modified? Something different? Pick whatever label you like, the genomic change has made the cheetah more effective at pursuing prey, which you're claiming is impossible. For evidence of this happening, namely a functional change being traced to a specific mutation, you need go no further than the fields of bacteriology and virology. The AIDS virus is a good example. Some positive AIDS mutations (for it, not for us) are so probable they happen over and over and over again. --Percy
SLPx quotes Kimura: Fred replies: Either there is more to Kimura's point, eg, some kind of qualification related to equating new genetic information with permutational recombinations of existing alleles, or eg, the first part of the sentence that was excised mentions additional mechanisms, etc, or I have to share Fred's skepticism that natural selection alone can create new genetic information. --Percy
Fred writes: You still misunderstand the model. The bits are the organism, the sequence of states within those bits is the expression of the organism within the environment, the coefficients are the genes which control the sequence of states, and the source of mutation of the coefficients is a random number generator. The coefficients, ie, the genes, control the next state of each bit, which is the expression of the organism within the environment, just as our own genes control the expression of ourselves in our own environments. The algorithm to determine the next state of each bit is simply a sum of terms where the terms are a function of the values of the coefficients. As I told you originally, it's basically a PLA with states, ie, a state machine. In case it helps, here's the link to the C++ program again: Ring Counter Evolution
You seem to be operating under some strange illusion that the scientific world is in alarm hoping for a solution to the seemingly intractable problem of how evolution could possibly happen when information theory says it's impossible. The reality is that the only people who think there's a problem are a few Christian evangelicals. One doesn't win a Nobel Prize for demonstrating the obvious.
You're still way out in left field with this intelligence business. First, to answer a related point you make, no, computers and spacecraft are not intelligent. Second, the transmission of information does not require an intelligent sender. When you look up at the stars there are no intelligent aliens out there sending the starlight, yet there's so much information in that light that we've been able to deduce the processes of the stellar furnace and the age of the universe. While it took intelligence to make sense of the information, it took no intelligence at all to either send or receive it. Shannon's model of transmitting information only requires a sender, a communications channel and a receiver, and as expressed in his paper these were automated, not intelligent, devices.

You're only confusing yourself when you attempt to add value-laden judgments like "utility" to information theory. A telegraph clerk during WWII receives a coded cipher from the underground, "The pebbles fall lightly," and he has no idea what it means. It is useless to him. Does that mean no information was transmitted? He brings the message to his superiors, who check their cipher books and learn that the message means to schedule a parachute drop of supplies that night. In your view no information was transmitted from the underground to the clerk, and not even from the clerk to his superiors since the clerk didn't understand the message, but only from the codebook to the clerk's superiors? Is this like the immaculate conception of information, where it suddenly springs forth from no source? Or is it the quantum uncertainty theory of information, where information isn't really information until someone who understands it examines it? I don't think so.

You'll only be able to start making sense of information theory by removing from your perspective those elements which are value judgments, such as what is useful and what is intelligence. They play no role in information theory.
Percy writes Fred replies: Fred, I don't know where you're picking up this strange interpretation, but of course I think information can be lost. Either you're trying to make some non-obvious point known only to yourself, or you're not paying attention. If it helps to get through this particular red herring of a point, simply eliminate a nucleotide, a gene, a chromosome, an organism which possesses the last existing copy of a particular allele. But this is irrelevant. The point being challenged is your assertion that information theory rules out the possibility of random mutation creating new genetic information. Of course mutation can create new information. What could possibly prevent it? --Percy
Hi Fred!
Sorry to split this into two replies, but another thought came to me about one point you made:
I don't know who Terra or the "Terra crowd" are, or who the evolutionists you're thinking of are, but this all seems like the most obvious quackery. We create computer models of natural processes all the time. We know DNA is the blueprint for the organism, we understand how reproduction works at a genetic level, we know reproductive errors of various sorts occur, yet you somehow think creating a model of this process is so incredibly difficult and thorny a problem that not only have others struggled with it and failed, but that a solution is worthy of a Nobel Prize? In reality, creating such a model is easy. It takes about an hour. Mine is a very simple model because it only addresses your specific point that random mutation cannot create new information. Creating such models only becomes rocket science to someone determined to believe the process being modeled is impossible. --Percy
Thanks, Gene, found it. Terra Nornia is just one of many worlds created for the Creatures game, which apparently has versions 1, 2 and 3, and none of which I know anything about. If there's a "Terra Crowd" using the Creatures game to do serious modeling of mutation and natural selection I couldn't find it. This seems more like a game than a serious simulation to me, more on the order of Sims. The websites are full of pictures like this:
--Percy [This message has been edited by Percipient, 07-13-2002]
Copyright 2001-2023 by EvC Forum, All Rights Reserved