EvC Forum active members: 65 (9162 total)


Thread Details

  
Author Topic:   Give your one best shot - against evolution
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 96 of 224 (12483)
07-01-2002 2:45 PM
Reply to: Message 95 by Fred Williams
07-01-2002 2:09 PM


Hi, Fred! Welcome back!
Fred Williams writes:

It never ceases to amaze me that evolutionists will use a desease such as sickle-cell anemia as an example of evolution in action! Sickle-cell is de-evolution. It represents a clear loss of information at the genetic level.
If an acceptable definition of evolution is change in allele frequency in a population over time, then isn't Joe's example of the interplay between malaria and sickle-cell anemia independent of whether your point is correct?
--Percy

This message is a reply to:
 Message 95 by Fred Williams, posted 07-01-2002 2:09 PM Fred Williams has replied

Replies to this message:
 Message 102 by Fred Williams, posted 07-01-2002 5:50 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 108 of 224 (12550)
07-02-2002 9:00 AM
Reply to: Message 102 by Fred Williams
07-01-2002 5:50 PM


This is a valid point for this thread; I was only pointing out that it didn't bear on the point Joe was making. Malaria and sickle-cell anemia fulfilled Philip's request for an example of human evolution involving illnesses. Whether the change involved a gain or loss of information is irrelevant; it's still evolution.
--Percy

This message is a reply to:
 Message 102 by Fred Williams, posted 07-01-2002 5:50 PM Fred Williams has replied

Replies to this message:
 Message 109 by Fred Williams, posted 07-02-2002 12:41 PM Percy has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 114 of 224 (12681)
07-03-2002 2:54 PM
Reply to: Message 113 by Fred Williams
07-03-2002 12:30 PM


Fred Williams writes:

Yes, "information is nebulous" is one of the three famous reasons evolutionists give to avoid the information problem that is so devestating for large-scale evolution.
When Joe says, "'information' is such a nebulous term" he doesn't mean "information is nebulous", but that he's not sure how you're defining it. Can't have a discussion if you don't agree on terminology.
--Percy

This message is a reply to:
 Message 113 by Fred Williams, posted 07-03-2002 12:30 PM Fred Williams has replied

Replies to this message:
 Message 115 by Fred Williams, posted 07-04-2002 12:04 AM Percy has not replied
 Message 124 by derwood, posted 07-05-2002 2:59 PM Percy has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 142 of 224 (12925)
07-06-2002 7:36 PM
Reply to: Message 139 by Fred Williams
07-06-2002 2:51 PM


Hi Fred!
I remember we had a discussion on information theory as evidence against evolution a few years ago, although I wasn't one of the primary participants. You argued that random mutation cannot create new information, and that since new phenotypes can only come into existence through the addition of information to the genome, evolution is therefore impossible.
One of the problems with discussing your underlying premise that random mutation cannot create new information is that the discussion quickly bogs down in arguments about the correct definition of information. In no time at all no one knows what anyone else is talking about (pretty ironic for a discussion about information).
An alternative approach would be to see if we can find examples of randomness creating new phenotypes. Such an example would lead us to suspect problems with the underlying premise, regardless of how anyone defines information.
We can actually create our own example to test your premise by creating a simple model of the evolutionary process drawing upon your own field of computer engineering. Imagine we have a state machine with only two bits of information, and the desired behavior is a ring counter, ie, 0->1->2->3->0...etc. The next state of each bit in our state machine is a function dependent upon the states of the two bits:
f_i = ((p1_i0*s0 || n1_i0*ns0) && (p1_i1*s1 || n1_i1*ns1)) ||
      ((p2_i0*s0 || n2_i0*ns0) && (p2_i1*s1 || n2_i1*ns1))    (i = 0, 1)
where:
f_i is the next state of bit i
p1_ij, p2_ij are the coefficients for the positive (non-negated) bits s0, s1 (one per product term)
n1_ij, n2_ij are the coefficients for the negated bits ns0, ns1 (one per product term)
s0, s1 are the states of the two bits
ns0, ns1 are the negated states of the two bits
This implements a simple PLA.
In this evolutionary model the bits are the organism while the coefficients are the genome. Reproduction occurs when ten copies are made of our "organism", and mutation is represented by modifying a single coefficient in each "offspring". The impact of the environment on the organisms is modeled by a checker which measures how well each offspring performs the ring counter function by clocking each one four times and using a weighted measure to assess how well it performs the count. The best offspring is selected to become the parent of ten offspring in the next generation, and the rest are discarded. The evaluation function:
e = Σ(n=0..3) abs(S_((n+1) mod 4) - S_n) - 1
where:
Σ(n=0..3) is summation with n varying from 0 to 3
S_n is the two-bit state of the ring counter at clock tick n
n numbers the states over the 4 clock ticks
I've written a simple C++ program to do this: Ring Counter Evolution
Here's some sample output. The five numbers are the count sequence produced in each generation:
Best in generation 1: 0-0-0-0-0
Best in generation 2: 0-0-0-0-0
Best in generation 3: 0-0-0-0-0
Best in generation 4: 0-0-0-0-0
Best in generation 5: 0-0-0-0-0
Best in generation 6: 0-0-0-0-0
Best in generation 7: 0-0-0-0-0
Best in generation 8: 0-0-0-0-0
Best in generation 9: 0-0-0-0-0
Best in generation 10: 0-0-0-0-0
Best in generation 11: 0-0-0-0-0
Best in generation 12: 0-0-0-0-0
Best in generation 13: 0-0-0-0-0
Best in generation 14: 0-0-0-0-0
Best in generation 15: 0-1-2-1-2
Best in generation 16: 0-1-2-1-2
Best in generation 17: 0-1-2-1-2
Best in generation 18: 0-1-2-1-2
Best in generation 19: 0-1-2-3-0
Ring count function achieved: 0010-0111 1001-0110 (the coefficients)
The interesting thing is that since the changes to the coefficients are random, each time you run the program you get a different result. It once achieved a ring count in only 16 generations, and the longest took over 200 generations. And the ring function can be realized with more than one set of coefficients. For example, here are all the different ways the program's evolution implemented a ring counter in terms of the coefficients:
Ring count function achieved: 0001-0111 0110-1001
Ring count function achieved: 0101-0111 1001-0110
Ring count function achieved: 0110-0101 0110-1001
Ring count function achieved: 0110-0101 1001-0110
Ring count function achieved: 0111-0011 0110-1001
Ring count function achieved: 0111-0100 0110-1001
Ring count function achieved: 0111-0101 1001-0110
Ring count function achieved: 1100-0111 0110-1001
Ring count function achieved: 1100-0111 1001-0110
In other words, just like in real-world evolution there is more than one way to accomplish the same goal.
This evolutionary model demonstrates that random mutation can create new phenotypes, and if you believe that new phenotypes require new information it therefore falsifies the original premise that random mutation cannot create new information.
--Percy

This message is a reply to:
 Message 139 by Fred Williams, posted 07-06-2002 2:51 PM Fred Williams has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 166 of 224 (13076)
07-08-2002 5:29 PM
Reply to: Message 165 by Fred Williams
07-08-2002 5:19 PM


Fred Williams writes:

No, evolution absolutely requires, it demands, the appearance of new algorithms to program for new useful features.
An example of a new algorithm developing from random mutation was provided in Message 142.
--Percy

This message is a reply to:
 Message 165 by Fred Williams, posted 07-08-2002 5:19 PM Fred Williams has replied

Replies to this message:
 Message 177 by Fred Williams, posted 07-09-2002 7:48 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 171 of 224 (13109)
07-08-2002 10:04 PM
Reply to: Message 170 by mark24
07-08-2002 7:14 PM


Mark writes:

Now, if Gitts definition won’t allow a new function to = new information, then how can you claim that function loss = information loss, whilst maintaining the same standards?
I think you've reduced the key contradiction to its crux. Fred says Gitt-information rules out information gain, but if function loss == information loss, then by necessity function gain == information gain. Since we can demonstrate function gain, Gitt-information theory is falsified.
--Percy

This message is a reply to:
 Message 170 by mark24, posted 07-08-2002 7:14 PM mark24 has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 179 of 224 (13191)
07-09-2002 9:35 PM
Reply to: Message 177 by Fred Williams
07-09-2002 7:48 PM


Fred Williams writes:

There are several problems with your simulation.
It's a model of evolution doing precisely what you said Gitt-information says is impossible, namely develop a new algorithm from random mutation.
Your specific objections appear to have little to do with information theory, which I thought was the basis of your objections to evolution, but I'll address them anyway:

1) The chance of success is unity. So even using the Shannon information (the lowest level), your simulation fails to produce new information randomly.
First, this objection misunderstands Shannon, whose work dealt with the communication of information over channels and did not address the issue of new information. He *did* address the issue of what constitutes communicating information, along the lines of saying that you can't tell someone something he already knows.
Second, the C++ program is just a model. Like any other model of natural processes, you can modify and improve it to better model reality. If you'd like the model to have a different probability of success, then simply change it. One easy way is to reduce the number of terms for the next state of each bit from two to one.

2) You have a pre-determined target. Therefore, any information your simulation produces can only be actuated in the presence of already existing information. That is, by higher intelligence — you. You have programmed the simulation to stop at the pattern you like. Thus, randomness did not produce information, intelligence did.
First, if this had any validity it would rule out all modeling, from weather to flight paths of spacecraft to nuclear particle physics.
Second, it's just a model. The desired pattern can also be generated randomly, removing your irrelevant objection that it is predetermined.
Third, evolution also has a predetermined goal determined by the environment.
Fourth, no intelligence produced the coefficients, which are the equivalent of information in the model. They were generated randomly. The program has no idea what the right coefficients are, and certainly I have none.

3) A minor point since the above already invalidate your argument: As it relates to reality, your simulation (like genetic algorithms and Dawkin’s simulation) employs strict truncation selection, which is extremely unrealistic and simply does not occur in nature.
Then just change the model. It wouldn't affect the outcome other than to require more generations.

I should also note that your simulation did not develop a new algorithm. It developed a pre-determined pattern.
You already said this in point 2.

An algorithm is the same as subroutine, if that helps.
Thanks for the help, Fred. Always appreciated!
The bottom line is that random mutation combined with environmental, sexual and other types of selection are sufficient to generate new algorithms, and you can easily model this in computer simulations just like you can model scores of other natural processes. Obviously your interpretation of information theory has somewhere gone astray.
--Percy
[This message has been edited by Percipient, 07-09-2002]

This message is a reply to:
 Message 177 by Fred Williams, posted 07-09-2002 7:48 PM Fred Williams has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 180 of 224 (13200)
07-09-2002 10:46 PM
Reply to: Message 178 by Fred Williams
07-09-2002 8:43 PM


Fred Williams writes:

As it pertains to our discussion, what Gitt information says is that it is impossible to have a new algorithm (subroutine) arise in the genome without a sender (ie a Programmer).
This is obviously false since it leads to contradictory conclusions, for instance that a new algorithm inserted by humans through gene splicing is information, while the identical algorithm added through random mutation is not information.

Quite easily. By your logic, if your computer explodes into a ball of fire, and you toast marshmellows over it, then it must be new information since its got a new function!
This is the same as roasting the organism over a fire to give it the new function of food, and is way outside the framework of the discussion, which was within a genomic context. Obviously Gitt cannot rationally argue that subtracting information causes function loss but that adding information cannot cause function gain. That makes no sense. After all, if you subtract information thereby removing its corresponding function, then restoring the information must restore the corresponding function.
Add to this the above mentioned contradictory conclusion of Gitt-information concerning what constitutes information and there's not much left.
You seem to have many restrictions having nothing to do with information theory. For example, the function must be useful. What could information theory possibly know or care about whether information is useful? Or that an algorithm is not a new algorithm if it represents a modification to a pre-existing algorithm instead of coming into being all at once like some form of immaculate conception.

Science has shown overwhelmingly that genomes are deteriorating.
It's okay to argue for your point of view, but let's keep the representations of science straight. This is your own evangelical view, and certainly nowhere remotely close to any accepted view within science.

Information science also says that this is impossible.
You say information science says this is impossible, but your views on information theory have been shown to lead to contradictions.

Join a big crowd of evolutionists who are right there with you in the crowd of denial. Brushing aside the problem does not make it go away. Evolution is a fairytale, folks! (that was for Scotty)
Join a big crowd of Creationists who are right there with you in the crowd of denial. Brushing aside the problem does not make it go away. Creationism is a fairytale, folks! (this one's for you, Fred).
--Percy

This message is a reply to:
 Message 178 by Fred Williams, posted 07-09-2002 8:43 PM Fred Williams has replied

Replies to this message:
 Message 189 by Fred Williams, posted 07-10-2002 2:16 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 192 of 224 (13284)
07-10-2002 8:25 PM
Reply to: Message 189 by Fred Williams
07-10-2002 2:16 PM


Fred Williams writes:

Again, your simulation does not produce an algorithm.
It certainly *is* an algorithm, and I expressed it mathematically in Message 142.
But more importantly, there is nothing in information theory that says information can only be algorithms. That is Fred theory.

Second, to be specific Gitt information says it is impossible to generate new information via random mutation without a sender.
An intelligent sender is, I think, your requirement. First, there is nothing in information theory that requires the sender be intelligent. Defining intelligence all by itself would be an insurmountable problem, and it is not addressed within information theory. Our space probes send us plenty of information, and they're not intelligent.
Second, the origination of new information does not require intelligence, either, for the same reason. Shannon's approach depends upon random generation of information. Random mutation fits perfectly within Shannon's model.

Now consider your own understanding of Shannon theory...That means you received no new information by your own understanding of Shannon information (you were told something you already knew).
I'm just the guy running the experiment, the observer watching the show. I'm not part of the proceedings. The information is not being communicated to me but to the organism.

Your simulation has a 100% chance to reach your pre-determined target.
You said this last time, and I already answered. If you'd like a lower probability of success to more accurately reflect the real world, then simply improve the model. As I already said, the easiest way is simply to reduce the number of terms from two to one.

Look what I said above Thus, randomness did not produce information, intelligence did. You produced the information!
I merely set up the experiment by defining an analog for the environment in the form of a required sequence, and even the sequence could have been randomly defined. I have no way of determining or predicting what information the program will produce, and it didn't come from me.

Info science says that you cannot produce information via randomness & selection without an intelligent sender.
This is Fred science, not info science. Information theory has no requirement that senders be intelligent. As mentioned earlier, the problem of defining intelligence is a thorny and difficult one, certainly not reducible at this time to the mathematical rigor of information theory. A definition of intelligence is not part of information theory.

---Begin Paste---
"Begin Paste" from what source? Excerpt ignored pending identification of source.

You have not provided any evidence that new algorithms have been produced via random mutation and selection in the natural world.
My C++ model of random mutation creating a new algorithm clearly falsifies the claim that random mutation cannot create new information. If it can happen in a computer model it most certainly can happen in nature.

It takes hard evidence to falsify a claim, not someone’s opinion.
It's easy to make claims which are difficult to falsify. I claim there are invisible ethereal aliens among us, prove me wrong.
Your misinterpretation of information theory is itself unsupported by evidence and is already contradicted by simple models.

Gitt certainly does not argue the above, as adding information clearly causes function gain.
Then what possible difference could it make whether a new function is added by gene splicing or mutation? If humans add a gene to produce new function then it's information gain, but if random mutation adds an identical gene then it's not information gain? This is a serious contradiction.
Percy writes:

For example, the function must be useful.
Fred replies:

Of course it must, or what good is it?
Subjective concepts like "useful" and "good" are not part of information theory. For information to be transmitted it is only necessary that it be unknown to the receiver.

BY your definition, information is gained all the time no matter what! If your brain explodes it's new information because it’s a new function (subroutine: explode_head() )
I never said this - have you checked your own head lately?
Shannon developed information theory in order to characterize with mathematical rigor the maximum amount of information that could be communicated over a channel taking into account losses due to noise. Obviously information loss is part of information theory.

If a mutation occurs to a gene (thereby modifying the algorithm), there still exists another copy at the same locus with the original algorithm.
The original algorithm isn't at the same locus in the offspring, only the parent.

Which is the better algorithm? Let’s return to the dictionary analogy. Do you think that if you are handed an identical dictionary to one you already possess, but it has a typo in it causing some word to be ill-defined, do you really think you have an increase in information?
This dictionary analogy doesn't describe evolution. To do that you have to postulate an evolving language (which we have) where dictionary publishers strive to stay current with the latest usages. One way, a slow way, a publisher could try to stay abreast is by allowing random errors to creep into his dictionary (mutation), with reviews by knowledgeable linguists to discard dictionaries with less useful definitions (selection). Don't get too detailed in criticizing this improved form of your analogy, I'm just trying to work with the raw material you provided. I wouldn't myself have attempted a dictionary analogy for evolution.

Again, you would be better served to argue for gene duplication, followed by mutation.
But I do! Just not today with you.

Perhaps it is an unspoken view by scientists that genomes are deteriorating, but I suspect most believe this.
Glad you understand that science has not "shown overwhelmingly that genomes are deteriorating." Scientists are unlikely to believe something that has no positive evidence with plenty of evidence for the converse.
--Percy

This message is a reply to:
 Message 189 by Fred Williams, posted 07-10-2002 2:16 PM Fred Williams has replied

Replies to this message:
 Message 197 by Fred Williams, posted 07-11-2002 2:43 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 200 of 224 (13372)
07-11-2002 5:21 PM
Reply to: Message 197 by Fred Williams
07-11-2002 2:43 PM


Fred Williams writes:

No it isn’t. It’s a pattern, and is a product of an algorithm. This isn’t worth dwelling on, because it’s not important.
The point itself isn't important, but it matters in another way because it *does* indicate you don't understand the model you're critiquing. The coefficients are the analog of genes and are the product of random mutation, not the product of the algorithm. What the algorithm produces is the expression of the organism within the environment in the form of an integer sequence, precisely analogous to biological organisms.
The reason this is significant is independent of any confusions you may be suffering about information theory. The model illustrates random changes effecting improvement in the organism through selection, precisely what you claim information theory says is impossible. Therefore you misunderstand information theory.
This is clear on its face when you make ridiculous claims such as that intelligence is part of information theory, which it most certainly is not. We can't define intelligence with any mathematical rigor today, and we couldn't define it back in the 1950s when Shannon did his seminal work. You've gotten way out in left field.
Randomness and uncertainty are at the core of information theory. Since you can't communicate information to anyone who already possesses it, communication essentially comes down to the ability to predict the next bit in a stream. To the extent that the next bit is predictable, it is not information; the greater the degree of unpredictability, the greater the potential information content.
Where you discuss Shannon's paper, I think you've confused the definition of information with approaches for reducing error introduced by noise when communicating information across a channel.

Even if we get an occasionally lucky hit, the odds of it being detected by selection and surviving are very low (no better than 1 in 50, Fischer, Futuyma, et al).
Lucky hit? How could there be a lucky hit if information theory really rendered it impossible? A bit of equivocation, Fred?
About information loss, I don't understand why you're pressing me about it as if I thought it couldn't happen. My focus is on your erroneous assertion that information theory rules out beneficial changes stemming from random mutation.
--Percy

This message is a reply to:
 Message 197 by Fred Williams, posted 07-11-2002 2:43 PM Fred Williams has replied

Replies to this message:
 Message 210 by Fred Williams, posted 07-12-2002 6:36 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 203 of 224 (13377)
07-11-2002 6:21 PM
Reply to: Message 199 by Fred Williams
07-11-2002 4:39 PM


Fred Williams writes:

The AIG reference shows the information was transferred from another bacteria, so no new information via randomness.
We may have a terminology problem here. When information is transmitted, that information is old from the point of view of the transmitter (eg, "My birthday is..."), but it would be new information from the point of view of the receiver. We should probably be careful how we use these terms. I suggest "new information" for information received that we didn't already possess, and "creation of information" for information that is original.
Using this terminology, the bacterium receiving the gene now possesses new information that it did not have before. Gene transfer is a type of mutation. This mutation has added information to the bacterium's genome that was not previously there, and the bacterium may now possess a new function that it did not previously have.
This mutation may presumably also arise through random mutation rather than through gene transfer, in which case we have creation of information.
For example, keeping things simple, say the new gene is the nucleotide sequence AGCT, and it gets added to the bacterial genome through gene transfer from a different but related bacterial species, providing a new and beneficial function. This information is new to the bacterium. The gene could also have been added through random reproduction errors over time, which would be creation of information.

Again, its an altered algorithm, not a new one.
I think this is another terminology issue. A slight modification to an algorithm that allows it to count to 11 instead of 10 is different, or new, or modified; pick your term, but it is not the same algorithm. In the real world a slight modification to a gene in a cheetah may enable it to run at 61 mph instead of just 60. Is the change something new? Something modified? Something different? Pick whatever label you like; the genomic change has made the cheetah more effective at pursuing prey, which you're claiming is impossible.
For evidence of this happening, namely a functional change being traced to a specific mutation, you need go no further than the fields of bacteriology and virology. The AIDS virus is a good example. Some positive AIDS mutations (for it, not for us) are so probable they happen over and over and over again.
--Percy

This message is a reply to:
 Message 199 by Fred Williams, posted 07-11-2002 4:39 PM Fred Williams has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 207 of 224 (13409)
07-12-2002 8:54 AM
Reply to: Message 201 by Fred Williams
07-11-2002 5:31 PM


SLPx quotes Kimura:

"...natural selection is a mechanism by which new genetic information can be created. Indeed, this is the only mechanism known in natural science which can create it."
Fred replies:

Amazing! I’m curious. Who here truly believes that new genetic information can be created merely by natural selection alone? Any takers, other than Scott?
Either there is more to Kimura's point, eg, some kind of qualification related to equating new genetic information with permutational recombinations of existing alleles, or eg, the first part of the sentence that was excised mentions additional mechanisms, etc, or I have to share Fred's skepticism that natural selection alone can create new genetic information.
--Percy

This message is a reply to:
 Message 201 by Fred Williams, posted 07-11-2002 5:31 PM Fred Williams has not replied

Replies to this message:
 Message 218 by derwood, posted 07-15-2002 2:20 PM Percy has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 211 of 224 (13468)
07-13-2002 2:07 PM
Reply to: Message 210 by Fred Williams
07-12-2002 6:36 PM


Fred writes:

Precisely analogous? Come on now, that is quite a stretch. Is this integer sequence a sequence of instructions? If not, then why do you keep claiming it’s an algorithm?
You still misunderstand the model. The bits are the organism, the sequence of states within those bits is the expression of the organism within the environment, the coefficients are the genes which control the sequence of states, and the source of mutation of the coefficients is a random number generator.
The coefficients, ie, the genes, control the next state of each bit, which is the expression of the organism within the environment, just as our own genes control the expression of ourselves in our own environments. The algorithm to determine the next state of each bit is simply a sum of terms where the terms are a function of the values of the coefficients. As I told you originally, it's basically a PLA with states, ie, a state machine.
In case it helps, here's the link to the C++ program again: Ring Counter Evolution

Regardless, my point earlier was that even if your simulation was producing an algorithm, Terra already attempts this. But evolutionists themselves question the claims made by the Terra crowd and they will question yours. It’s just not what you think it is, Percy. If it was as you boldy claim, you would have a serious shot at the Nobel Prize!
You seem to be operating under some strange illusion that the scientific world is in alarm hoping for a solution to the seemingly intractable problem of how evolution could possibly happen when information theory says it's impossible. The reality is that the only people who think there's a problem are a few Christian evangelicals. One doesn't win a Nobel Prize for demonstrating the obvious.

Percy, I’m sorry but I find it amazing and ironic that you would call ridiculous the undeniable fact that Shannon information requires an intelligent sender.
You're still way out in left field with this intelligence business.
First, to answer a related point you make, no, computers and spacecraft are not intelligent.
Second, the transmission of information does not require an intelligent sender. When you look up at the stars there are no intelligent aliens out there sending the starlight, yet there's so much information in that light that we've been able to deduce the processes of the stellar furnace and the age of the universe. While it took intelligence to make sense of the information, it took no intelligence at all to either send or receive it.
Shannon's model of transmitting information only requires a sender, a communications channel and a receiver, and as expressed in his paper these were automated, not intelligent, devices.
You're only confusing yourself when you attempt to add value-laden judgments like "utility" to information theory. A telegraph clerk during WWII receives a coded cipher from the underground, "The pebbles fall lightly," and he has no idea what it means. It is useless to him. Does that mean no information was transmitted? He brings the message to his superiors, who check their cipher books and learn that the message means to schedule a parachute drop of supplies that night. In your view no information was transmitted from the underground to the clerk, and not even from the clerk to his superiors since the clerk didn't understand the message, but only from the codebook to the clerk's superiors? Is this like the immaculate conception of information, where it suddenly springs forth from no source? Or is it the quantum uncertainty theory of information, where information isn't really information until someone who understands it examines it? I don't think so.
You'll only be able to start making sense of information theory once you set aside the elements that are value judgments, such as what is useful and what is intelligent. They play no role in information theory.
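The point about the cipher clerk can be made concrete with a few lines of code. This is my own illustrative sketch, not anything from the original exchange: Shannon's measure is computed purely from symbol statistics, so an enciphered message the clerk cannot read measures exactly the same as the plaintext he could.

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

plain = "the pebbles fall lightly"
# A simple substitution cipher: the clerk can't read the result, but the
# symbol statistics -- and therefore the Shannon measure -- are unchanged.
cipher = plain.translate(str.maketrans("abcdefghilpsty ", "QWERTYUIOZXCVB."))
print(abs(entropy_bits_per_symbol(plain)
          - entropy_bits_per_symbol(cipher)) < 1e-12)  # prints True
```

Nothing in the computation asks whether the sender, the channel, or the receiver is intelligent, or whether anyone finds the message useful.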
Percy writes:
About information loss, I don't understand why you're pressing me about it as if I thought it couldn't happen.
Fred replies:

Ah, but I think it gets to the very core of your confusion, and I believe dismantles your logic. Why can’t you provide me one example of genetic loss of information at the genetic level that would satisfy you?
Fred, I don't know where you're picking up this strange interpretation, but of course I think information can be lost. Either you're trying to make some non-obvious point known only to yourself, or you're not paying attention. If it helps to get past this particular red herring, simply eliminate a nucleotide, a gene, a chromosome, or an organism that possesses the last existing copy of a particular allele.
But this is irrelevant. The point being challenged is your assertion that information theory rules out the possibility of random mutation creating new genetic information. Of course mutation can create new information. What could possibly prevent it?
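To put one hedged illustration behind that rhetorical question (my own toy example, not anything from the original posts): apply two ordinary copying errors, a duplication followed by a point mutation, to a toy genome and its total Shannon information goes up.

```python
import random
from collections import Counter
from math import log2

def total_bits(seq: str) -> float:
    """Total Shannon information: sequence length times entropy per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -n * sum((c / n) * log2(c / n) for c in counts.values())

random.seed(1)
genome = "ACGGTACGTTAGC"
dup = genome + genome[3:8]                 # duplication: re-copy a segment
i = random.randrange(len(dup))             # point mutation at a random site,
new_base = random.choice("ACGT".replace(dup[i], ""))  # guaranteed to differ
mut = dup[:i] + new_base + dup[i + 1:]
print(total_bits(genome) < total_bits(mut))  # prints True
```

The longer, more varied sequence carries more bits in Shannon's sense, whatever one thinks of its biological "utility" — which, as argued above, is a value judgment outside the theory.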
--Percy

This message is a reply to:
 Message 210 by Fred Williams, posted 07-12-2002 6:36 PM Fred Williams has not replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 212 of 224 (13476)
07-13-2002 4:35 PM
Reply to: Message 210 by Fred Williams
07-12-2002 6:36 PM


Hi Fred!
Sorry to split this into two replies, but another thought came to me about one point you made:

Regardless, my point earlier was that even if your simulation was producing an algorithm, Terra already attempts this. But evolutionists themselves question the claims made by the Terra crowd and they will question yours. It’s just not what you think it is, Percy. If it was as you boldly claim, you would have a serious shot at the Nobel Prize!
I don't know who Terra or the "Terra crowd" are, or who the evolutionists you're thinking of are, but this all seems like the most obvious quackery. We create computer models of natural processes all the time. We know DNA is the blueprint for the organism, we understand how reproduction works at a genetic level, we know reproductive errors of various sorts occur, yet you somehow think creating a model of this process is so incredibly difficult and thorny a problem that not only have others struggled with it and failed, but that a solution is worthy of a Nobel Prize?
In reality, creating such a model is easy. It takes about an hour. Mine is a very simple model because it only addresses your specific point that random mutation cannot create new information. Creating such models only becomes rocket science to someone determined to believe the process being modeled is impossible.
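For what it's worth, a model of this kind really does fit on a page. The sketch below is my own stand-in, not Percy's actual program: copies are made with random copying errors, the fittest copy is kept each generation, and an arbitrary fixed target string stands in for whatever fitness function one prefers.

```python
import random

ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"   # hypothetical fitness peak, chosen arbitrarily

def fitness(genome: str) -> int:
    """Count positions matching the target (a stand-in selection criterion)."""
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.05) -> str:
    """Copy the genome with random per-site copying errors."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in genome)

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
for generation in range(500):
    offspring = [mutate(parent) for _ in range(50)]
    parent = max(offspring, key=fitness)   # selection keeps the best copy
print(f"best fitness after 500 generations: {fitness(parent)}/{len(TARGET)}")
```

Random mutation alone supplies the variation; selection alone supplies the direction. Nothing in the loop knows anything about the target except which of two copies scores higher.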
--Percy

This message is a reply to:
 Message 210 by Fred Williams, posted 07-12-2002 6:36 PM Fred Williams has not replied

Replies to this message:
 Message 213 by gene90, posted 07-13-2002 6:07 PM Percy has replied

  
Percy
Member
Posts: 22391
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 214 of 224 (13479)
07-13-2002 7:11 PM
Reply to: Message 213 by gene90
07-13-2002 6:07 PM


Thanks, Gene, found it. Terra Nornia is just one of many worlds created for the Creatures game, which apparently comes in versions 1, 2, and 3, none of which I know anything about. If there's a "Terra crowd" using the Creatures game to do serious modeling of mutation and natural selection, I couldn't find it. This seems more like a game than a serious simulation to me, more on the order of The Sims. The websites are full of pictures like this:
--Percy
[This message has been edited by Percipient, 07-13-2002]

This message is a reply to:
 Message 213 by gene90, posted 07-13-2002 6:07 PM gene90 has replied

Replies to this message:
 Message 215 by gene90, posted 07-13-2002 11:10 PM Percy has not replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
