EvC Forum: Understanding through Discussion

Author Topic:   What is an ID proponent's basis of comparison? (edited)
Percy
Member
Posts: 22499
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.9


Message 151 of 315 (516877)
07-27-2009 9:02 PM
Reply to: Message 133 by Smooth Operator
07-27-2009 3:23 PM


Smooth Operator writes:
Yes, I know about that. The problem is that the algorithm itself already had all the information to produce the design.
And you somehow know this without a copy of the program?
Genetic algorithms work in the same way as evolution. They're simply computational models of evolution. The reason you deny the possibility of random mutations is because they create new information, and the random mutations generated by genetic algorithms create new information in the same way. But GA's are not the topic of this thread. You should probably propose a new thread if that's what you want to talk about.
The information is already in the genome. The mechanisms that the cell has helps the cell adapt. It selects the best possible expression of already existing information.
What you describe has never been observed to happen. When under antibiotic stress the bacteria that survive experience a wide variety of different mutations. The bacteria that happened to receive resistance-conferring mutations survive and pass these mutations on to the next generation. It's the familiar process of descent with modification followed by selection of the organisms that will contribute to the next generation.
Wrong. I said it three times already. You are confusing the mutation repair and mutation inducing mechanisms. When LexA is turned ON there are mutations. When it's turned OFF, there are no mutations.
No, I had it right, but I was insufficiently clear. The article "it" referred to the genetic repair mechanism, not the LexA.
--Percy

This message is a reply to:
 Message 133 by Smooth Operator, posted 07-27-2009 3:23 PM Smooth Operator has replied

Replies to this message:
 Message 161 by Smooth Operator, posted 07-28-2009 3:38 PM Percy has replied

Percy


Message 152 of 315 (516879)
07-27-2009 9:29 PM
Reply to: Message 134 by Smooth Operator
07-27-2009 3:43 PM


Smooth Operator writes:
The NFL theorems say that no algorithm can outperform any other on the average unless it takes advantage of prior information about the target.
And what two algorithms are you comparing in the bacteria?
And since random chance doesn't create new information...
But creating new information is precisely what random chance does. Here's an example.
Consider a specific gene in a population of bacteria that has three alleles we'll call A, B and C. For lurkers not familiar with the term, alleles are variants of a single gene. One familiar example is eye color. The eye color gene has several alleles: brown, blue, green, etc. Human eye color depends upon which one you happen to inherit. Eye color isn't really this simple of course, but this hopefully gets the idea of alleles across.
So every bacterium in the population has either the A allele, the B allele or the C allele. We can calculate how much information is required to represent three alleles in this bacterial population. It's very simple:
log₂(3) = 1.585 bits
Now a random mutation occurs in this gene during replication and the D allele appears. Through the following generations it gradually spreads throughout the population and becomes relatively common. There are now four alleles for this gene, A, B, C and D. The amount of information necessary to represent four alleles is:
log₂(4) = 2 bits
The amount of information required to represent this gene in the bacterial population has gone from 1.585 bits to 2 bits, an increase of 0.415 bits, and an example of random chance increasing information.
--Percy

This message is a reply to:
 Message 134 by Smooth Operator, posted 07-27-2009 3:43 PM Smooth Operator has replied

Replies to this message:
 Message 162 by Smooth Operator, posted 07-28-2009 3:45 PM Percy has replied

Percy


Message 170 of 315 (517038)
07-29-2009 4:00 AM
Reply to: Message 161 by Smooth Operator
07-28-2009 3:38 PM


Smooth Operator writes:
Because ALL algorithms work like that. Even simple calculators. Can they give you a number that was not programmed into them? No, obviously not.
You've been misinformed. Here's a simple C++ program that multiplies two integers. There are no numbers programmed into it:
// Simple multiply program

#include <iostream>
#include <string>
#include <cstdlib>

using namespace std;

int main(int argc, char** argv) {

    string strA, strB;
    int intA, intB;

    cout << "Multiply two numbers" << endl;

    cout << "Enter number 1: ";
    cin >> strA;
    intA = strtol(strA.c_str(), NULL, 10);

    cout << "Enter number 2: ";
    cin >> strB;
    intB = strtol(strB.c_str(), NULL, 10);

    cout << "Result: " << strA << "*" << strB << " = " << intA*intB << endl;
}
If you have access to a C++ compiler then give it a try - it works, and as you can see, no numbers are pre-programmed in.
A genetic algorithm models evolution, just as meteorological programs model the weather, or NASA programs model the trajectories of spacecraft. The answers are not already programmed into these programs. What would be the point of writing a program to find an answer you already know?
Numbers are not programmed into simple calculators, either. Do you really believe that somewhere in your calculator is a "2" times table for all the possible numbers you can multiply by "2" and the answers, and a "3" times table for all the possible numbers you can multiply by "3" and the answers, and so on? Calculators and computers today use ALUs (Arithmetic Logic Units) that at their heart are just gates and flops implementing complex functions like multiplication from simpler functions like full adders. (Just for completeness I'll mention that there are tables of numbers involved for the proper representation and manipulation of certain standards, like the IEEE standard for fixed and floating point values.)
But genetic algorithms in a simulation were designed to find the specified target.
What would be the point of writing a program to find a solution you already know? The target of genetic algorithms is not specific. The solution is not known in advance, just as you presumably don't know in advance the product of two numbers you enter to the multiply program. Genetic algorithms are seeking a solution in the design space that satisfies specified parameters. They are a very effective method of exploring very large design spaces that couldn't be successfully explored using more random permutational techniques.
Evolution in real life has no knowledge about what it is looking for?
Yes, just like the genetic algorithms that model evolution. There's a set of parameters evolution seeks to satisfy that in the aggregate are equivalent to survival to reproduce, but it has no specific goal.
Are you saying that random mutations are able to produce Shannon information? Yes that is true. But not CSI.
CSI is just a concept made up by William Dembski. I can tell you how much information is in a stretch of DNA. If CSI had any reality then you could tell me how much CSI was in the same stretch, but you can't.
If CSI were real then ID scientists around the globe would be making new discoveries every year based upon the CSI concept, improving and extending our knowledge of our world and universe. Advances in the development of new drugs would be carried out by scientists applying the principles of CSI instead of evolution. The next generation of scientists would be flooding to Bible colleges and the Discovery Institute so they'd have the best chance of winning the Nobel Prize. And William Dembski would himself receive the Nobel Prize, be knighted by the queen, and receive world-wide approbation.
Instead Dembski is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, where he teaches courses in its Department of Philosophy of Religion, and CSI has no standing within the scientific community whatsoever because in truth it is just a prop invented to give a scientific-looking veneer to what at heart is just the religious concept of special creation by God.
Actually we are talking about the same thing, only in different terminology.
I don't think so. Your position is resistance-conferring mutations are a deterministic result of the presence of antibiotics. My position is that resistance-conferring mutations are the ones selected from the millions of mutations that actually occur.
Then why doesn't my article say that LexA has to be turned on for resistance to be acquired?
Do you mean "Inhibition of Mutation and Combating the Evolution of Antibiotic Resistance" (doi:10.1371/journal.pbio.0030176)? I don't see anywhere in the paper where it refers to LexA turning on and off. It talks about LexA derepressing the SOS response mechanism when cleaved. LexA turning on and off is terminology you invented yourself.
--Percy

This message is a reply to:
 Message 161 by Smooth Operator, posted 07-28-2009 3:38 PM Smooth Operator has replied

Replies to this message:
 Message 174 by PaulK, posted 07-29-2009 7:50 AM Percy has replied
 Message 181 by Smooth Operator, posted 07-30-2009 4:33 AM Percy has replied

Percy


Message 171 of 315 (517041)
07-29-2009 4:33 AM
Reply to: Message 162 by Smooth Operator
07-28-2009 3:45 PM


Smooth Operator writes:
quote:
And what two algorithms are you comparing in the bacteria?
A random search and an evolutionary algorithm.
Evolution already performs a random search because mutations are random. How is your random search different from evolution?
You are wrong because you are using the wrong definition of information.
I'm using Shannon information.
All of the necessary information was already there.
You think the information for allele D was already there? Where was it then?
The reason you can't answer that question is because allele D was caused by a random change (a mutation) to one or more nucleotides of allele A, B or C. It didn't exist before the mutation occurred. It appeared out of thin air, created by random chance.
You can't use Shannon's information and apply it to biological information because it only concerns itself with the statistical aspect of information. It still has to take into account syntax and semantics.
Shannon information can be applied to anything in the real world, including DNA. In evolution the information problem is one of how to reliably communicate the specific set of messages contained in the DNA to the next generation. All the alleles of all the genes of a population form the complete message set, and each individual in the population possesses a specific subset of that message set that it needs to communicate to offspring during reproduction. Any errors in communication of this DNA message to offspring are retained by the offspring and become part of the population's collective genome, making the message set larger and increasing the amount of information.
Semantics are irrelevant in information theory.
--Percy
Edited by Percy, : Decided not to comment about SM's next to last paragraph.

This message is a reply to:
 Message 162 by Smooth Operator, posted 07-28-2009 3:45 PM Smooth Operator has replied

Replies to this message:
 Message 182 by Smooth Operator, posted 07-30-2009 4:39 AM Percy has replied

Percy


Message 175 of 315 (517054)
07-29-2009 9:10 AM
Reply to: Message 174 by PaulK
07-29-2009 7:50 AM


Re: Three failures of CSI
Thanks for the detailed critique!
Sometimes I feel that giving CSI and Dembski's other ideas this kind of serious attention dignifies them far beyond what they deserve. Dembski has draped CSI in mathematical trappings, but it is in essence just a made-up idea (and an unoriginal one at that) constructed with no testing against reality. In his books he never presents any actual research data, he makes things up (the law of conservation of information, the probability bound, the inclusion of semantics as a facet of information theory), he never shows how CSI can actually be calculated (anyone know what the units of CSI are?), and he never points to any successful predictions.
--Percy

This message is a reply to:
 Message 174 by PaulK, posted 07-29-2009 7:50 AM PaulK has replied

Replies to this message:
 Message 176 by PaulK, posted 07-29-2009 9:36 AM Percy has seen this message but not replied

Percy


Message 194 of 315 (517328)
07-31-2009 7:37 AM
Reply to: Message 181 by Smooth Operator
07-30-2009 4:33 AM


Smooth Operator writes:
You gave it all the information it needed.
I gave the multiply program the same information any human would have. The program is producing the same information that a person multiplying two numbers together would produce. The program could even be modified to model a person carrying out multiplication by hand using pencil and paper. If a person multiplying two numbers together is producing new information, then so is a computer.
You misunderstood me. I didn't exactly mean that ALL the numbers are programmed in. The algorithms for those numbers are programmed in. You gave the computer enough information to process it to get the desired result. If you didn't, it would give you no result.
Concerning GA's, of course the algorithm is "programmed in." GA's model evolution, so of course an algorithm that models evolution is "programmed in." The random mutations of evolution are modeled by random changes to design parameters. Natural selection is modeled by an assessing algorithm. Reproduction is modeled by randomly "mating" design alternatives and randomly combining their design parameters.
The point is for the computer to do the boring job of calculation faster than you. It is given a search space that people find too boring to search themselves. All the answers are already there, but we have to do a lot of calculations to find them. That is why we use computers. To do the dirty work, so to speak.
Are you saying that before a design team even gathers that the solutions are already there, that they just have to find them? This is a much more sweeping argument than you were making before. In effect you're saying that neither computers nor people produce new information. Apparently for you the solutions are already out there just floating around somewhere waiting to be discovered.
I think you're confusing the potential to produce a design with the design itself when you say the solutions already exist, and that the designer's task is just a matter of finding them. The multiply program I provided as an example has the potential to solve many multiplication problems, but that doesn't mean the answers already exist. When you run the program and enter two numbers, you get a result you didn't know before. New information has been created for you.
And all of those parameters are put in by an intelligence. If there were no initial parameters, the algorithm would do no good.
The initial parameters are part of the model. Just as evolving bacteria in a laboratory experiment have initial conditions, so must any computer model of evolution. The evolutionary model must have access to the same information as the real world (or at least a reasonable approximation, or analogous information in the case of GA's). The principles of modeling the real world are the same regardless of whether one is modeling the weather or evolution.
This is just silly. If you read No Free Lunch by Dembski you will see he calculated the CSI for a flagellum.
I'd love to see this calculation. Could you please provide it?
quote:
Instead Dembski is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, where he teaches courses in its Department of Philosophy of Religion, and CSI has no standing within the scientific community whatsoever because in truth it is just a prop invented to give a scientific-looking veneer to what at heart is just the religious concept of special creation by God.
This is no more than slander. If you look up Dembski at wikipedia you will see more than a philosophy degree.
I think that if you look up Dembski at Wikipedia you'll find that what I said was true. He really is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, and he really does teach courses there in its Department of Philosophy of Religion. And gee, Wikipedia says the exact same thing!
And please do look up Biologic institute where ID science is being done. Just because you don't know about it, doesn't mean it isn't there.
Biologic Institute
If you think there's relevant research from the Biologic Institute then please just enter it into the discussion.
Cleaved or uncleaved, turned on or off, call it what you will. It's talking about interfering with its activity.
But if you're going to invent your own lingo you have to tell people what it means. In this case there's no way to know whether your "turned on" corresponds to cleaved or uncleaved.
--Percy
Edited by Percy, : Grammar.
Edited by Percy, : No reason given.

This message is a reply to:
 Message 181 by Smooth Operator, posted 07-30-2009 4:33 AM Smooth Operator has replied

Replies to this message:
 Message 200 by Smooth Operator, posted 08-01-2009 9:35 PM Percy has replied

Percy


Message 195 of 315 (517330)
07-31-2009 8:03 AM
Reply to: Message 182 by Smooth Operator
07-30-2009 4:39 AM


Smooth Operator writes:
quote:
Evolution already performs a random search because mutations are random. How is your random search different from evolution?
It's not. That's the problem for you.
You were saying the NFL theorem says that evolution can be no better than random search. Just repeating that claim is no help to me. In order for me to assess your claim you need to provide a general idea of how random search differs from evolution.
quote:
I'm using Shannon information.
Which can't be used for biological functions.
Sure it can. In Shannon information the problem of communication can be reduced to reproducing at one point a message from a set of messages at another point. Everything that happens in the universe can be interpreted this way.
quote:
Shannon information can be applied to anything in the real world, including DNA.
No, it cannot, because it deals only with the statistical aspect of information.
This is untrue, but the real question is why you believe that statistical approaches are excluded from the biological realm.
No, this is wrong. This has never been observed. It is true that mistakes happen, and that they get passed on. But it is not true that informational content increases. It can only degrade over time.
I understand that you accept the claims of people like Dembski, Abel and Trevors, but you need to go beyond just repeating their claims. I provided an example of how the amount of information in a population is increased by random mutation. If you think I was incorrect then you have to go beyond just stating I'm wrong. You have to show how I'm wrong. Here's the example again:
Consider a specific gene in a population of bacteria that has three alleles we'll call A, B and C. For lurkers not familiar with the term, alleles are variants of a single gene. One familiar example is eye color. The eye color gene has several alleles: brown, blue, green, etc. Human eye color depends upon which one you happen to inherit. Eye color isn't really this simple of course, but this hopefully gets the idea of alleles across.
So every bacterium in the population has either the A allele, the B allele or the C allele. We can calculate how much information is required to represent three alleles in this bacterial population. It's very simple:
log₂(3) = 1.585 bits
Now a random mutation occurs in this gene during replication and the D allele appears. Through the following generations it gradually spreads throughout the population and becomes relatively common. There are now four alleles for this gene, A, B, C and D. The amount of information necessary to represent four alleles is:
log₂(4) = 2 bits
The amount of information required to represent this gene in the bacterial population has gone from 1.585 bits to 2 bits, an increase of 0.415 bits, and an example of random chance increasing information.
All you have to do is point out the error.
--Percy
Edited by Percy, : Spelling.

This message is a reply to:
 Message 182 by Smooth Operator, posted 07-30-2009 4:39 AM Smooth Operator has replied

Replies to this message:
 Message 201 by Smooth Operator, posted 08-01-2009 9:48 PM Percy has replied

Percy


Message 207 of 315 (517694)
08-02-2009 6:55 AM
Reply to: Message 200 by Smooth Operator
08-01-2009 9:35 PM


Smooth Operator writes:
But your program is using the already programmed in instructions from the computer.
I think you meant to say that the instructions come from people, right?
But a person performing multiplication using pencil and paper is just following the instructions he learned in fifth grade. There's no difference between a person following an algorithm and a computer following an algorithm when it comes to creating new information.
The implication of your position is that no new information has been created from performing multiplication since someone first figured out how to do it. That inventor of multiplication created new information, and everyone since has just been following instructions. And even the inventor of multiplication was just taking advantage of information he was taught by others before him and merely economized by showing how people could perform multiplication of many digits just by memorizing the times table for single digits, so he didn't create new information, either.
Obviously that's an unworkable definition of new information.
Shannon defined the problem of communication as one of replicating at one point a message from a set of messages originating from another point. When a message from the message set is sent from point A to point B then information has been communicated.
So it works like this. A person sending you messages from his message set (his personal store of knowledge that he keeps in his brain) is adding to your own personal message set every time he tells you something you didn't already know. For you, everything you didn't already know is new information. You add it to your personal message set, and now this becomes a message that you can send to someone else.
So let's say you're chatting online with someone who tells you that 17x26 is 442. This is new information for you. You could easily have figured it out yourself, but you didn't, so your online friend has now added information to your message set. Your message set has increased in size. For the length of time that you remember that 17x26 is 442, this is a message that you can pass on to others, thereby increasing their personal message sets.
But it makes no difference where the message that 17x26 is 442 came from. If you had instead used your calculator you would have still added new information to your message set. In other words, it doesn't matter if the new information came from a person or an object. For all you care the clouds could have formed into the equation "17x26=442" in the sky and it would still represent new information for you.
In other words, the creation of new information doesn't mean that the same information hasn't been created before. It would make no sense to say that of two independent inventors who create the same invention with no knowledge of the other's work, that the inventor who completed the invention first created new information and the other did not.
So new information is everything you learn that you didn't already know. The source of the information is irrelevant.
All that remains is to add to this the fact that information is sent and received by everything everywhere in existence. In other words, the sharing and creation of information is not a special trait of human beings. It is possessed by all matter everywhere.
Ah, but here comes the problem. Evolution has no knowledge of the search target. Therefore it's as useful as blind chance.
This is half correct. Mutation has no knowledge of any "search target," but selection is the very opposite of random. The best adapted survive and contribute their genes to the next generation, including any mutations they might have. That's why white rabbits evolve in the arctic and not the rain forest. If evolution were truly random then white rabbits could evolve anywhere.
So we're back to the same question. You cited the NFL theorem which holds that one algorithm cannot perform better than another algorithm unless it has more information. So you're talking about two different algorithms, one that you call "evolution," and the other that you call "random". How does the "evolution" algorithm differ from the "random" algorithm?
No, it hasn't. It has only been processed by the algorithm you produced. All the relevant information to produce it was already in there. The whole search space was in there from the start. You just optimized an algorithm to find it faster than blind chance.
So the whole search space is there from the start, and if designers search the search space and find a solution, then that is new information. And if a computer searches the search space and finds a solution, then that's not new information.
Your position keeps knocking into contradictions.
quote:
"... no operation performed by a computer can create new information."
Look, no operation by a computer can create new information. It's a well known fact.
The Evolutionary Informatics Lab - EvoInfo.org
But you can't just cite Mr. Robertson. You have to understand why Mr. Robertson said this and explain here why I'm wrong. Otherwise I can go off and search the web for quotes of people saying that computers *can* create new information. The purpose of discussion isn't to make arguments from authority, otherwise we'll end up arguing who cited the best authority. The goal is to actually understand what you're debating to the point where you can make the arguments yourself.
But since you offered a bare reference with no argument I will do the same. Read this rebuttal from What is thought? by Eric B. Baum, especially the part beginning in the middle of page 429 and that concludes like this:
Eric B. Baum writes:
And how did the information come into the DNA program? Through evolution, which potentially reflects copious information, perhaps 10^35 bits of feedback.
Moving on:
quote:
I'd love to see this calculation. Could you please provide it?
It's in the book. But I did manage to find an online version.
It's from pages 289-302.
Dembski - No Free Lunch
You refer me to a Google Books page in Croatian? That doesn't work.
If you have an argument to make about CSI based upon Dembski's book No Free Lunch, could you please enter the argument into the discussion in your own words?
The point remains that you didn't mention his other education degrees. Like these:
I didn't mention any of Dembski's degrees. The point is that scientists aren't producing advances based upon CSI, not even Dembski who is working as a professor at a Bible college where he teaches courses in the philosophy of religion. If you think the Biologic Institute is producing evidence of CSI, then I think it would be highly relevant to this discussion if you would tell us about it.
The article says that they interfere with the working of LexA and the evolution of resistance stops.
You're just stating your original position again.
I have no idea why the authors of the article chose to overstate the point. Obviously evolution does not stop. There is no process that can make the copying of genetic material perfect.
--Percy

This message is a reply to:
 Message 200 by Smooth Operator, posted 08-01-2009 9:35 PM Smooth Operator has replied

Replies to this message:
 Message 212 by Smooth Operator, posted 08-02-2009 12:04 PM Percy has seen this message but not replied

Percy


Message 208 of 315 (517698)
08-02-2009 7:35 AM
Reply to: Message 201 by Smooth Operator
08-01-2009 9:48 PM


Smooth Operator writes:
quote:
You were saying the NFL theorem says that evolution can be no better than random search. Just repeating that claim is no help to me. In order for me to assess your claim you need to provide a general idea of how random search differs from evolution.
It doesn't. They give you the same results on average.
That they give the same results is what you claim the NFL theorem tells us about the two different algorithms, "random" on the one hand and "evolution" on the other. How do these two algorithms differ in their definition? I know how evolution works. How does this "random" algorithm that you're contrasting evolution with work?
Uh, no. I cited the article where it says that it can't. Did you miss it? If his model does not account for semantics then it can't be used to measure biological function.
I think you're confusing what a gene does with meaning. Meaning and semantics are a human interpretation. Semantics cannot be quantified, is not part of information theory, and isn't even relevant. This goes back to Shannon's original paper, A Mathematical Theory of Communication:
Shannon writes:
Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
This is as true today as it was then.
quote:
This is untrue, but the real question is why you believe that statistical approaches are excluded from the biological realm.
Oh, but it's very true.
Yes, I know you believe this, but can you support your position with evidence and arguments? Your most common responses to everyone seem to be variants of either "No, I'm right" or "No, you're wrong."
Population genetics is an extremely statistical science, and this flatly contradicts your position.
Almost all medical studies are statistical in nature, and this also flatly contradicts your position.
Need I go on providing examples?
But my original reason for responding was to point out you were wrong to say that Shannon information "deals only with the statistical aspect of information" in your Message 182. Are you talking about the quantification of information? Not statistical. Are you talking about the introduction of noise into communication? Very statistical. In other words, Shannon information has both statistical and non-statistical aspects. Like many things. I thought the Wikipedia article made this pretty clear.
So statistical approaches are appropriate in the biological realm. Indeed, where wouldn't statistical approaches be appropriate? Statistics is a tool (among many) that one can probably apply to virtually any problem.
Oh, you mean that D appears. Well, in that case, such a thing has never been observed.
Mutations not currently present in a population have never been observed? Could you please return to reality?
Furthermore having more genes does not equal more information.
My example was the addition of a single allele to a pre-existing gene, but gene duplication adds even more information. Let's go back to Shannon again, saying what I've already said, but I want to show you that I've been accurately describing information theory:
Shannon writes:
The fundamental problem of communication is that of producing at one point either exactly or approximately a message selected at another point...The significant aspect is that the actual message is one selected from a set of possible messages.
So if we increase the number of alleles in a gene from 3 to 4, the amount of information in the message set rises from 1.585 bits to 2 bits, an increase of 0.415 bits.
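For anyone who wants to check the arithmetic, here's a quick sketch in Python (the function name is my own, not something from information theory texts):

```python
import math

def bits_for_message_set(n_messages: int) -> float:
    """Bits needed to identify one message from a set of
    n equally likely messages: log2(n)."""
    return math.log2(n_messages)

before = bits_for_message_set(3)  # gene with 3 alleles
after = bits_for_message_set(4)   # a mutation adds a 4th allele

print(round(before, 3))           # 1.585
print(round(after, 3))            # 2.0
print(round(after - before, 3))   # 0.415
```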
You evidently thought I was talking about gene duplication when I was actually talking about a single mutation causing the addition of an allele, but let's talk about gene duplication using your example.
First you have this gene:
My house is big.
Then there's gene duplication and you have this:
My house is big.
My house is big.
We can argue about whether this represents more information or not, but we don't need to. Now the duplicated gene experiences a mutation and we get this:
My house is big.
My mouse is big.
And then another mutation:
My house is big.
My mouse is bit.
And another:
My house is big.
My mouse is lit.
And so on, every change creating new information. And assuming there was reproduction involved, this new gene now has the alleles "My house is big," "My mouse is big," "My mouse is bit" and "My mouse is lit." That's quite a bit of new information in the population.
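The allele bookkeeping above can be sketched in Python. This is a toy model of my own devising, with strings standing in for allele sequences:

```python
import math

# Hypothetical toy model of the duplication example above:
# each string stands in for an allele's sequence, and the set
# of distinct strings is the message set.
alleles = {"My house is big."}

# Gene duplication: the copy is identical, so the set of
# distinct alleles is unchanged at first.
alleles.add("My house is big.")

# Point mutations in the copy create genuinely new alleles.
for mutant in ("My mouse is big.", "My mouse is bit.", "My mouse is lit."):
    alleles.add(mutant)

print(len(alleles))                       # 4 distinct alleles
print(math.log2(len(alleles)))            # 2.0 bits to name one of them
```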
--Percy
Edited by Percy, : Improve formatting.

This message is a reply to:
 Message 201 by Smooth Operator, posted 08-01-2009 9:48 PM Smooth Operator has replied

Replies to this message:
 Message 214 by Smooth Operator, posted 08-02-2009 12:24 PM Percy has replied

Percy


Message 232 of 315 (517892)
08-03-2009 6:53 AM
Reply to: Message 227 by Parasomnium
08-02-2009 4:00 PM


Parasomnium writes:
traderdrew writes:
You are correct and I was wrong.
We don't often see this from creationists/ID-proponents. Well done.
Whoa, whoa, whoa, there, back the buggy up.
We don't often see this from anyone on either side.
--Percy

This message is a reply to:
 Message 227 by Parasomnium, posted 08-02-2009 4:00 PM Parasomnium has replied

Replies to this message:
 Message 243 by Parasomnium, posted 08-03-2009 12:30 PM Percy has seen this message but not replied

Percy


Message 233 of 315 (517893)
08-03-2009 7:05 AM
Reply to: Message 230 by kongstad
08-03-2009 1:03 AM


kongstad writes:
Lets keepit simple. By your claim the string 1 has the same information as the string 11. But then the string 1111 has the same content as 1 no?
I think we have to make sure how SO is really thinking about this. He may be saying that sending the message "MY HOUSE IS BIG" twice communicates no more information than sending it once. It's a little difficult to tell since there are so many details he doesn't make explicit.
Also, SO's "MY HOUSE IS BIG" example has the potential for creating confusion because it is not only a message, it's a message with meaning, and one of the ways that SO misunderstands information theory is that he thinks it includes semantics, apparently because he's been listening to Dembski, Gitt and Spetner.
I'll post a more detailed reply to SO when I have time.
--Percy
Edited by Percy, : Got a name wrong, "Werner" => "Spetner"

This message is a reply to:
 Message 230 by kongstad, posted 08-03-2009 1:03 AM kongstad has replied

Replies to this message:
 Message 235 by Wounded King, posted 08-03-2009 7:17 AM Percy has seen this message but not replied
 Message 236 by kongstad, posted 08-03-2009 8:16 AM Percy has replied

Percy


Message 234 of 315 (517895)
08-03-2009 7:13 AM
Reply to: Message 231 by Wounded King
08-03-2009 3:26 AM


Wounded King writes:
I know this is open to interpretation but wasn't it rather that the creationists went straight to the schools, through the school board, and that the actual case was bought by a parent unhappy with the teaching of ID?
About being open to interpretation, I'm not sure how, since the facts of how the case came to court are pretty clear, and they're pretty much the same as in all previous cases: creationists persuade a school board or a legislature to create a policy or law advancing the cause of creationism in the public schools, and, acting out of concern for science education, parents bring suit on the basis of separation of church and state.
--Percy

This message is a reply to:
 Message 231 by Wounded King, posted 08-03-2009 3:26 AM Wounded King has replied

Replies to this message:
 Message 238 by Wounded King, posted 08-03-2009 8:27 AM Percy has seen this message but not replied

Percy


Message 241 of 315 (517921)
08-03-2009 10:30 AM
Reply to: Message 236 by kongstad
08-03-2009 8:16 AM


kongstad writes:
Now The nurse might come out the door and say "It's a boy", a message which is 10 characters long, but they could have agreed on a protocol, such that she just displayed either a black or white piece of paper in the window in the door, white for boy and black for girl.
The information content would be the same for the father. But this is just because the set of possible messages is exactly 2, so either way she could at most communicate 1 bit of information.
Yes, exactly!
In case it helps SO, let me repeat what you just said in a slightly different way:
The way that information theory looks at this is that the nurse needs to communicate a message from a set of messages. In this case the set is closed (finite), and the messages of that set are:
  • It's a boy
  • It's a girl
The size of the message set is 2, so the number of bits necessary to communicate this information is:
log₂2 = 1 bit
It doesn't matter how many words are actually used to communicate this message ("We'd like to congratulate you on the birth of a son..."), you're still only communicating a single bit of information.
One might note that the message set is insufficient for describing the full range of possibilities, and that in reality we need an open set, or at least a larger one:
  • It's a boy
  • It's a girl
  • It's two boys
  • It's two girls
  • It's a boy and a girl
  • It's three boys
  • ...
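A quick Python sketch of kongstad's point, using hypothetical message sets of my own. The number of words in a message is irrelevant; only the size of the message set matters:

```python
import math

# Two encodings of the same two-message set: verbose sentences
# vs. a one-character code (white or black paper). The information
# per message is the same because the set size is the same.
verbose = ["It's a boy", "It's a girl"]
compact = ["W", "B"]

for message_set in (verbose, compact):
    print(math.log2(len(message_set)))  # 1.0 either way
```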
--Percy
Edited by Percy, : Typo.

This message is a reply to:
 Message 236 by kongstad, posted 08-03-2009 8:16 AM kongstad has not replied

Percy


Message 247 of 315 (517971)
08-03-2009 3:14 PM
Reply to: Message 214 by Smooth Operator
08-02-2009 12:24 PM


Hi SO,
You have a couple significant misconceptions about information theory that have to be addressed. You incorrectly believe that:
  1. In information theory, information can only come from a mind, an intelligence.
  2. In information theory, information includes meaning.
Taking these in order, let's examine your belief that information can only come from a mind or intelligence. Say I show you a flower and ask you to write down on a piece of paper how many petals it has. Once you've written that information down, where would you say the information came from? You'd answer that you created that information.
Now let's say I show you the same flower, but I ask you to close your eyes and write down how many petals it has. You'll respond that you can't do that, you can only guess, that you'll have to see the flower before you can write down how many petals it has.
Therefore, the information about the number of petals doesn't come from you, it comes from the flower. It turns out you didn't really create any information at all. It was new information to you, but you didn't create the information. Rather, the information was communicated to you via electromagnetic radiation (light).
We can even go beyond this to an example that doesn't involve people at all. How does a flower know to open its petals in the morning? It knows because the rays of sun communicate to the flower that the sun has risen and day has begun. No mind or intelligence was involved.
We can just as easily create examples that don't involve life at all. A pool of water receives information from the sun in the form of electromagnetic radiation and heats up.
It would be very convenient for your position if information were something that could only be created by a mind or intelligence, but that's not how it is defined in information theory. The problem of communicating information is one of sending one message from a set of possible messages from point A to point B. There's nothing in information theory about message sets only being created by minds, or that only minds can send and receive information.
Now let's examine your belief that information theory includes meaning. Most fundamental of all is the statement of Shannon himself that meaning is irrelevant to information, where we of course mean information in the formal sense that it is used in information theory. In his paper A Mathematical Theory of Communication Shannon wrote:
Shannon writes:
Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
I think I've quoted this to you several times now. I know that Spetner and Gitt claim that meaning is part of information theory, but their ideas have not had any influence at all within science. Their only audience is creationists. Their ideas are not underpinned by research and do not have any mathematical foundation.
But it isn't just that their ideas have been ignored by science; a simple thought exercise can convince anyone that meaning cannot be quantified. Just think about it. How would you quantify meaning? How much meaning is there in a pebble? A tree? The Mona Lisa? There's no answer. Meaning is an interpretation people make, and it is subjective.
What would it mean to have an increase in meaning? How would you add to the meaning of the Mona Lisa? Does the Mona Lisa have more or less meaning than the ceiling of the Sistine Chapel?
Or does the Mona Lisa have more meaning than a human being? Yes? No? Whatever your answer, how did you quantify the meaning so you could do the comparison?
Or does the Mona Lisa have more meaning than a chipmunk? Than a fly? Than a bacterium? How would you ever make the comparison?
Information theory is a very mathematical science, and our inability to quantify meaning, indeed to objectify it in any way, leaves meaning forever outside its realm.
Say you look within the cell to its DNA and think you find meaning there. What is the meaning that you find? Do you find love? Peace? Tranquility? Is the meaning that you find the same meaning that everyone else finds, which is required if there is any objective quality there?
The answer is no, of course not, you do not find this kind of meaning there. Meaning is subjective and can't be quantified. Everyone sees a different meaning. Some people see Jesus in a slice of pizza and find it an incredibly meaningful miraculous event, others shrug their shoulders and finish lunch.
What you're calling meaning inside DNA is actually no more than what it does. Coding portions of DNA specify sequences of amino acids to be strung together into proteins. For example, the portion of DNA specifying the amino acid sequence for the common protein hemoglobin has no meaning. It's just a specification. There's no meaning.
But DNA has plenty of information.
You have a few other misconceptions, for example:
No, it's actually totally correct. Since natural selection also has no knowledge of the search target. It does not know what function to select for. So the result is the same as blind chance.
There is no specific "search target". Whatever increases the chances of reproductive success will be selected. Natural selection is not random.
And as for the rabbits, that's probably an epigenetic factor.
That snowshoe hares turn white in winter is under genetic control. The trait of winter color change to white does not evolve in temperate climates. You can bring brown rabbits north to the Arctic, but they won't turn white in winter.
Polar bears are perhaps a clearer example, since their fur is always white. White bears do not evolve in temperate climes.
An even better example is the difference in fur color between Arctic and Antarctic baby seals. In the Arctic where there are more predators, especially polar bears, the baby seals of resident species have white fur. In the Antarctic where there are few surface predators, baby seals of resident species have dark fur.
And this is because natural selection is not random. Natural selection means that poorly adapted organisms die or produce fewer offspring, while well adapted organisms survive and produce more offspring. The biological world is continually getting more of what works and less of what doesn't. It isn't random.
quote:
I'd love to see this calculation. Could you please provide it?
It's in the book. But I did manage to find an online version.
It's from pages 289 - 302.
Dembski - No Free Lunch
You refer me to a Google Books page in Croatian? That doesn't work?
I'm sorry but it works for me.
The page displays, but the box for the text of the book is blank. But I played with it a bit more, and if you click on the right arrow of the pair labeled "Naslovnica" then it brings you to the table of contents. Click on the link for me - the same is true for you, right? Blank page, you have to click on that right arrow before any text appears?
Anyway, going to page 289 I find chapter section 5.10 titled "Doing the Calculation". It's actually much more than four pages. The first equation doesn't even appear until 297. If you think that Dembski has a method for calculating specified complexity, please describe it here in your own words.
quote:
That they give the same results is what you claim the NFL theorem tells us about the two different algorithms, "random" on the one hand and "evolution" on the other. How do these two algorithms differ in their definition? I know how evolution works. How does this "random" algorithm that you're contrasting evolution with work?
It picks sequences randomly.
So your random algorithm works like this: there's a mutation, and whether or not the mutation makes it to the next generation is random.
And evolution works like this: there's a mutation, and whether or not the mutation makes it to the next generation is a function of how well adapted the organism is to its environment.
This is consistent with the NFL theorem, because evolution takes more information into account than the random algorithm. Evolution includes information about the environment while your random algorithm does not.
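To make the contrast concrete, here is a toy Python sketch of my own construction, in the spirit of Dawkins' weasel program (not SO's algorithm or any real genetic algorithm). Both searches generate exactly the same kind of random mutations; the only difference is whether the survivor each generation is picked by fitness or at random:

```python
import random

TARGET = "MY HOUSE IS BIG"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Number of positions matching the target environment."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Randomly change each character with the given probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def search(selective, generations=2000, rng_seed=0):
    random.seed(rng_seed)
    current = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        offspring = [mutate(current) for _ in range(50)]
        if selective:
            current = max(offspring, key=fitness)  # selection by fitness
        else:
            current = random.choice(offspring)     # SO's "random" algorithm
    return fitness(current)

print(search(selective=True))   # approaches the maximum of 15
print(search(selective=False))  # hovers near chance level
```

The mutations are identical in both runs; only the selective version, which uses information about the environment (the fitness function), makes progress.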
quote:
Semantics cannot be quantified,
Of course it can. CSI does it perfectly. So do Abel and Trevors with their FSC.
Could you provide an example of quantifying semantics? For example, how much semantic information is in the sentence "My house is big"? And what are the units of semantic information?
That's because it's tiresome to constantly have to be repeating the same thing over and over again.
Imagine how tiresome it is to have to actually explain something over and over again. You should try that for a change!
There you see. This is a prime example why this discussion is getting boring. You totally and completelly misunderstood me. You don't know what I meant by the word statistical. No the approach that statisticians use!
I meant the number of entities in a system. For an example, the number of bits in information used to convey a message.
By statistics you mean the number of bits required to convey a message? Statistics is the realm of probabilities and so forth. At heart the number of bits required to convey one message from a finite set of messages is deterministic and neither statistical nor probabilistic. Information theory can be very statistical, but not for this very simple portion of it.
I've presented you calculations of the number of bits required to transmit a message several times, and you should address yourself to these calculations since that's what you claim you're talking about. The example is one of a gene of 3 alleles experiencing a mutation to then have 4 alleles. The message set for that gene has grown from 3 to 4, and the number of bits necessary to communicate a message from that message set has changed in this way:
log₂3 = 1.585 bits
log₂4 = 2 bits
2 bits - 1.585 bits = 0.415 bits
Information has increased by 0.415 bits
Again, you misunderstood me. That's why this discussion is boring. I said that there are no cases of mutations producing new biological functions.
If you're going to issue complaints like this be sure you're looking in a mirror when you make them. What you said was:
Oh, you mean that D appears. Well, in that case, such a thing has never been observed.
I can only go by what you said, which was that D appearing (a new mutation appearing) has never been observed.
In trying to understand someone, one usually tries harder to make sense of people who have a history of saying sensible things. But you haven't been making much sense here, nor explaining very much, and in another thread you're arguing for geocentrism. So when you appear to be saying something nonsensical, like that new mutations have never been observed, you've got to expect that people will assume you meant precisely what you appeared to be saying.
In other words, when you build a reputation for saying outlandish things, don't expect that people will be spending much effort looking for sense in the nonsense.
Still bored?
--Percy
Edited by Percy, : Fix quote.
Edited by Percy, : Got a name wrong, "Werner" => "Spetner"

This message is a reply to:
 Message 214 by Smooth Operator, posted 08-02-2009 12:24 PM Smooth Operator has replied

Replies to this message:
 Message 259 by Smooth Operator, posted 08-03-2009 8:41 PM Percy has replied

Percy


Message 255 of 315 (518029)
08-03-2009 7:38 PM
Reply to: Message 252 by Wounded King
08-03-2009 5:08 PM


Re: Cliche understanding failure
Wounded King writes:
Or if you prefer to be the kicker, it would be unfair surely to adjust your opponents goal posts to be twice as wide apart as your own?
I did a double take. Where I come from the opponents would be delighted to have the goals widened.
--Percy
Edited by Percy, : Grammar.

This message is a reply to:
 Message 252 by Wounded King, posted 08-03-2009 5:08 PM Wounded King has not replied

Copyright 2001-2023 by EvC Forum, All Rights Reserved
