Author Topic:   Irreducible Complexity and TalkOrigins
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 9 of 128 (434455)
11-15-2007 8:44 PM
Reply to: Message 6 by TheWay
11-15-2007 12:33 PM


Tell Us How You're Quantifying "Information"
First, tell us how you're measuring how much "information" there is in the genome.
I think this is a good time to bring out the probability of survival in a mutant that has had a positive mutation, assuming there is such a thing as a positive mutation as described by the Neo-Darwinian Theory. A mutant must have an above-average offspring survival rate, resulting in a higher selective value. The higher the selective value, the higher the chance that the mutant will survive to take over a population. Am I missing something?
That's more or less right.
So he suggests that even if the positive mutation creates a positive value for natural selection to occur, it only minutely raises the odds that it will be selected and further take over the population.
No ... not really.
The difference between a beneficial and a harmful mutation is like the difference between a guy with one lottery ticket and one with 10,000 lottery tickets. Neither of them is likely to win, but the odds overwhelmingly favor the guy with 10,000 tickets over the guy with one.
There is more; however, it seems that odds such as these are very slim, and very slim to happen around the 500 times that G. Ledyard Stebbins (Processes of Organic Evolution, 1966) predicted it would take to gain a new species.
The odds are slim for any particular mutation to achieve fixation. But there are a lot of mutations.
Spetner doesn't seem to put much stock in any other model of evolution other than copying errors to produce random mutations. He believes the other ways such as transposition fail to conclusively show their true randomness.
Transposition is a copying error.
For now though, how many copying errors would it take to produce an active cumulative evolution? Also, how much information would need to be added with each corresponding mutation?
It would be nice if any of that meant anything, but I'm afraid that it doesn't. See my remarks about "information".
What you mean by "how many copying errors would it take to produce an active cumulative evolution" I cannot guess.
In this divergence has it been shown to increase the amount of information in the genome? Also, what amount of divergence has been seen in a natural setting without the use of engineering? Also, is the embryo the only known place these can occur outside of physical tampering? How scientific would it be to assume that this process can create the necessary level of complexity we see in an IC system and even the amount of information prior to its subsequent devolution?
"Information", again.
Are there any documented cases where a change causes a positive effect?
Yes.
Is the smallest change restricted to a single nucleotide?
Yes.
Also, is it that we know a change has no effect ...
Yes.
How probable is this in a natural setting and how likely, by your best estimates, is this to occur in a natural setting? Your examples make me wonder how much "evolution" is to credit and how much adaptation resulting from prior information in the genome is to credit?
I think you'll find that evolution is to be credited for evolution, and that adaptation is evolution.
Are you saying we shouldn't be skeptical of unobserved phenomena that claims itself a scientific fact?
That would depend on the degree of evidence for the unobserved phenomena. I, for example, am not skeptical of electrons, though they are unobserved, but I am skeptical of flying pigs (also unobserved).
Evidences of processes that could have well been established by a designer and interpreted poorly by the design ...
... have yet to be produced.

This message is a reply to:
 Message 6 by TheWay, posted 11-15-2007 12:33 PM TheWay has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 15 of 128 (436029)
11-24-2007 6:52 AM
Reply to: Message 10 by TheWay
11-23-2007 7:15 PM


Re: A few questions...
First I would very much like to hear what you think about genetic information. What is it? How can it be measured? Can it be measured?
That's what we asked you.
This is what BioPortal has as a definition for genetic information:
Trouble is, it includes the word "information". And it does not tell us how to quantify it.
I am unsure of why there is a controversy over "information" in the genome.
Because the ID crowd throw the word about without telling us how to quantify it.
However, I will give it issue and use what is in Spetner's book to try to explain what he thinks it is, and what I think he has related to me.
Still doesn't tell us how to quantify it.
This is, from what I can gather, Spetner's best definition of information storage as relating to the genotype.
Right, that's Shannon information. In which case any mutation which increases the size of the genome increases the amount of information in it; and any point mutation leaves the amount of information constant. In which case the creationist stuff about mutations and information is false.
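For concreteness, here is a minimal sketch of that observation, counting Shannon-style information as two bits per base (four equally likely symbols). The sequences are made up for illustration.

```python
# A minimal sketch, assuming the "two bits per base" measure described above.
# The genome string is invented for illustration.

def shannon_bits(genome):
    return 2 * len(genome)                    # log2(4) = 2 bits per base

genome = "ACGTACGTACGT"

inserted = genome[:6] + "T" + genome[6:]      # an insertion lengthens the genome
pointed  = genome[:6] + "A" + genome[7:]      # a point mutation keeps the length

print(shannon_bits(genome), shannon_bits(inserted), shannon_bits(pointed))
# 24 26 24 -- the insertion adds information by this measure,
# the point mutation leaves it unchanged
```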
Resistance of bacteria to antibiotics and of insects to pesticides. He states, "Some bacteria have built into them at the outset a resistance to some antibiotics." This resistance results from an enzyme that makes the drug inactive and this doesn't build up through mutation. He then cites J. Davies as proposing that the purpose of this particular type of enzyme had a completely unrelated primary function. Basically, a lucky side effect. He also cites a study done on antibiotics and how they are the natural products of "certain fungi and bacteria." Which we should expect to see some natural resistance to. Also non resistant bacteria can become resistant by picking up a resistant virus and the virus may have picked up the gene from a naturally resistant bacteria. Apparently, scientists can genetically modify organisms to become resistant.
Have a look at this experiment.
I think the similarity is good evidence of design
But why should the degrees of similarity be exactly in line with the predictions of evolution?
This article may help to clarify the issues.
Also, could you comment on how things like the sonar-like systems in bats and whales are similar yet aren't related to a common ancestor in the sense that these would have to have developed in both populations randomly?
This is exactly why scientists distinguish between analogy and homology.
Like UFO's? Some could argue using the same reasoning. It's a bit of a stretch, but hey might as well point it out.
The difference is that the observations supporting evolution are reproducible. We can go and look at the fossils or the DNA or the morphology of living creatures again and again and again; it's not like someone saying "I saw a flying saucer but it's gone now."
I just don't understand how randomness can create such highly organized complexity.
It didn't. Natural selection was also involved.
I think simple intuition is the greatest evidence for creation.
It is. It's also the greatest evidence for the Earth being flat.
I know I skipped some stuff, I wanted to focus on the information aspect (not sure I did such a good job). Maybe there are some resources I can pick up on information theory that would broaden my involvement in this topic.
Here is Claude Shannon's original paper on the subject --- this is where it all starts.

This message is a reply to:
 Message 10 by TheWay, posted 11-23-2007 7:15 PM TheWay has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 16 of 128 (436031)
11-24-2007 6:54 AM
Reply to: Message 11 by NosyNed
11-23-2007 7:37 PM


Re: Point mutations and information
But this seems enormously unlikely. For example: If you had a string in your DNA that went AAAAAAAAAAAAAAAA then it contains only (about - I'm not doing the calc.) 6 bits of information. (2 bits to pick the letter out of 4 and 4 more bits to give the number of repeats). If, however, there is a point mutation that puts a T in place of the 8th A we now have (I think) 15 bits of information (2 bits for the first letter, 3 bits for its repeat length, then 2 more for the T, 1 bit for its repeat, 2 more for the next A and 3 more for its repeat length).
That would be Kolmogorov complexity. And yes, mutations can increase it, as you point out.
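For what it's worth, a short sketch of NosyNed's back-of-envelope accounting: charge each run two bits for the letter plus roughly enough bits for its repeat count. The exact charges are approximate, as he says; the only point is that the mutated string needs a longer description.

```python
# A rough re-run of the bit accounting above; the way repeat counts are charged
# (ceil(log2(n)), minimum 1 bit) is an assumption, not a rigorous measure.

from math import ceil, log2

def run_lengths(s):
    """Split a string into (letter, repeat_count) runs."""
    runs, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        runs.append((s[i], j - i))
        i = j
    return runs

def rough_bits(s):
    """2 bits per letter choice, plus bits to state each repeat length."""
    return sum(2 + max(1, ceil(log2(n))) for _, n in run_lengths(s))

print(rough_bits("AAAAAAAAAAAAAAAA"))   # 6  -- one run of sixteen A's
print(rough_bits("AAAAAAATAAAAAAAA"))   # 13 -- three runs: 7 A's, 1 T, 8 A's
```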

This message is a reply to:
 Message 11 by NosyNed, posted 11-23-2007 7:37 PM NosyNed has replied

Replies to this message:
 Message 18 by NosyNed, posted 11-24-2007 9:48 AM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 19 of 128 (436074)
11-24-2007 10:55 AM
Reply to: Message 18 by NosyNed
11-24-2007 9:48 AM


Re: Kolmogorov vs Shannon
Maybe but I think it is also directly related to Shannon information too.
No, not really. Consider that we could easily come up with a finite Turing machine that generates pi (which contains an infinite amount of Shannon information) which is smaller than any Turing machine that produces some sufficiently large finite number.
How is KC calculated?
As a matter of fact, it's incomputable in the general case.
However, it is computable if the string is finite (as in genetics) and given a particular description language. You'd just have to try every description that's shorter than the explicit description of the given finite string, and see if any of them returns that string.
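A rough illustration of that brute-force idea, with a heavy caveat: the "description language" below is a toy run-length code in which every description halts, which is what makes the search terminate. Genuine Kolmogorov complexity over a universal language cannot be computed this way, because some candidate programs never halt.

```python
# Toy illustration only: descriptions are lists of (base, count) runs, and the
# search enumerates every well-formed description that could yield the target,
# keeping the cheapest one that decodes back to it.

from itertools import product

def decode(runs):
    """A description [(base, count), ...] decodes to base*count concatenated."""
    return "".join(base * count for base, count in runs)

def description_length(runs):
    """Cost of a description: one character per base plus the digits of each count."""
    return sum(1 + len(str(count)) for base, count in runs)

def shortest_description(target):
    """Brute force over every way of cutting `target` into runs; keep only the
    segmentations that decode back to `target`, and return the cheapest."""
    best = None
    for cuts in product([False, True], repeat=len(target) - 1):
        runs, start = [], 0
        for i, cut in enumerate(cuts, start=1):
            if cut:
                runs.append((target[start], i - start))
                start = i
        runs.append((target[start], len(target) - start))
        if decode(runs) == target:
            cost = description_length(runs)
            if best is None or cost < best[0]:
                best = (cost, runs)
    return best

print(shortest_description("AAAAAAAAAAAAAAAA"))  # (3, [('A', 16)])
print(shortest_description("AAAAAAATAAAAAAAA"))  # (6, [('A', 7), ('T', 1), ('A', 8)])
```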

This message is a reply to:
 Message 18 by NosyNed, posted 11-24-2007 9:48 AM NosyNed has replied

Replies to this message:
 Message 20 by NosyNed, posted 11-24-2007 11:03 AM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 23 of 128 (436090)
11-24-2007 12:20 PM
Reply to: Message 20 by NosyNed
11-24-2007 11:03 AM


Re: Shannon ?
So is there or is there not a specification for the Shannon info in AAAAAAAAAAAAAAAA and AAAAAAATAAAAAAAA?
What is the value of each?
They both contain 32 bits of information. To calculate that, you just have to multiply the length of the genome by 2.
And while we are at it can you calculate the difference in KC between the two strings above?
Strictly speaking, I should say "only relative to a given description language, which you haven't actually specified". There is no absolute value of Kolmogorov complexity without a specified description language.
But if we're not being technical and pernickety, the second string has more Kolmogorov complexity than the first, because in any reasonable description language (English, for example) you can give a briefer description of the first than the second.

This message is a reply to:
 Message 20 by NosyNed, posted 11-24-2007 11:03 AM NosyNed has replied

Replies to this message:
 Message 24 by NosyNed, posted 11-24-2007 1:34 PM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 25 of 128 (436134)
11-24-2007 2:34 PM
Reply to: Message 24 by NosyNed
11-24-2007 1:34 PM


Re: Shannon ?
I am pretty sure that is wrong.
Why?
It's the logarithm, to the base 2, of the number of potential strings of that length.
16 bases, each with four potential values.
32 bits of information.
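Spelled out, that arithmetic is just:

```python
# The calculation above, spelled out.
from math import log2

print(log2(4 ** 16))    # 32.0 -- log base 2 of the number of possible 16-base strings
print(16 * log2(4))     # 32.0 -- equivalently, 2 bits per base
```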

This message is a reply to:
 Message 24 by NosyNed, posted 11-24-2007 1:34 PM NosyNed has replied

Replies to this message:
 Message 26 by NosyNed, posted 11-24-2007 3:13 PM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 28 of 128 (436174)
11-24-2007 3:44 PM
Reply to: Message 26 by NosyNed
11-24-2007 3:13 PM


Re: Shannon ?
Percy comments here that a random stream of bits has more information than others of the same length. If that is true then the two given strings do not have the same information content.
I believe you have to take the logn of the minimum number of bits required to transmit the stream not a redundant set of bits.
Well now you need to present a definition of information.
If information is the number of bits, then the information in a genome is twice the number of bases.
If you're talking about Kolmogorov complexity, then it isn't.

This message is a reply to:
 Message 26 by NosyNed, posted 11-24-2007 3:13 PM NosyNed has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 35 of 128 (436367)
11-25-2007 10:31 AM
Reply to: Message 26 by NosyNed
11-24-2007 3:13 PM


Re: Shannon ?
I believe you have to take the logn of the minimum number of bits required to transmit the stream not a redundant set of bits.
But that would be relative to a particular description language, and we'd be back to Kolmogorov complexity.

This message is a reply to:
 Message 26 by NosyNed, posted 11-24-2007 3:13 PM NosyNed has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 38 of 128 (436436)
11-25-2007 7:26 PM
Reply to: Message 37 by NosyNed
11-25-2007 3:27 PM


Re: Thank you. I'm slow
Now can someone try again to explain how a point mutation always reduces the information?
Well, as your tone suggests, it doesn't.
However you define and quantify information, if some point mutation reduced the information, then the opposite point mutation would of course increase it.
Every mutation has its opposite. An insertion can be undone by a deletion, a point mutation by a point mutation, an inversion by an inversion, a frame shift by a frame shift, and so on.
And however you choose to measure information, if one mutation reduces the information in the genome, the opposite mutation must increase it. Otherwise you could have two identical DNA sequences containing different amounts of information, which would be nonsense.

This message is a reply to:
 Message 37 by NosyNed, posted 11-25-2007 3:27 PM NosyNed has not replied

Replies to this message:
 Message 39 by molbiogirl, posted 11-25-2007 7:29 PM Dr Adequate has replied
 Message 45 by TheWay, posted 12-01-2007 12:17 PM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 40 of 128 (436462)
11-25-2007 10:37 PM
Reply to: Message 39 by molbiogirl
11-25-2007 7:29 PM


Thanks
I've never heard it put that way before.
That's because I thought up this line of argument myself.
* looks smug *

This message is a reply to:
 Message 39 by molbiogirl, posted 11-25-2007 7:29 PM molbiogirl has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 46 of 128 (437781)
12-01-2007 1:31 PM
Reply to: Message 45 by TheWay
12-01-2007 12:17 PM


Mutations and information, part 2
What would constitute an opposite point mutation?
Another point mutation. (See below.)
If a point mutation from A to T at a particular point on a genome reduced information, then the mutation back from T to A at the same point would be an increase in information.
Also, by this logic we could assume that no information would ultimately be removed or added.
Sharp.
In that case, though, a deletion mutation in particular would not remove information. In which case any series of deletion mutations would not remove information. In which case there is no information in any genome. In which case it is not an objection to evolution to say that mutations can't create information, since there isn't any information.
Spetner's main idea, IMO, is that complexity as we see in various organisms have required information that the idea of the ToE cannot supply through mutations. Complexity requires complex information, which has not yet been shown to have been accumulated through natural processes, as the NDT had imagined.
Then Spetner is wrong.
Have a look here. Where does the "information" in these RNA species come from?
Could you elaborate? Or supply some material, I can't find anything.
Well, say you have a chunk of genome that goes
...CAG ACC GGT CGC...
and it undergoes a point mutation so that it now reads
...CAG ACC CGT CGC...
then if the second genome now has less information than the first, it follows that a point mutation from
...CAG ACC CGT CGC...
to
...CAG ACC GGT CGC...
would constitute an increase in information.
In the same way, if an insertion (and frame shift) from
...CAG ACC GGT CGC...
to
...CAG ACT CGG TCG C...
is a decrease in information, then a deletion (and frame shift) from
...CAG ACT CGG TCG C...
to
...CAG ACC GGT CGC...
would be an increase in information.
Since a mutation can be undone by another mutation, it follows that if there are mutations that decrease "information" (whatever that may be) then there are also mutations that increase it.
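A small sketch of that point, using the sequences above (spaces removed): each mutation below is undone by its opposite, so any information measure that depends only on the sequence has to give the reverted genome exactly its original value. The helper functions are illustrative, not any standard API.

```python
# Minimal sketch of "every mutation has an opposite" on the sequences quoted above.

def point_mutation(seq, pos, base):
    """Replace the base at `pos` with `base`."""
    return seq[:pos] + base + seq[pos + 1:]

def insertion(seq, pos, base):
    """Insert `base` before position `pos` (frame-shifting everything after it)."""
    return seq[:pos] + base + seq[pos:]

def deletion(seq, pos):
    """Delete the base at `pos` (the opposite of the insertion above)."""
    return seq[:pos] + seq[pos + 1:]

original = "CAGACCGGTCGC"                      # ...CAG ACC GGT CGC...

mutated  = point_mutation(original, 6, "C")    # ...CAG ACC CGT CGC...
reverted = point_mutation(mutated, 6, "G")     # the opposite point mutation
assert reverted == original

inserted = insertion(original, 5, "T")         # ...CAG ACT CGG TCG C... (frame shift)
restored = deletion(inserted, 5)               # the opposite deletion
assert restored == original

print("both round trips give back the original genome")
```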

This message is a reply to:
 Message 45 by TheWay, posted 12-01-2007 12:17 PM TheWay has replied

Replies to this message:
 Message 49 by TheWay, posted 12-02-2007 6:25 PM Dr Adequate has replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 47 of 128 (437792)
12-01-2007 2:13 PM


Mutations and information, part 3
Now let's look at it another way.
Since there are mutations that will change any base anywhere in the genome to any other base, and since there are mutations that will lengthen or shorten the genome, and since there are mutations that will change chromosome number, it follows that there is a sequence (indeed, an infinite number of sequences) of mutations that will change any given genome into any other given genome. For example, the genome of a monkey into the genome of a man.
If none of these mutations increases "information" (whatever that is) it follows that evolution can get on just fine without ever increasing "information", and, indeed, we are in the case you mentioned earlier, in which "information" is conserved by mutation and hence there is no "information" in any genome.
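As a toy illustration of that step, here is one deliberately blunt recipe of single-base mutations that turns one genome string into another. The strings are invented stand-ins, and nothing about the order of steps is meant to resemble a real evolutionary path; the only point is that such a sequence of mutations exists.

```python
# Toy sketch with made-up sequences; a blunt edit script, not a model of evolution.

def mutation_path(source, target):
    """Yield successive genomes, one mutation apart, from `source` to `target`."""
    genome = source
    # Point-mutate the shared prefix, base by base, wherever it differs.
    for i in range(min(len(genome), len(target))):
        if genome[i] != target[i]:
            genome = genome[:i] + target[i] + genome[i + 1:]
            yield genome
    # Then delete any surplus bases, or insert any missing ones, at the end.
    while len(genome) > len(target):
        genome = genome[:-1]
        yield genome
    while len(genome) < len(target):
        genome = genome + target[len(genome)]
        yield genome

start  = "CAGACCGGTCGC"      # invented stand-ins, not real genomes
finish = "CAGTCCGGACGCAT"

last = start
for last in mutation_path(start, finish):
    print(last)
assert last == finish
```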

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 51 of 128 (438117)
12-02-2007 8:47 PM
Reply to: Message 49 by TheWay
12-02-2007 6:25 PM


Re: Mutations and information, part 2
Are you not re-defining information from the abstract? If I laid out a 1 1 1 1 (four ones) sequence we could say that it is information because we can add these up to get something new like the number four. So abstractly a sequence can manifest a certain idea. Also, we can attach meaning to each (1) number and have it be represented accordingly. Furthermore, without a standard, we could no more say that it isn't information than that it is information.
As far as I know, the phenotype reflects the genotype and is an expression thereof. So, in the abstract, I do not understand how you can say that there is no information other than redefining information to fit the evolutionary model.
You have missed my point so thoroughly that I can't figure out what you think my point was.
To recapitulate: you asked what I would say to someone who maintained that mutations always conserve information.
My reply was that if this is true, then since deletion is a form of mutation, it would follow that we could delete as much of the genome as we like (or all of it!) without destroying any information.
But this is possible only if there wasn't any information in the genome to start with.
Now, this does not involve me "redefining information" in any way: I am simply showing the logical consequences of any definition of "information" such that mutations conserve information.
Note also that the conclusion that no genome contains information is by no means part of "the evolutionary model". I didn't claim that. But someone who claimed that all mutations conserve information would be forced by logic and the existence of deletion mutations to claim exactly that. Implicitly, someone who claims that mutations conserve information is claiming that.
---
We might state the result more formally.
Given any measure of genetic information such that two identical genomes contain the same amount of genetic information and such that the "null genome" (i.e. the "genome" consisting of no bases whatsoever) contains no genetic information, it follows that either no genome contains any information, or that it is possible for a mutation to increase information.
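One way to set that result out symbolically (the notation and the insertion-by-insertion construction are illustrative devices, not taken from the post):

```latex
% Illustrative formalisation.  Let $I \ge 0$ be any measure of genetic
% information that depends only on the base sequence, with $I(\varepsilon) = 0$
% for the null genome $\varepsilon$.  Suppose no mutation increases $I$.
% Any genome $g = b_1 b_2 \cdots b_n$ can be built from $\varepsilon$ by $n$
% successive insertions, each of which is a mutation, so
\[
  I(g) \;\le\; I(b_1 \cdots b_{n-1}) \;\le\; \cdots \;\le\; I(b_1) \;\le\; I(\varepsilon) = 0 ,
\]
% and hence $I(g) = 0$: no genome contains any information.  Otherwise, some
% mutation must be able to increase $I$.
```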
That was a tough paper to read.
You seem to have grasped it all right, though.
Is that cool or what?
More later, must eat pizza.

This message is a reply to:
 Message 49 by TheWay, posted 12-02-2007 6:25 PM TheWay has not replied

  
Dr Adequate
Member (Idle past 315 days)
Posts: 16113
Joined: 07-20-2006


Message 52 of 128 (438123)
12-02-2007 9:19 PM
Reply to: Message 50 by TheWay
12-02-2007 6:31 PM


Information
I doubt that Spetner knows if no one here knows.
Well, there are plenty of ways to measure information. For example, we could just measure the number of bits. That's easy, it's just twice the length of the genome. Or we could calculate the Kolmogorov complexity --- tricky, but feasible if you have the entire genome mapped.
But when a creationist says that "mutations only decrease information", he will never tell you how he's measuring information, 'cos if he gave any particular measurement, it would be trivial to prove him wrong. An insertion mutation, for example, would increase the number of bits; a duplication followed by a point mutation would increase Kolmogorov complexity; and so forth.
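A quick check of those two claims with toy sequences: raw bit count for the insertion, and zlib-compressed length as a stand-in upper bound for Kolmogorov complexity for the duplication followed by a point mutation. The sequences and positions are made up for illustration.

```python
# Toy check; zlib gives only an upper bound on Kolmogorov complexity, which is
# enough to compare a repetitive sequence with its mutated descendant.

import zlib

def bits(genome):
    return 2 * len(genome)                    # two bits per base

def kc_proxy(genome):
    return len(zlib.compress(genome.encode(), level=9))

genome = "ACGTACGGTTCA" * 50                  # a repetitive 600-base toy genome

inserted = genome[:300] + "G" + genome[300:]
print(bits(genome), "->", bits(inserted))     # the insertion raises the bit count

duplicated = genome + genome[200:260]                  # duplicate a 60-base stretch
mutated = duplicated[:630] + "T" + duplicated[631:]    # then point-mutate inside the copy
print(kc_proxy(genome), "->", kc_proxy(mutated))
# the compressed length goes up: the mutated copy can no longer be described
# simply as another repeat of the same pattern
```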

This message is a reply to:
 Message 50 by TheWay, posted 12-02-2007 6:31 PM TheWay has not replied

  