Author Topic:   What is an ID proponent's basis of comparison? (edited)
Smooth Operator
Member (Idle past 5142 days)
Posts: 630
Joined: 07-24-2009


Message 144 of 315 (516847)
07-27-2009 4:57 PM
Reply to: Message 141 by Richard Townsend
07-27-2009 4:25 PM


quote:
Thanks for explaining your thinking on this. I think you are misinterpreting the theorems. The theorems apply when considering a search across the space of ALL possible cost functions. They don't rule out more effective algorithms across narrower scopes than this.
That's obvious. But that means that this algorithm has been optimized for that kind of search.
quote:
Secondly, you're wrong to say that random search can create no information. The search for your keys, for example, would create information about the location of your keys even if it were purely random.
In fact, randomness (as you know) is a key element in many evolutionary algorithms. It's not something we want to get rid of.
It didn't create information; you had to create it by finding the keys. If you actually found your keys on the first try every single time by searching randomly, now that would be creating information from nothing. The fact itself that you are searching means you have no information, so you have to create it by searching.
If you knew where the keys were, you wouldn't be searching in the first place, right?
quote:
I don't know much about the CSI concept. Does it have this non-material / non algorithmic element built into the definition of it?
No, it doesn't. I explained it a while back.

This message is a reply to:
 Message 141 by Richard Townsend, posted 07-27-2009 4:25 PM Richard Townsend has replied

Replies to this message:
 Message 147 by Richard Townsend, posted 07-27-2009 5:30 PM Smooth Operator has replied
 Message 188 by kongstad, posted 07-30-2009 9:22 AM Smooth Operator has replied



Message 159 of 315 (516978)
07-28-2009 3:29 PM
Reply to: Message 145 by Perdition
07-27-2009 5:13 PM


quote:
But you know what? Improbable things happen all the time, and the probability often depends on how one looks at it. Until you can provide a mathematical formula for this probability, then apply the formula to something specific, then show why the probability becomes zero (which it must, otherwise you're admitting it is possible for the thing to happen) you have nothing.
Yes, that's true. It will happen sometimes, but most of the time it won't, and on average it won't either. That's what the NFL theorem says too.
quote:
Yes it can. In fact, it often does. Show me how it can't. If you see that no keys are found on your ceiling, why would you look on the ceiling in another house? After looking at many houses, and finding that no keys are ever found on the ceilings in any house, doesn't that make it less probable that keys will be found on the ceilings of the next house? Doesn't this information come from the random first process, refined through subsequent iterations?
Maybe because they could be there? But again, that's not the point. What you are doing is using prior information. I said that you should use the exact same process, unmodified. That's what the NFL is about.
So let's say you have a house with 9 rooms, and all rooms are the same size. You don't know where the keys are. You make an algorithm that searches the rooms in this order:
7, 6, 4, 2, 1, 8, 5, 9, 3.
And after you find the keys, you go to another 9-room house. Now the question is, will this exact same path give you better results than random chance, averaged over all houses? The answer is no, it won't. Neither will any other path. They have, on average, the same chance. Because in some houses the keys will be in room 4, and in some houses they will be in room 3. So in some houses you will get there sooner, and in some houses you will get to the keys later. And on average it will be the same as a random search.
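The averaging claim in this example can be checked with a short simulation (a sketch in Python, purely illustrative; the 9 rooms and the fixed search order are taken from the example above, and the helper name `cost` is made up). Averaged over every possible key location, the fixed path and a fresh random order per house open the same expected number of rooms:

```python
import random

ROOMS = list(range(1, 10))                 # 9 equally sized rooms
FIXED_PATH = [7, 6, 4, 2, 1, 8, 5, 9, 3]   # the fixed order from the example

def cost(path, key_room):
    """Number of rooms opened before the keys are found."""
    return path.index(key_room) + 1

# Average cost of the fixed path over all possible key locations.
fixed_avg = sum(cost(FIXED_PATH, k) for k in ROOMS) / len(ROOMS)

# Average cost of a fresh random order per house, estimated over many trials.
rng = random.Random(0)
trials = 100_000
total = 0
for _ in range(trials):
    key = rng.choice(ROOMS)
    path = ROOMS[:]
    rng.shuffle(path)
    total += cost(path, key)
random_avg = total / trials

print(fixed_avg)     # 5.0, i.e. (1 + 2 + ... + 9) / 9
print(random_avg)    # close to 5.0
```

Any other fixed permutation of the 9 rooms gives the same 5.0 average, which is the point being made here: once you average over all possible key placements, no fixed ordering beats random search.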
quote:
Yes, so the first random search generated information you could apply to the second house. How can you say you wouldn't do any better? You can eliminate search options because of the first search, thus making it take less time to exhaust all possibilities in the second.
I said you can't use that info.
quote:
It doesn't have to always work, it only has to work more often than not. And then when I find a new situation for which it doesn't work, the final solution gets factored into my new "search information."
But it won't work more often. On average it will perform the same.
quote:
Why do you think it has to be better in all cases? It only has to be better in most for it to be a worthwhile algorithm to use. There may be a better way in one instance, and in fact, we can often come up with better ways to design things in nature than the way they turned out because the process isn't perfect. That's my point.
Exactly, but to design them you have to have prior information.
quote:
Yes, but that prior information was generated by the first random search, and then gets incorporated. Thus, information can arise out of a random process. Once you get information, all you have to do is add to it.
It can't arise from a random process, because you are doing that process and checking the rooms. You are creating information.
quote:
No, but they're similar enough for a process created in one to be a benefit in another.
But again, that is a prior assumption about other houses.
quote:
All car models are slightly different, but I don't have to learn how to drive each type of car individually. I can learn on one, and apply the knowledge from that to the others.
True, but you can't apply that knowledge to all transportation methods, like flight. You can apply it to cars because they are similar. But that again means you made a prior assumption about which kind of transport you will take.
quote:
No, in that case, it won't help in that one instance, but after that one instance, you've learned something more, and expanded the circumstances under which your process will now work. It adapts to a new environment you might say.
But that is then prior knowledge for the third search.
quote:
Right, so he starts at square one again, and starts with nothing, then builds a process for all experiences that are similar to this new one. Given enough time, you'll experience enough different sets of circumstances to have a process in your repertoire to deal with just about any subsequent experiences.
Yes, but for that it takes time and trials. And on every trial you build new information. So what the NFL says is that you can't make an algorithm and, without any trials, apply it to all sequence spaces with better-than-average results.

This message is a reply to:
 Message 145 by Perdition, posted 07-27-2009 5:13 PM Perdition has replied

Replies to this message:
 Message 165 by Perdition, posted 07-28-2009 4:52 PM Smooth Operator has replied



Message 160 of 315 (516979)
07-28-2009 3:30 PM
Reply to: Message 147 by Richard Townsend
07-27-2009 5:30 PM


quote:
Think this through. I'm saying that the search creates the information - clearly it does, because we know something at the end we didn't at the beginning. This meets the Shannon definition of information (decrease in uncertainty of a receiver). The same information is created no matter how we get there. You almost acknowledge that in your paragraph above - see last sentence.
But you are the one searching. You made the search. You put in the information about where you are going to search.

This message is a reply to:
 Message 147 by Richard Townsend, posted 07-27-2009 5:30 PM Richard Townsend has not replied



Message 161 of 315 (516980)
07-28-2009 3:38 PM
Reply to: Message 151 by Percy
07-27-2009 9:02 PM


quote:
And you somehow know this without a copy of the program?
Because ALL algorithms work like that. Even simple calculators. Can they give you a number that was not programmed into them? No, obviously not.
quote:
Genetic algorithms work in the same way as evolution. They're simply computational models of evolution. The reason you deny the possibility of random mutations is because they create new information, and the random mutations generated by genetic algorithms create new information in the same way. But GA's are not the topic of this thread. You should probably propose a new thread if that's what you want to talk about.
But genetic algorithms in a simulation were designed to find the specified target. Evolution in real life has no knowledge of what it is looking for. And what kind of information are you talking about? Are you saying that random mutations are able to produce Shannon information? Yes, that is true. But not CSI.
quote:
What you describe has never been observed to happen. When under antibiotic stress the bacteria that survive experience a wide variety of different mutations. The bacteria that happened to receive resistance-conferring mutations survive and pass these mutations on to the next generation. It's the familiar process of descent with modification followed by selection of the organisms that will contribute to the next generation.
Actually we are talking about the same thing, only in different terminology.
quote:
No, I had it right, but I was insufficiently clear. The article "it" referred to the genetic repair mechanism, not the LexA.
Then why does my article say that LexA has to be turned on for resistance to be acquired?

This message is a reply to:
 Message 151 by Percy, posted 07-27-2009 9:02 PM Percy has replied

Replies to this message:
 Message 166 by Parasomnium, posted 07-28-2009 5:04 PM Smooth Operator has replied
 Message 170 by Percy, posted 07-29-2009 4:00 AM Smooth Operator has replied



Message 162 of 315 (516981)
07-28-2009 3:45 PM
Reply to: Message 152 by Percy
07-27-2009 9:29 PM


quote:
And what two algorithms are you comparing in the bacteria?
A random search and an evolutionary algorithm.
quote:
But creating new information is precisely what random chance does. Here's an example.
Consider a specific gene in a population of bacteria that has three alleles we'll call A, B and C. For lurkers not familiar with the term, alleles are variants of a single gene. One familiar example is eye color. The eye color gene has several alleles: brown, blue, green, etc. Human eye color depends upon which one you happen to inherit. Eye color isn't really this simple of course, but this hopefully gets the idea of alleles across.
So every bacteria in the population has either the A allele, the B allele or the C allele. We can calculate how much information is required to represent three alleles in this bacterial population. It's very simple:
log₂(3) ≈ 1.585 bits
Now a random mutation occurs in this gene during replication and the D allele appears. Through the following generations it gradually spreads throughout the population and becomes relatively common. There are now four alleles for this gene, A, B, C and D. The amount of information necessary to represent four alleles is:
log₂(4) = 2 bits
The amount of information required to represent this gene in the bacterial population has gone from 1.585 to 2 bits, an increase of .415 bits, and an example of random chance increasing information.
You are wrong because you are using the wrong definition of information. All of the necessary information was already there. Natural selection just selected it and copied it at the expense of others. But the information for the specific genes was already there; evolution didn't create them. It just spread them.
And again, that is what Shannon would call information. The sheer number does not represent biological information. You have to represent biological functions. If you just multiply the number of genes, you still have the same biological functions, only more genes, which don't give you any new functions.
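The two logarithms quoted above are easy to reproduce with a short sketch (Python here purely for illustration; the helper name `bits_to_represent` is made up). It computes the Shannon-style count used in the quote: the number of bits needed to name one of n equally likely alleles.

```python
import math

def bits_to_represent(n_alleles: int) -> float:
    """Bits needed to distinguish n equally likely alleles: log2(n)."""
    return math.log2(n_alleles)

three = bits_to_represent(3)   # alleles A, B, C
four = bits_to_represent(4)    # after allele D appears

print(round(three, 3))         # 1.585
print(round(four, 3))          # 2.0
print(round(four - three, 3))  # 0.415
```

Note that this is exactly the disputed point in the reply: the calculation measures only the size of the message set, not anything about biological function.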
quote:
Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism.
You can't take Shannon's information and apply it to biological information, because it only concerns itself with the statistical aspect of information. It still has to take into account syntax and semantics.

This message is a reply to:
 Message 152 by Percy, posted 07-27-2009 9:29 PM Percy has replied

Replies to this message:
 Message 171 by Percy, posted 07-29-2009 4:33 AM Smooth Operator has replied



Message 163 of 315 (516982)
07-28-2009 3:48 PM
Reply to: Message 155 by Rrhain
07-28-2009 4:48 AM


quote:
But the E. coli experiment I have often described here proves that to be false. If what you say is true, then the entire lawn should act as one: Either the entire lawn survives or the entire lawn dies. Because the entire lawn is descended from a single ancestor. Any genetic "information" in the lawn is present in all of them for the only source of "information" was from that single ancestor.
No, because that information was copied. Copied information is not new semantic information.
quote:
Therefore, either the entire lawn is immune to T4 phage or the entire lawn is susceptible and dies.
Instead, what we see is that while much of the lawn dies, some colonies remain alive.
Therefore, new information must be in the colonies that survived.
Your claim of no information necessarily means that if one can do it, then all of them can do it. But instead, we see that some can and some can't.
Therefore, new information must be present.
Resistance is acquired by loss of information.

This message is a reply to:
 Message 155 by Rrhain, posted 07-28-2009 4:48 AM Rrhain has replied

Replies to this message:
 Message 173 by Rrhain, posted 07-29-2009 6:27 AM Smooth Operator has replied



Message 164 of 315 (516984)
07-28-2009 3:51 PM
Reply to: Message 158 by Perdition
07-28-2009 11:04 AM


Re: Information Evolving
quote:
You know Dembski? The guy who created this unsubstantiated claim? He disagrees with you.
No, he doesn't. When I say information I mean CSI, unless stated otherwise. Because we are talking here about biological information, and Shannon's information cannot be applied to biological functions.
quote:
He claims that CSI can't be created, but CI and SI can. He proposes no mechanism that stops CI from becoming specified, and thus CSI or the other way, SI becoming complex and thus CSI.
Yes he does and it's called the NFL theorem.
quote:
He claims that natural laws can't create information (but chance can) they can only shuffle around or lose information, but again, he asserts this without proposing a reason that a natural law can't create information, since the sun burns via natural laws and creates information all the time.
What the hell are you talking about? The sun is not creating any new information! That is a meaningless statement.

This message is a reply to:
 Message 158 by Perdition, posted 07-28-2009 11:04 AM Perdition has not replied



Message 167 of 315 (517030)
07-29-2009 2:07 AM
Reply to: Message 165 by Perdition
07-28-2009 4:52 PM


quote:
Almost everything of a complex nature is improbable from the view of the end result. The fact of the matter is, any end result would have been equally improbable, so relying purely on probability is a losing game.
Well, that's an obvious statement, and true at that. That is why I am not relying on probability alone. CSI is formulated so that it takes into account not only probability (complexity) but also specificity. So it's not just that you have to get a highly improbable event to happen, but an exact highly improbable event. Which is not possible.
quote:
Ok, but then you're not modelling anything in reality, so I'm not sure what your model is trying to prove. Everything retains elements from previous iterations, and so will generate information based on those previous iterations that can be used in subsequent ones.
I know it would. But the NFL states that if you do not use that generated knowledge, your result will be the same on average. Which I think you agree with.
quote:
I may have missed this, but can you please lay out the math that let's you determine this? I don't mean an explanation, I want the actual mathematical formula you used to conclude this.
It's all here, enjoy.
CiteSeerX — No free lunch theorems for optimization
quote:
Yeah, ok, so the fact that I have prior information and can come up with better solutions than nature shows that nature didn't have that prior information, right? So, either the designer of nature lacked the information, didn't use the information, or nature happened on its own. So, nature is either natural or the designer is inept.
No, it just means evolution can't get you what you think it can. It's that simple.
quote:
Why do you say this. As I've said before, this is an assertion. And even Dembski disagrees with you. He agrees that chance can create Specified information, or complex information. He only, for some inexplicable reason, stops at it creating CSI.
And I explained to you that by information I mean CSI, as Dembski does. Specified information and complex information are just parts of what CSI really is. And I also explained why chance is not able to generate that.
quote:
Not really. It's merely using everything I have at my disposal, which includes, and is limited to, the information I learned in the first house. The fact that the next house is similar just means the information I learned in one is at least partially applicable to the next one.
Yes, and that's called a prior assumption about the next search.
quote:
I can apply some information from cars to flight. I was just in a plane on Saturday. The wheel in front of me turned the same way to make the plane turn the same way. The pedals in front of my feet were similar to the pedals in a car. The things they controlled were different, but I already knew how to use the pedals, if not what they were controlling, so yes, you can apply some information. The amount of information you can apply is directly proportional to the similarity of the situation. The more similar, the more information you can use.
But you can't use all of it. Which means that some of the information you will have to learn by new trials.
quote:
Yes, but that prior knowledge (or information) was generated through the initial random process and refined through the next, less random, process. It's how we build information in real life.
Yes, we agree on that. So isn't it now obvious to you that you have to go through all these trials to know what you are going to do in the next one to perform better? Isn't it clear to you now that you can't invent an algorithm that will work better than any other on the first try? You have to make all those trials to gain the information to be able to construct the better algorithm. That's what the NFL is all about. It's actually very simple logic.
quote:
But in evolution, we have many, many trials. Each time a new organism is created (born, divided, etc.) we have a trial, because the next generation is never the same as the previous one. There is always something different.
But evolution has no knowledge of what it is searching for, so every trial for evolution is like a first random trial.

This message is a reply to:
 Message 165 by Perdition, posted 07-28-2009 4:52 PM Perdition has not replied



Message 168 of 315 (517031)
07-29-2009 2:08 AM
Reply to: Message 166 by Parasomnium
07-28-2009 5:04 PM


quote:
Of course not. The very reason genetic algorithms are utilized is that they are able to find unspecified solutions. If you had to specify the solution beforehand, it would be a pointless exercise, wouldn't it?
It's specified by constraints on the search landscape. The programmer sets constraints so the search algorithm will give him the desired results.

This message is a reply to:
 Message 166 by Parasomnium, posted 07-28-2009 5:04 PM Parasomnium has replied

Replies to this message:
 Message 169 by Parasomnium, posted 07-29-2009 3:42 AM Smooth Operator has replied



Message 180 of 315 (517175)
07-30-2009 4:04 AM
Reply to: Message 169 by Parasomnium
07-29-2009 3:42 AM


Re: Constraints
quote:
Do you mean by this that if the researcher sets up an evolutionary process to evolve, say, designs for electronic oscillators - so the constraint would be "make me an oscillator" - we should not expect it to evolve, for example, a radio receiver, correct?
Yes, something like that. You will get a kind of oscillator that the computer optimizes for you.

This message is a reply to:
 Message 169 by Parasomnium, posted 07-29-2009 3:42 AM Parasomnium has replied

Replies to this message:
 Message 186 by Parasomnium, posted 07-30-2009 6:55 AM Smooth Operator has replied



Message 181 of 315 (517177)
07-30-2009 4:33 AM
Reply to: Message 170 by Percy
07-29-2009 4:00 AM


quote:
You've been misinformed. Here's a simple C++ program that multiplies two integers. There are no numbers programmed into it:
If you have access to a C++ compiler then give it a try - it works, and as you can see, no numbers are pre-programmed in.
No, I haven't. I program in C#, so I know what I'm talking about. Anyway, no, the numbers are not programmed in, but by the look of that syntax, you are supposed to input them at runtime. And the action to multiply them is programmed in. So the computer has all the information for this to work properly. You gave it all the information it needed.
quote:
A genetic algorithm models evolution, just as meteorological programs model the weather, or NASA programs model the trajectories of spacecraft. The answers are not already programmed into these programs. What would be the point of writing a program to find an answer you already know?
You misunderstood me. I didn't mean that ALL the numbers are literally programmed in. The algorithms for those numbers are programmed in. You gave the computer enough information to process to get the desired result. If you hadn't, it would give you no result.
The point is for the computer to do the boring job of calculating faster than you. It is given a search space that is too tedious for people to search themselves. All the answers are already there, but we have to do a lot of calculations to find them. That is why we use computers: to do the dirty work, so to speak.
quote:
Numbers are not programmed into simple calculators, either. Do you really believe that somewhere in your calculator is a "2" times table for all the possible numbers you can multiply by "2" and the answers, and a "3" times table for all the possible numbers you can multiply by "3" and the answers, and so on? Calculators and computers today use ALUs (Arithmetic Logic Units) that at their heart are just gates and flops implementing complex functions like multiplication from simpler functions like full adders. (Just for completeness I'll mention that there are tables of numbers involved for the proper representation and manipulation of certain standards, like the IEEE standard for fixed and floating point values.)
Let me see if I can put it this way. My Visual Studio isn't working, so I'll see what I can do off the top of my head.
If you want to initialize an array of 4 values, you won't do it like this, but you could:
int[] a = new int[4];
a[0] = 0;
a[1] = 1;
a[2] = 2;
a[3] = 3;
Instead, you can write something like this:
int[] a = new int[4];
for (int i = 0; i < 4; i++)
{
    a[i] = i;
}
You will get the same result; the outcome is the same. You didn't have to type all the numbers in by hand, but you did have to program in the algorithm that does this, which carries equal informational value. This algorithm will never set an element to "23" or the word "cat". It simply has no information for that, because you didn't put it in.
quote:
What would be the point of writing a program to find a solution you already know? The target of genetic algorithms is not specific. The solution is not known in advance, just as you presumably don't know in advance the product of two numbers you enter to the multiply program. Genetic algorithms are seeking a solution in the design space that satisfies specified parameters. They are a very effective method of exploring very large design spaces that couldn't be successfully explored using more random permutational techniques.
No, you don't know the solution. You have a rough estimate. The point is to save time, so you don't have to do it yourself.
quote:
Yes, just like the genetic algorithms that model evolution. There's a set of parameters evolution seeks to satisfy that in the aggregate are equivalent to survival to reproduce, but it has no specific goal.
And all of those parameters are put in by an intelligence. If there were no initial parameters, the algorithm would do no good.
quote:
CSI is just a concept made up by William Dembski. I can tell you how much information is in a stretch of DNA. If CSI had any reality then you could tell me how much CSI was in the same stretch, but you can't.
This is just silly. If you read No Free Lunch by Dembski, you will see that he calculated the CSI for a flagellum.
quote:
If CSI were real then ID scientists around the globe would be making new discoveries every year based upon the CSI concept, improving and extending our knowledge of our world and universe. Advances in the development of new drugs would be carried out by scientists applying the principles of CSI instead of evolution. The next generation of scientists would be flooding to Bible colleges and the Discovery Institute so they'd have the best chance of winning the Nobel Prize. And William Dembski would himself receive the Nobel Prize, be knighted by the queen, and receive world-wide approbation.
Instead Dembski is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, where he teaches courses in its Department of Philosophy of Religion, and CSI has no standing within the scientific community whatsoever because in truth it is just a prop invented to give a scientific-looking veneer to what at heart is just the religious concept of special creation by God.
This is no more than slander. If you look up Dembski on Wikipedia you will see more than a philosophy degree. And please do look up the Biologic Institute, where ID science is being done. Just because you don't know about it doesn't mean it isn't there.
Biologic Institute
quote:
I don't think so. Your position is resistance-conferring mutations are a deterministic result of the presence of antibiotics. My position is that resistance-conferring mutations are the ones selected from the millions of mutations that actually occur.
Actually, the resistance is bound to happen sooner or later and is going to be selected. We agree on this one.
quote:
Do you mean Inhibition of Mutation and Combating the Evolution of Antibiotic Resistance? I don't see anywhere in the paper where it refers to LexA turning on and off. It talks about LexA derepressing the SOS response mechanism when cleaved. LexA turning on and off is terminology you invented yourself.
Cleaved or uncleaved, turned on or off, call it what you will. It's talking about interfering with its activity.

This message is a reply to:
 Message 170 by Percy, posted 07-29-2009 4:00 AM Percy has replied

Replies to this message:
 Message 194 by Percy, posted 07-31-2009 7:37 AM Smooth Operator has replied



Message 182 of 315 (517178)
07-30-2009 4:39 AM
Reply to: Message 171 by Percy
07-29-2009 4:33 AM


quote:
Evolution already performs a random search because mutations are random. How is your random search different from evolution?
It's not. That's the problem for you.
quote:
I'm using Shannon information.
Which can't be used for biological functions.
quote:
You think the information for allele D was already there? Where was it then?
The reason you can't answer that question is because allele D was caused by a random change (a mutation) to one or more nucleotides of allele A, B or C. It didn't exist before the mutation occurred. It appeared out of thin air, created by random chance.
All the genes are already in the genome. They do not appear out of nowhere. There can be a gene duplication, but the product is the same gene.
quote:
Shannon information can be applied to anything in the real world, including DNA.
No, it cannot, because it deals only with the statistical aspect of information.
quote:
Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism.
NCBI
quote:
In evolution the information problem is one of how to reliably communicate the specific set of messages contained in the DNA to the next generation.
No, that's the problem for the cell's machines themselves. The problem for evolution is how did the information get there in the first place.
quote:
All the alleles of all the genes of a population form the complete message set, and each individual in the population possesses a specific subset of that message set that it needs to communicate to offspring during reproduction. Any errors in communication of this DNA message to offspring are retained by the offspring and become part of the population's collective genome, making the message set larger and increasing the amount of information.
No, this is wrong. This has never been observed. It is true that mistakes happen, and that they get passed on. But it is not true that informational content increases. It can only degrade over time.
quote:
Semantics are irrelevant in information theory.
No, semantics are irrelevant to Shannon's model of information. It was the first and most primitive model. Information theory has moved on since.

This message is a reply to:
 Message 171 by Percy, posted 07-29-2009 4:33 AM Percy has replied

Replies to this message:
 Message 195 by Percy, posted 07-31-2009 8:03 AM Smooth Operator has replied



Message 183 of 315 (517179)
07-30-2009 4:44 AM
Reply to: Message 173 by Rrhain
07-29-2009 6:27 AM


quote:
Irrelevant. Whether or not the information was copied has nothing to do with why some of the bacteria live and some of the bacteria die despite all the bacteria being descended from a single ancestor.
By your claim that no new information can ever be created, all the bacteria necessarily behave in exactly the same way. If one dies, then all die. If one lives, then all live. No exceptions.
But we see exceptions. Some of the bacteria live and some die.
Thus, our premise must be false: New information necessarily was created.
No, some live and some die because some acquire resistance, for example. But this resistance is acquired by degradation of existing information, not by an increase.
quote:
Incorrect.
Yes, very correct. Look at table one. All resistances have been acquired by loss of information.
http://www.trueorigin.org/bacteria01.asp
quote:
Because you can rerun this experiment by taking one of the K-4 bacteria and letting it be the sole ancestor to the lawn. When you re-infect the lawn with T4 phage, we find not that the lawn survives but rather that the lawn starts to die.
Now, this time it's the phage that has mutated to T4h.
We can keep this up, having the two continually mutate to adapt to the new environment. By your logic, we should eventually wind up with nothing as all that "information" gets lost. But it doesn't. The bacteria and phage keep surviving, keep changing.
How can they do that if they keep "losing information"?
Simple. Because the mutations deform their receptors for different antibiotics in different ways.
quote:
Some simple questions:
Which has more information: A or AB?
Which has more information: A or B?
Which has more information: A or AA?
That depends on which definition of information you use.
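To see why the answer depends on the definition, here is a small sketch (my own illustration, not from the thread) comparing the strings A, AB, and AA under a Shannon-style per-symbol measure. Under that measure AB scores highest, while A and AA score the same; under a raw length count, AA and AB both exceed A.

```python
from collections import Counter
from math import log2

def entropy_per_symbol(s: str) -> float:
    """Shannon entropy in bits per symbol, from symbol frequencies alone."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

for s in ["A", "AB", "AA"]:
    print(s, entropy_per_symbol(s))
# "A"  -> 0.0 bits/symbol (one outcome, no uncertainty)
# "AB" -> 1.0 bits/symbol (two equiprobable symbols)
# "AA" -> 0.0 bits/symbol (repetition adds length, not surprise)
```

Different formal measures rank the same three strings differently, which is exactly why the question has no single answer.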

This message is a reply to:
 Message 173 by Rrhain, posted 07-29-2009 6:27 AM Rrhain has replied

Replies to this message:
 Message 185 by Wounded King, posted 07-30-2009 5:26 AM Smooth Operator has replied
 Message 204 by Rrhain, posted 08-02-2009 5:14 AM Smooth Operator has replied

Smooth Operator
Member (Idle past 5142 days)
Posts: 630
Joined: 07-24-2009


Message 184 of 315 (517180)
07-30-2009 4:49 AM
Reply to: Message 174 by PaulK
07-29-2009 7:50 AM


Re: Three failures of CSI
quote:
1) It was intended to formalise the way that humans recognise design. It doesn't. By relying on purely negative argumentation - "design" is even defined negatively - it ignores the fact that we work with ideas of what designers do, and more importantly how designs are implemented. Although not a fatal flaw in the method, we should recognise that it is less than it was meant to be - and also that ID proponents who attempt to co-opt all instances of design detection as uses of CSI are wrong to do so.
Actually it is a deductive method. If you remove chance and necessity, you are left with design as a logical conclusion. Why? Because that is the inference to the best explanation: from experience we know that an intelligence can create design, so it is logical to infer design and say an intelligence had a part in it.
quote:
2) It is impractical to use in many cases - including the very cases where ID proponents would like to use it. (Dembski has even complained about it, although for some reason blaming his opponents rather than himself.) For this reason alone CSI has little real significance to the discussion of ID versus evolution - except, perhaps, as an example of ID's failures.
Dembski has calculated the CSI for the flagellum. Read No Free Lunch.
quote:
3) The constraint imposed by specification is too loose. Dembski treats a specification constructed after the fact - knowing and using the outcome to produce the specification - to be the same as a prediction made in advance. But this is not the case. There is still an element of "painting the targets around the bullet holes". There may be many other results which would also be found to be "designed" - and Dembski's methodology ignores this.
This is actually a serious problem - for any non-trivial specification the probability calculated will be too low, and cannot be validly compared to the probability bound. Even if the method were revised to take this into account the new method would be even less practical.
No, actually he doesn't. He specifically says that what you are describing is a fabrication, not a specification. When you can describe the pattern of an event without looking at that event first, you have a specification. So it's not after the fact.
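Dembski's filter also requires the event's probability to fall below his "universal probability bound" of 10^-150 (roughly 500 bits). A minimal sketch of that comparison (the function and the example probabilities are illustrative, not taken from his calculation for the flagellum):

```python
UNIVERSAL_PROBABILITY_BOUND = 1e-150  # Dembski's threshold, ~498 bits

def below_bound(p_event: float) -> bool:
    """True if the event is improbable enough to pass Dembski's bound."""
    return p_event < UNIVERSAL_PROBABILITY_BOUND

# Illustrative: a specific 500-bit pattern under a uniform chance hypothesis.
print(below_bound(2.0 ** -500))  # 2^-500 < 10^-150, so it passes the bound
print(below_bound(2.0 ** -100))  # 100 bits of improbability is not enough
```

The sketch only shows the final threshold comparison; the contested part of the method is how the probability and the specification are obtained in the first place.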

This message is a reply to:
 Message 174 by PaulK, posted 07-29-2009 7:50 AM PaulK has replied

Replies to this message:
 Message 187 by PaulK, posted 07-30-2009 7:52 AM Smooth Operator has replied

Smooth Operator
Member (Idle past 5142 days)
Posts: 630
Joined: 07-24-2009


Message 196 of 315 (517603)
08-01-2009 9:09 PM
Reply to: Message 185 by Wounded King
07-30-2009 5:26 AM


quote:
OK, that whole article is ridiculous but that table simply doesn't support your claim.
In what way is a mutation which changes gyrase in such a way as to reduce its affinity to Fluoroquinolones a loss of information for the gyrase?
Because it's a loss of specificity.
quote:
The author is simply demented. He wants us to think that Gyrase has evolved to bind Fluoroquinolones? Does he even understand what these antibiotics are? These are synthetic chemicals which have been developed specifically to inhibit bacterial growth or kill bacteria. It is like thinking that my car has a cigar lighter plug point because it was designed so I could plug my iPod car charger into it. He is getting cause and effect mixed up. So since gyrases clearly didn't evolve to function as Fluoroquinolone binding molecules how on Earth can it be considered a loss of function when their affinity is reduced?
Wrong! It's the antibiotics that have been designed to bind to the gyrase. When the gyrase loses its affinity, it loses information.
quote:
There are numerous valid cases where resistance is the result of a genuine information loss such as null mutations removing an entire gene. But the whole argument is undermined by this idiotic attempt to describe every form of resistance as a loss of information/function when you are defining function as being 'binds to antibiotic'.
Then show me a case where there is an increase in information.
quote:
I think this is all bound up with the approach that whatever the starting state was of an organism, protein or gene sequence when it was first studied is somehow enshrined for IDist/creationists as being the ideal state so any change from that state must necessitate a loss of function/information.
Again wrong. The organism can fluctuate, but only within the already existing informational range. The genome itself is constantly deteriorating.
quote:
Will you at least concede that to consider binding affinity for an antibiotic to be an evolved function of the bacteria is nonsensical?
Of course, since I never said that it was. Where you came up with that, I have no idea.

This message is a reply to:
 Message 185 by Wounded King, posted 07-30-2009 5:26 AM Wounded King has replied

Replies to this message:
 Message 202 by Wounded King, posted 08-02-2009 4:46 AM Smooth Operator has replied



Copyright 2001-2023 by EvC Forum, All Rights Reserved
