Author Topic:   What is an ID proponent's basis of comparison? (edited)
Richard Townsend
Member (Idle past 4753 days)
Posts: 103
From: London, England
Joined: 07-16-2008


Message 136 of 315 (516823)
07-27-2009 3:55 PM
Reply to: Message 131 by Smooth Operator
07-27-2009 2:34 PM


It is computable, the algorithm just can't generate it. They can only process it.
If no algorithm can generate it, then it's non-computable. That's the definition of non-computable. But the main problem is, your claim creates problems for the very existence of CSI.
We don't know whether humans run algorithms in their brains (most AI researchers believe so, but some thinkers, such as Roger Penrose, disagree).
This means there is NOTHING we can definitely point to as CSI. Nothing created by humans can be called CSI. Nothing created by any 'intelligent designer' can be called CSI - unless you can show they did it non-algorithmically.
Tracking back, I believe the mistake in your reasoning is when you say that algorithms can't create CSI. They can.
It applies to all algorithms. It has been shown to be true. I have already posted a link here about the NFL theorem that says that algorithms do not produce new information. It really gets on my nerves to have to do it again and again.
The NFL theorems, curiously, do not say this, no matter how many times you claim that they do. Here's a quote from the Wolpert paper.
A number of no free lunch (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
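A minimal sketch (mine, not from the paper) of what that does and does not say: on a four-point search space, averaged over every possible fitness function, two fixed query orders tie exactly; restricted to a structured subset of functions, one order clearly wins. The domain size and value range are arbitrary.

    import itertools

    POINTS = (0, 1, 2, 3)          # a four-point search space
    VALUES = (0, 1, 2)             # possible fitness values

    def evals_to_optimum(order, f):
        # Number of evaluations a fixed query order spends before it
        # first samples a global maximum of f.
        best = max(f)
        for i, x in enumerate(order, start=1):
            if f[x] == best:
                return i

    orders = {"left-to-right": (0, 1, 2, 3), "right-to-left": (3, 2, 1, 0)}

    # NFL: averaged over ALL 3^4 = 81 possible functions, the orders tie exactly.
    all_funcs = list(itertools.product(VALUES, repeat=len(POINTS)))
    for name, order in orders.items():
        avg = sum(evals_to_optimum(order, f) for f in all_funcs) / len(all_funcs)
        print(f"{name}: {avg:.3f} average evaluations over all functions")

    # Restrict to a structured subset (non-decreasing functions, so the peak
    # sits on the right) and the tie breaks: knowledge of structure pays.
    rising = [f for f in all_funcs if list(f) == sorted(f)]
    for name, order in orders.items():
        avg = sum(evals_to_optimum(order, f) for f in rising) / len(rising)
        print(f"{name}: {avg:.3f} average evaluations over rising functions")

The tie over all functions is the NFL result; the broken tie on the subset is why the theorem says nothing against algorithms suited to a restricted problem class.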

This message is a reply to:
 Message 131 by Smooth Operator, posted 07-27-2009 2:34 PM Smooth Operator has replied

Replies to this message:
 Message 138 by Smooth Operator, posted 07-27-2009 4:09 PM Richard Townsend has replied

Smooth Operator
Member (Idle past 5135 days)
Posts: 630
Joined: 07-24-2009


Message 137 of 315 (516825)
07-27-2009 4:02 PM
Reply to: Message 135 by Perdition
07-27-2009 3:50 PM


quote:
Again, this is an assertion.
Nobody ever observed it, and there is no such capacity. So it's a fact.
quote:
In your example, the random search of your house turns up other information you can use to build prior information for your next search in someone else's house.
NO YOU CAN'T! That's the point! Did the other guy lose the keys in the EXACT same place in his house as you did in yours? Are your houses identical? No, of course not!
quote:
For example, you notice that nothing is sitting on the ceiling, that reduces the number of places you need to search next time.
Does that count for other houses with lost keys? No, obviously it doesn't.
quote:
You can also say, well, since I found them on the bedside table last time, rather than going through all my previous steps, I'll start there next time.
And will they be there 100%? No, they won't!
quote:
Even in a new house, that's a good place to start as it assumes people are generally the same.
That's called prior information.
quote:
All of this is without using prior knowledge the first time, and using the new information the second time.
Which doesn't help you in the other case at all.
quote:
If the situation is similar, the exact conditions don't matter, the process can still work faster than random.
Well that's the point! IF IT IS SIMILAR! But what if it's not!? Then you will fail! And that means no algorithm works better than any other, or a random search, without prior information.

This message is a reply to:
 Message 135 by Perdition, posted 07-27-2009 3:50 PM Perdition has replied

Replies to this message:
 Message 140 by Perdition, posted 07-27-2009 4:12 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5135 days)
Posts: 630
Joined: 07-24-2009


Message 138 of 315 (516827)
07-27-2009 4:09 PM
Reply to: Message 136 by Richard Townsend
07-27-2009 3:55 PM


quote:
If no algorithm can generate it, then it's non-computable.
No algorithm can generate anything without prior information.
quote:
That's the definition of non-computable. But the main problem is, your claim creates problems for the very existence of CSI.
No, you misunderstood what I was saying. I was saying that it is computable, but you won't wind up with more information than you input at the start. No algorithm can do that.
quote:
We don't know whether humans run algorithms in their brains (most AI researchers believe so but some thinkers, such as Roger Penrose disagree).
I disagree as well, since we can think. Our mind is not material like a computer.
quote:
This means there is NOTHING we can definitely point to as CSI. Nothing created by humans can be called CSI. Nothing created by any 'intelligent designer' can be called CSI - unless you can show they did it non-algorithmically.
Yes it can, since it's more than 400 bits. The whole of the observable universe could not have created more than 400 bits since its origin. If we were just a part of this material universe, with no non-material mind, we wouldn't be able to produce more than 500 bits. But we are!
quote:
Tracking back, I believe the mistake in your reasoning is when you say that algorithms can't create CSI. They can.
No, they can't. Please show where it says they can.
quote:
The NFL theorems, curiously, do not say this, no matter how many times you claim that they do. Here's a quote from the Wolpert paper.
That is but one of hundreds of lines in his paper, which just proves my point: one algorithm will work well on one landscape and not so well on another on average, given that it has prior knowledge about the problem.
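(For reference: the 400/500-bit figures in this exchange usually trace to Dembski's universal probability bound of 1 in 10^150. Assuming that is the bound being invoked here, the conversion to bits is one line of Python.)

    import math

    # Assumption: the figure comes from Dembski's universal probability
    # bound of 1 in 10^150.
    print(math.log2(10 ** 150))    # ~498.3 bits, conventionally rounded to 500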

This message is a reply to:
 Message 136 by Richard Townsend, posted 07-27-2009 3:55 PM Richard Townsend has replied

Replies to this message:
 Message 142 by Richard Townsend, posted 07-27-2009 4:31 PM Smooth Operator has not replied

DevilsAdvocate
Member (Idle past 3122 days)
Posts: 1548
Joined: 06-05-2008


Message 139 of 315 (516828)
07-27-2009 4:11 PM
Reply to: Message 119 by traderdrew
07-27-2009 8:56 AM


Traderdrew writes:
It seems to me that you should prove to me that CSI is not suitable for detecting design.
Argument from ignorance/Negative Proof Fallacy. Why should I provide evidence for something you are trying to prove??
First you need to adequately define CSI...
TJ writes:
Or what you can do is prove to us that new amounts of CSI containing at least 400 bits can be produced by natural causes.
I assume by 400 bits you are talking about DNA? Please elaborate and educate the CSI illiterate masses.
TJ writes:
Me writes:
A natural arch formed by water and wind erosion can have a specific function and use by animals and humans. A cave can as well. Is there an intelligent agent behind the formation of these natural phenomena? There is nothing magical or special about these natural phenomena. We attribute meaning to them precisely because they do seem to conform to our needs and desires. This is in a way a form of anthropocentrism.
Yes but it wasn't necessarily designed by an intelligence. It was designed by the forces within chaos. The cave doesn't produce or communicate any CSI.
Again, define CSI. If I showed you two identical caves, one made by humans and one by the forces of nature, would this not negate your CSI argument?
TJ writes:
The termites build the mound with cooperation. The mound doesn't need to have any particular Euclidean shape. Different mounds have different shapes. They don't need to conform to particular mathematical models.
Neither does the morphology and physiology of biological life intrinsically 'need' to fit a certain standard. Biological life, much like other naturally occurring phenomena, is shaped by the environmental conditions in which it exists.
TJ writes:
Forces such as heavy rain can affect the shapes of the mounds.
Forces such as electromagnetic radiation and chemical agents can affect the composition of the genome and ultimately the shapes (morphology) of organisms.
TJ writes:
Me writes:
You can't even adequately define CSI, how can you expect anyone else to understand WTF you are talking about??
With sentences like these I get the impression that you are trying to make us look bad rather than attempting to investigate what CSI is yourself.
I apologize, I just get frustrated when people throw around terms without defining them or understanding them, themselves. When you deal with people like SO and the like, sometimes we mistakenly throw you under the same proverbial bus. It is a human vice which I fall prone to as well.
TJ writes:
Me writes:
So what is complex and not complex in nature?
I'm not sure if I can draw the lines there. I suspect complexity is represented in natural phenomena with different degrees of fractal dimension.
Agreed, but if you cannot make the distinction between chaos and non-chaos, are we sure there really is a substantial difference between the two?
TJ writes:
You are making me think.
That is the whole purpose I post on this board, not just for you but for all of us.
TJ writes:
Chaotic things are natural phenomena that defy traditional linear measurements.
Hmm, did you just make up that definition, or where did you draw it from? This is unlike any definition of the word 'chaos' I have seen. What do you mean by linear measurements? Can we not predict to a degree the amount of erosion that will occur in a river on a yearly basis? Is that a 'chaotic thing'? So what specifically falls into your category of 'chaotic things'?

For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.
Dr. Carl Sagan

This message is a reply to:
 Message 119 by traderdrew, posted 07-27-2009 8:56 AM traderdrew has replied

Replies to this message:
 Message 156 by traderdrew, posted 07-28-2009 9:08 AM DevilsAdvocate has not replied

Perdition
Member (Idle past 3259 days)
Posts: 1593
From: Wisconsin
Joined: 05-15-2003


Message 140 of 315 (516830)
07-27-2009 4:12 PM
Reply to: Message 137 by Smooth Operator
07-27-2009 4:02 PM


NO YOU CAN'T! That's the point! Did the other guy lose the keys in the EXACT same place in his house as you did in yours? Are your houses identical? No, of course not!
How do you know? Is it possible that, if you lost your keys on your bedside table, and a lot of other people put their keys on their bedside tables, this other person put their keys on their bedside table too?
Even if they didn't, the knowledge you gained from your first search, can help you in your second.
i.e., no keys on the ceiling, no keys in places too small for keys to fit. If you're truly operating from no prior knowledge in the first case, you would have to consider those possibilities the first time, but could rule them out the second.
Does that count for other houses with lost keys? No, obviously it doesn't.
It does if you assume the conditions are similar, and until you find they aren't, this is a good assumption to make.
And will they be there 100%? No, they won't!
They don't need to be there 100%, they just need to be there more often than not.
That's called prior information.
No, it's not information, it's an assumption. I generally assume things are similar to previous experiences until I am shown a place where they differ.
Which doesn't help you in other case at all.
It obviously does.
Well that's the point! IF IT IS SIMILAR! But what if it's not!? Than you will fail! And that means no algorithm works better than any other, or a random search, without prior information.
You're assuming it's different. Why? If it's similar, it will help; if it's not, it will generate new information for the next time. In fact, this is how all the information we have is generated: by taking one experience and applying it to the next. The first experience is almost always random (just watch a kid), and patterns emerge out of it as the kid learns.
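The disagreement is testable with a toy model. In the sketch below, every number is invented for illustration (the count of hiding spots, the "favoured spots" habit, the 80% figure): a search order learned in one house beats random search when the second house shares the same habits, and does no better when it doesn't.

    import random

    SPOTS = 20          # hypothetical hiding spots per house
    TRIALS = 10_000

    def checks_needed(order, key_spot):
        # How many spots get searched before the keys turn up.
        return order.index(key_spot) + 1

    def random_order():
        order = list(range(SPOTS))
        random.shuffle(order)
        return order

    FAVOURED = [0, 1, 2]                    # spots where keys usually land
    informed_order = FAVOURED + [s for s in range(SPOTS) if s not in FAVOURED]

    def drop_key(same_habits):
        # In a "similar" house, keys land in a favoured spot 80% of the time.
        if same_habits and random.random() < 0.8:
            return random.choice(FAVOURED)
        return random.randrange(SPOTS)

    for same_habits, label in ((True, "similar house"), (False, "unrelated house")):
        rnd = sum(checks_needed(random_order(), drop_key(same_habits))
                  for _ in range(TRIALS)) / TRIALS
        inf = sum(checks_needed(informed_order, drop_key(same_habits))
                  for _ in range(TRIALS)) / TRIALS
        print(f"{label}: random {rnd:.1f} checks, informed {inf:.1f} checks")

Neither result contradicts the other: the informed order wins exactly when the habit it encodes carries over, which is the "prior information" at issue.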

This message is a reply to:
 Message 137 by Smooth Operator, posted 07-27-2009 4:02 PM Smooth Operator has replied

Replies to this message:
 Message 143 by Smooth Operator, posted 07-27-2009 4:53 PM Perdition has replied

Richard Townsend
Member (Idle past 4753 days)
Posts: 103
From: London, England
Joined: 07-16-2008


Message 141 of 315 (516837)
07-27-2009 4:25 PM
Reply to: Message 134 by Smooth Operator
07-27-2009 3:43 PM


This basically means that even the evolutionary algorithms, which have no knowledge about what they are looking for in advance, will not be any better than random chance. And since random chance doesn't create new information, neither does an evolutionary algorithm.
Thanks for explaining your thinking on this. I think you are misinterpreting the theorems. The theorems apply when considering a search across the space of ALL possible cost functions. They don't rule out more effective algorithms across narrower scopes than this.
See this.....
The No Free Lunch theorem has had considerable impact in the field of optimization research. A terse definition of this theorem is that no algorithm can outperform any other algorithm when performance is amortized over all functions. Once that theorem has been proven, the next logical step is to characterize how effective optimization can be under reasonable restrictions. We operationally define a technique for approaching the question of what makes a function searchable in practice. This technique involves defining a scalar field over the space of all functions that enables one to make decisive claims concerning the performance of an associated algorithm. We then demonstrate the effectiveness of this technique by giving such a field and a corresponding algorithm; the algorithm performs better than random search for small values of this field. We then show that this algorithm will be effective over many, perhaps most functions of interest to optimization researchers. We conclude with a discussion about how such regularities are exploited in many popular optimization algorithms.
Christensen and Oppacher (2001)
Secondly, you're wrong to say that random search can create no information. The search for your keys, for example, would create information about the location of your keys even if it were purely random.
In fact, randomness (as you know) is a key element in many evolutionary algorithms. It's not something we want to get rid of.
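A minimal sketch of the "restricted scope" point in the Christensen and Oppacher abstract, with an arbitrary smooth cost function standing in for their "functions of interest": a crude hill-climber beats random guessing because this landscape has exploitable structure, and NFL is not violated, since NFL only constrains the average over all landscapes.

    import random

    def cost(x):
        # An arbitrary smooth, single-minimum cost function: the kind of
        # regularity a local searcher can exploit.
        return (x - 37) ** 2

    BUDGET, TRIALS = 15, 5_000

    def random_search():
        return min(cost(random.randrange(100)) for _ in range(BUDGET))

    def hill_climb():
        x = random.randrange(100)
        for _ in range(BUDGET - 1):
            candidate = x + random.choice((-1, 1)) * random.randrange(1, 10)
            if cost(candidate) < cost(x):    # keep only downhill moves
                x = candidate
        return cost(x)

    print("random search :", sum(random_search() for _ in range(TRIALS)) / TRIALS)
    print("hill-climbing :", sum(hill_climb() for _ in range(TRIALS)) / TRIALS)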

This message is a reply to:
 Message 134 by Smooth Operator, posted 07-27-2009 3:43 PM Smooth Operator has replied

Replies to this message:
 Message 144 by Smooth Operator, posted 07-27-2009 4:57 PM Richard Townsend has replied

Richard Townsend
Member (Idle past 4753 days)
Posts: 103
From: London, England
Joined: 07-16-2008


Message 142 of 315 (516841)
07-27-2009 4:31 PM
Reply to: Message 138 by Smooth Operator
07-27-2009 4:09 PM


Yes it can, since it's more than 400 bits. The whole of the observable universe could not have created more than 400 bits since its origin. If we were just a part of this material universe, with no non-material mind, we wouldn't be able to produce more than 500 bits. But we are!
I don't know much about the CSI concept. Does it have this non-material / non algorithmic element built into the definition of it?

This message is a reply to:
 Message 138 by Smooth Operator, posted 07-27-2009 4:09 PM Smooth Operator has not replied

Smooth Operator
Member (Idle past 5135 days)
Posts: 630
Joined: 07-24-2009


Message 143 of 315 (516845)
07-27-2009 4:53 PM
Reply to: Message 140 by Perdition
07-27-2009 4:12 PM


quote:
How do you know? Is it possible, that if you lost your keys on your nightside table, and a lot of other people put their keys on the nightside table, that maybe this other person put their keys on their nightside table?
It's possible, but it isn't probable! And that's what we are talking about: probabilities.
quote:
Even if they didn't, the knowledge you gained from your first search, can help you in your second.
For which house? Some other unknown house? No it can't.
quote:
ie, no keys on the ceiling, no keys in places to small for keys to fit. If you're truly operating from no prior knowledge in the first case, you would have to consider those possibilities the first time, but could rule them out the second.
Again, that information is gained by the search. So for the second search you already have some prior information. But if you used that method the first time on the second house you would not do any better.
quote:
It does if you assume the conditions are similar, and until you find they aren't, this is a good assumption to make.
Well that's an assumption that's not always going to work for you.
quote:
They don't need to be there 100%, they just need to be there more often than not.
Yes, they do, because then your algorithm is not better than some other in all cases.
quote:
No, it's not information, it's an assumption. I generally assume things are similar to previous experiences until I am shown a place where they differ.
Yes it is. If you modify the second search with some information and then do the search, that's called prior information.
quote:
It obviously does.
Nope.
quote:
You're assuming its different. Why?
Are you honestly telling me that ALL houses in the world are identical!?
quote:
If it's similar, it will help, if it's not, it will generate new information for the next time.
But if it isn't similar it won't help, that's the point.
quote:
In fact, this is how all information we have is generated, by taking one experience and applying it to the next. The first experience is almost always random (just watch a kid) and patterns emerge out of it as the kid learns.
Yes, because he extracts knowledge from his trial. But if you give him a totally unrelated problem, his method won't help him at all.

This message is a reply to:
 Message 140 by Perdition, posted 07-27-2009 4:12 PM Perdition has replied

Replies to this message:
 Message 145 by Perdition, posted 07-27-2009 5:13 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5135 days)
Posts: 630
Joined: 07-24-2009


Message 144 of 315 (516847)
07-27-2009 4:57 PM
Reply to: Message 141 by Richard Townsend
07-27-2009 4:25 PM


quote:
Thanks for explaining your thinking on this. I think you are misinterpreting the theorems. The theorems apply when considering a search across the space of ALL possible cost functions. They don't rule out more effective algorithms across narrower scopes than this.
That's obvious. But that means that this algorithm has been optimized for that kind of search.
quote:
Secondly, you're wrong to say that random search can create no information. The search for your keys, for example, would create information about the location of your keys even if it were purely random.
In fact, randomness (as you know) is a key element in many evolutionary algorithms. It's not something we want to get rid of
It didn't create information, you had to create it by finding the keys. If you actually found your keys on the first try every single time by searching randomly, now that would be creating information from nothing. The very fact that you are searching means you have no information, so you have to create it by searching.
If you knew where the keys were, you wouldn't be searching in the first place, right?
quote:
I don't know much about the CSI concept. Does it have this non-material / non algorithmic element built into the definition of it?
No, it doesn't. I explained it a while back.

This message is a reply to:
 Message 141 by Richard Townsend, posted 07-27-2009 4:25 PM Richard Townsend has replied

Replies to this message:
 Message 147 by Richard Townsend, posted 07-27-2009 5:30 PM Smooth Operator has replied
 Message 188 by kongstad, posted 07-30-2009 9:22 AM Smooth Operator has replied

Perdition
Member (Idle past 3259 days)
Posts: 1593
From: Wisconsin
Joined: 05-15-2003


Message 145 of 315 (516851)
07-27-2009 5:13 PM
Reply to: Message 143 by Smooth Operator
07-27-2009 4:53 PM


It's possible, but it isn't probable! And that's what we are talking about: probabilities.
But you know what? Improbable things happen all the time, and the probability often depends on how one looks at it. Until you can provide a mathematical formula for this probability, then apply the formula to something specific, then show why the probability becomes zero (which it must, otherwise you're admitting it is possible for the thing to happen), you have nothing.
For which house? Some other unknown house? No it can't.
Yes it can. In fact, it often does. Show me how it can't. If you see that no keys are found on your ceiling, why would you look on the ceiling in another house? After looking at many houses, and finding that no keys are ever found on the ceilings in any house, doesn't that make it less probable that keys will be found on the ceilings of the next house? Doesn't this information come from the random first process, refined through subsequent iterations?
Again, that information is gained by the search.
Yes. So information is generated through the random search, go on...
So for the second search you already have some prior information. But if you used that method the first time on the second house you would not do any better.
Yes, so the first random search generated information you could apply to the second house. How can you say you wouldn't do any better? You can eliminate search options because of the first search, thus making it take less time to exhaust all possibilities in the second.
Well that's an assumption that's not always going to work for you.
It doesn't have to always work, it only has to work more often than not. And then when I find a new situation for which it doesn't work, the final solution gets factored into my new "search information."
Yes, they do, because then your algorithm is not better than some other in all cases.
Why do you think it has to be better in all cases? It only has to be better in most for it to be a worthwhile algorithm to use. There may be a better way in one instance, and in fact, we can often come up with better ways to design things in nature than the way they turned out because the process isn't perfect. That's my point.
Yes it is. If you modify the second search with some information and then do the search, that's called prior information.
Yes, but that prior information was generated by the first random search, and then gets incorporated. Thus, information can arise out of a random process. Once you get information, all you have to do is add to it.
Are you honestly telling me that ALL houses in the world are identical!?
No, but they're similar enough for a process created in one to be a benefit in another.
All car models are slightly different, but I don't have to learn how to drive each type of car individually. I can learn on one, and apply the knowledge from that to the others.
But if it isn't similar it won't help, that's the point.
No, in that case, it won't help in that one instance, but after that one instance, you've learned something more, and expanded the circumstances under which your process will now work. It adapts to a new environment you might say.
Yes, because he extracts knowledge from his trial. But if you give him a totally unrelated problem, his method won't help him at all.
Right, so he starts at square one again, and starts with nothing, then builds a process for all experiences that are similar to this new one. Given enough time, you'll experience enough different sets of circumstances to have a process in your repertoire to deal with just about any subsequent experiences.

This message is a reply to:
 Message 143 by Smooth Operator, posted 07-27-2009 4:53 PM Smooth Operator has replied

Replies to this message:
 Message 159 by Smooth Operator, posted 07-28-2009 3:29 PM Perdition has replied

Fallen
Member (Idle past 3894 days)
Posts: 38
Joined: 08-02-2007


Message 146 of 315 (516856)
07-27-2009 5:27 PM


A few questions about the nylon mutation(s), just out of curiosity:
Do we know the exact sequence of mutations that took place?
What did the current system evolve from?
Has anyone run the changes through the explanatory filter to see if they exhibit specified complexity?
In what way could the new system be considered "specified"? (i.e., conform to an independently given pattern?)
Also, what definition of information is everyone using? Is a sequence of heads and tails information?

Beatus vir qui suffert tentationem,
quoniam cum probatus fuerit accipiet coronam vitae.
(Blessed is the man who endures temptation, for when he has been tested he will receive the crown of life.)

Replies to this message:
 Message 153 by Huntard, posted 07-28-2009 1:30 AM Fallen has not replied

Richard Townsend
Member (Idle past 4753 days)
Posts: 103
From: London, England
Joined: 07-16-2008


Message 147 of 315 (516857)
07-27-2009 5:30 PM
Reply to: Message 144 by Smooth Operator
07-27-2009 4:57 PM


It didn't create information, you had to create it by finding the keys. If you actually found your keys on the first try every single time by searching randomly, now that would be creating information from nothing. The very fact that you are searching means you have no information, so you have to create it by searching.
Think this through. I'm saying that the search creates the information - clearly it does, because we know something at the end we didn't at the beginning. This meets the Shannon definition of information (decrease in uncertainty of a receiver). The same information is created no matter how we get there. You almost acknowledge that in your paragraph above - see last sentence.
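Putting numbers on that: with N equally likely key locations the receiver's uncertainty is log2 N bits, every failed check shrinks it, and finding the keys removes it entirely, whatever order the checks were made in. A quick sketch (N = 64 is an arbitrary choice):

    import math

    N = 64                       # assume 64 equally likely key locations
    initial = math.log2(N)       # 6 bits of uncertainty before searching

    for failed in (0, 32, 48, 63):
        remaining = math.log2(N - failed)   # keys uniform over unchecked spots
        print(f"{failed:2d} failed checks: {initial - remaining:.2f} bits gained")
    # Actually finding the keys drives the uncertainty to zero: all log2(N)
    # bits are gained regardless of the order in which spots were checked.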

This message is a reply to:
 Message 144 by Smooth Operator, posted 07-27-2009 4:57 PM Smooth Operator has replied

Replies to this message:
 Message 148 by Fallen, posted 07-27-2009 5:38 PM Richard Townsend has not replied
 Message 160 by Smooth Operator, posted 07-28-2009 3:30 PM Richard Townsend has not replied

Fallen
Member (Idle past 3894 days)
Posts: 38
Joined: 08-02-2007


Message 148 of 315 (516859)
07-27-2009 5:38 PM
Reply to: Message 147 by Richard Townsend
07-27-2009 5:30 PM


Richard Townsend writes:
I'm saying that the search creates the information - clearly it does, because we know something at the end we didn't at the beginning. This meets the Shannon definition of information (decrease in uncertainty of a receiver). The same information is created no matter how we get there.
So, using your definition of information, flipping a coin 100 times would create information, since it would reduce our uncertainty about the result of those 100 flips?
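Under the Shannon definition the arithmetic is at least well defined: 100 fair flips have 2^100 equally likely sequences, so observing them removes exactly 100 bits of uncertainty about which sequence occurred. Whether that counts as "specified" is the separate question being asked here.

    import math

    # 100 fair flips: 2^100 equally likely sequences.
    print(math.log2(2 ** 100))   # 100.0 bits removed by observing the result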

Beatus vir qui suffert tentationem,
quoniam cum probatus fuerit accipiet coronam vitae.
(Blessed is the man who endures temptation, for when he has been tested he will receive the crown of life.)

This message is a reply to:
 Message 147 by Richard Townsend, posted 07-27-2009 5:30 PM Richard Townsend has not replied

Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 149 of 315 (516863)
07-27-2009 6:10 PM
Reply to: Message 132 by Percy
07-27-2009 3:00 PM


What LexA does is control whether the genetic repair mechanism is enabled or not. When it is enabled then there are still mutations, just fewer of them.
Hi Percy,
I have to go with Smooth Operator on this one. In its uncleaved form LexA does repress the activity of certain DNA repair mechanisms. When it is cleaved these mechanisms become activated. However, along with DNA repair elements the cleavage also allows the expression/activation of a set of polymerases which are highly error prone, which is probably why Smooth Operator focuses on LexA cleavage as a mutation inducing mechanism instead.
You and DevilsAdvocate are right about the rates of mutation in the LexA mutant, and therefore presumably in the presence of a LexA-cleavage-blocking drug. The authors (Cirz et al., 2005) state that ...
The second step mutation rate was 1.9 (± 0.21) × 10^-4 mutants/viable cell/d in the control strain and 5.5 (± 4.9) × 10^-7 mutants/viable cell/d in the lexA(S119A) strain (Figure S3). Assuming that the first and second step mutations are independent, the LexA mutant strain evolves resistance to 650 ng/ml ciprofloxacin in vitro with a rate that is approximately 10^4-fold lower than the control strain
So 10^4-fold less frequently is pretty substantial, and certainly sufficient for the authors to state ...
LexA cleavage-mediated derepression of one or more genes is essential for the efficient evolution of resistance.
Of course, in the long term evolution can afford to be inefficient, but perhaps not in the face of a sudden environmental challenge such as the introduction of antibiotics.
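Using only the two second-step rates quoted above, the gap is about 345-fold; the paper's ~10^4 figure folds in the first-step difference as well (under its stated independence assumption), and those first-step numbers are not quoted in this post. A quick check:

    control = 1.9e-4   # second-step rate, control strain (mutants/viable cell/d)
    mutant  = 5.5e-7   # second-step rate, lexA(S119A) strain
    print(f"second step alone: {control / mutant:.0f}-fold lower")   # ~345-fold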
TTFN,
WK

This message is a reply to:
 Message 132 by Percy, posted 07-27-2009 3:00 PM Percy has replied

Replies to this message:
 Message 150 by Percy, posted 07-27-2009 8:42 PM Wounded King has replied

Percy
Member
Posts: 22480
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.8


Message 150 of 315 (516876)
07-27-2009 8:42 PM
Reply to: Message 149 by Wounded King
07-27-2009 6:10 PM


Wounded King writes:
I have to go with Smooth Operator on this one. In its uncleaved form LexA does repress the activity of certain DNA repair mechanisms. When it is cleaved these mechanisms become activated. However, along with DNA repair elements the cleavage also allows the expression/activation of a set of polymerases which are highly error prone, which is probably why Smooth Operator focuses on LexA cleavage as a mutation inducing mechanism instead.
Whoa! That seemed a little weird, so I've read up on this a bit more, and I think I can make sense out of it if more details are added. Tell me if I've got this straight.
Uncleaved LexA represses the SOS response, the name given to a DNA repair system that operates after replication begins. Because uncleaved LexA is present in normal bacteria, the SOS repair response is repressed under most circumstances. It doesn't matter that this repair system is repressed when the bacteria isn't replicating, since there's nothing to repair.
Normal bacteria also possess the RecA protein, but it only plays a significant role during replication when (among other things) it cleaves the LexA repressor, thus enabling the SOS repair response, just when it is needed.
Some antibiotics work by inducing DNA damage in bacteria. Cause enough damage and the bacteria dies. But antibiotics can also somehow stimulate the RecA protein to cleave the LexA repressor, even though the bacteria is not replicating. The SOS repair response is no longer repressed, and it goes to work repairing the DNA damage caused by the antibiotic. This process of simultaneous destruction and repair produces many mutations. All mutations become more likely, including those that confer resistance.
But no matter how close I've come to grasping the details, I don't think it changes the argument I was directing at Smooth Operator, which is what I think you were saying next. I wasn't trying to get to this level of detail because Smooth Operator's argument fails for much more basic reasons. Resistance-conferring mutations are selected from out of the random mutations that occur while under stress from antibiotics and are not specifically induced.
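That distinction (more random mutations versus directed ones) is easy to caricature in code. In the sketch below every value is invented for illustration: raising the random per-site mutation rate raises the fraction of cells that happen to hit the one resistance-conferring site, and nothing in the process aims at that site; the antibiotic does the selecting afterwards.

    import random

    CELLS = 100_000

    def fraction_resistant(per_site_rate):
        # Fraction of cells that happen to mutate the single
        # resistance-conferring site; mutation is random, not targeted.
        hits = sum(random.random() < per_site_rate for _ in range(CELLS))
        return hits / CELLS

    # Invented rates: baseline versus SOS-elevated (error-prone polymerases).
    for rate in (1e-4, 1e-2):
        print(f"per-site mutation rate {rate:g}: "
              f"{fraction_resistant(rate):.5f} of cells resistant")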
--Percy
Edited by Percy, : Add minor clarification.

This message is a reply to:
 Message 149 by Wounded King, posted 07-27-2009 6:10 PM Wounded King has replied

Replies to this message:
 Message 178 by Wounded King, posted 07-29-2009 11:12 AM Percy has seen this message but not replied
