Author Topic:   What is an ID proponent's basis of comparison? (edited)
Parasomnium
Member
Posts: 2224
Joined: 07-15-2003


Message 166 of 315 (516991)
07-28-2009 5:04 PM
Reply to: Message 161 by Smooth Operator
07-28-2009 3:38 PM


Smooth Operator writes:
But genetic algorithms in a simulation were designed to find the specified target.
Of course not. The very reason genetic algorithms are utilized is that they are able to find unspecified solutions. If you had to specify the solution beforehand, it would be a pointless exercise, wouldn't it?
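To make that concrete, here is a minimal sketch of the idea (an illustration with a made-up fitness function, not anyone's production code). The program specifies only a criterion to maximize - f(x) = x*sin(x) on [0, 100] - and the mutation-and-selection loop discovers where the peak is, even though that answer appears nowhere in the source:

// Minimal genetic-algorithm sketch: the fitness criterion is specified,
// the solution is not.

#include <iostream>
#include <cmath>
#include <cstdlib>
#include <ctime>
#include <vector>
#include <algorithm>

using namespace std;

double fitness(double x) { return x * sin(x); }  // a criterion, not an answer

double randRange(double lo, double hi) {
    return lo + (hi - lo) * (rand() / (double)RAND_MAX);
}

int main() {
    srand((unsigned)time(NULL));

    vector<double> pop(50);
    for (double &x : pop) x = randRange(0.0, 100.0);  // random initial population

    for (int gen = 0; gen < 200; ++gen) {
        // Selection: sort by fitness, descending; the better half survives.
        sort(pop.begin(), pop.end(),
             [](double a, double b) { return fitness(a) > fitness(b); });
        // Reproduction: the worse half is replaced by mutated copies of survivors.
        for (size_t i = pop.size() / 2; i < pop.size(); ++i) {
            double child = pop[i - pop.size() / 2] + randRange(-1.0, 1.0);
            pop[i] = min(100.0, max(0.0, child));  // keep within [0, 100]
        }
    }

    sort(pop.begin(), pop.end(),
         [](double a, double b) { return fitness(a) > fitness(b); });
    cout << "Best x found: " << pop[0] << ", f(x) = " << fitness(pop[0]) << endl;
}

Nothing in the source says where the maximum is; the loop converges on it anyway, which is the whole point of using such algorithms.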

"Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science." - Charles Darwin.
Did you know that most of the time your computer is doing nothing? What if you could make it do something really useful? Like helping scientists understand diseases? Your computer could even be instrumental in finding a cure for HIV/AIDS. Wouldn't that be something? If you agree, then join World Community Grid now and download a simple, free tool that lets you and your computer do your share in helping humanity. After all, you are part of it, so why not take part in it?

This message is a reply to:
 Message 161 by Smooth Operator, posted 07-28-2009 3:38 PM Smooth Operator has replied

Replies to this message:
 Message 168 by Smooth Operator, posted 07-29-2009 2:08 AM Parasomnium has replied

Smooth Operator
Member (Idle past 5136 days)
Posts: 630
Joined: 07-24-2009


Message 167 of 315 (517030)
07-29-2009 2:07 AM
Reply to: Message 165 by Perdition
07-28-2009 4:52 PM


quote:
Almost everything of a complex nature is improbable from the view of the end result. The fact of the matter is, any end result would have been equally improbable, so relying purely on probability is a losing game.
Well, that's an obvious statement, and true at that. That is why I am not relying on probability alone. CSI is formulated so that it takes into account not only probability (complexity) but also specificity. So it's not just that a highly improbable event has to happen, but that an exact, highly improbable event has to happen. Which is not possible.
quote:
Ok, but then you're not modelling anything in reality, so I'm not sure what your model is trying to prove. Everything retains elements from previous iterations, and so will generate information based on those previous iterations that can be used in subsequent ones.
I know it would. But NFL states that if you do not use that generated knowledge, your result will be the same on average. Which I think you agree with.
quote:
I may have missed this, but can you please lay out the math that lets you determine this? I don't mean an explanation, I want the actual mathematical formula you used to conclude this.
It's all here, enjoy.
CiteSeerX — No free lunch theorems for optimization
quote:
Yeah, ok, so the fact that I have prior information and can come up with better solutions than nature shows that nature didn't have that prior information, right? So, either the designer of nature lacked the information, didn't use the information, or nature happened on its own. So, nature is either natural or the designer is inept.
No, it just means evolution can't get you what you think it can. It's that simple.
quote:
Why do you say this? As I've said before, this is an assertion. And even Dembski disagrees with you. He agrees that chance can create specified information, or complex information. He only, for some inexplicable reason, stops at it creating CSI.
And I explained to you that by information I mean CSI, as Dembski does. Specified information and complex information are just parts of what CSI really is. And I also explained why chance is not able to generate that.
quote:
Not really. It's merely using everything I have at my disposal, which includes, and is limited to, the information I learned in the first house. The fact that the next house is similar just means the information I learned in one is at least partially applicable to the next one.
Yes, and that's called a prior assumption about the next search.
quote:
I can apply some information from cars to flight. I was just in a plane on Saturday. The wheel in front of me turned the same way to make the plane turn the same way. The pedals in front of my feet were similar to the pedals in a car. The things they controlled were different, but I already knew how to use the pedals, if not what they were controlling, so yes, you can apply some information. The amount of information you can apply is directly proportional to the similarity of the situation. The more similar, the more information you can use.
But you can't use all of it. Which means that some of the information you will have to learn through new trials.
quote:
Yes, but that prior knowledge (or information) was generated through the initial random process and refined through the next, less random, process. It's how we build information in real life.
Yes, we agree on that. So isn't it now obvious to you that you have to go through all these trials to know what you are going to do in the next one to perform better? Isn't it clear to you now that you can't invent an algorithm that will work better than any other on the first try? You have to make all those trials to gain the information needed to construct the better algorithm. That's what NFL is all about. It's actually very simple logic.
quote:
But in evolution, we have many, many trials. Each time a new organism is created (born, divided, etc.) we have a trial, because the next generation is never the same as the previous one. There is always something different.
But evolution has no knowledge of what it is searching for so every trial for evolution is like a first random trial.
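A toy sketch may help pin down what the NFL theorems do and do not claim (invented landscapes and searchers, not code from the Wolpert and Macready paper). On an uncorrelated random landscape a hill-climber has no structure to exploit and on average does no better than blind sampling; on a structured landscape, where earlier trials carry information about later ones, the climber pulls ahead:

// Toy illustration of the No Free Lunch intuition.

#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>

using namespace std;

const int L = 15;              // bits per candidate
const int SPACE = 1 << L;      // 32,768 candidates
vector<double> noisy(SPACE);   // uncorrelated landscape: pure chance

int bits(int s) {              // structured landscape: count of 1-bits
    int n = 0;
    while (s) { n += s & 1; s >>= 1; }
    return n;
}

double value(int s, bool structured) {
    return structured ? (double)bits(s) / L : noisy[s];
}

double search(bool structured, bool climb, int evals) {
    int x = rand() % SPACE;
    double best = value(x, structured);
    for (int i = 1; i < evals; ++i) {
        int next = climb ? x ^ (1 << (rand() % L))  // flip one bit: a local step
                         : rand() % SPACE;          // blind jump
        if (value(next, structured) >= value(x, structured)) x = next;
        if (value(x, structured) > best) best = value(x, structured);
    }
    return best;
}

int main() {
    srand((unsigned)time(NULL));
    for (int i = 0; i < SPACE; ++i) noisy[i] = rand() / (double)RAND_MAX;

    double sum[4] = {0, 0, 0, 0};
    for (int t = 0; t < 500; ++t) {
        sum[0] += search(true,  true,  100);  // structured, climber
        sum[1] += search(true,  false, 100);  // structured, blind
        sum[2] += search(false, true,  100);  // uncorrelated, climber
        sum[3] += search(false, false, 100);  // uncorrelated, blind
    }
    cout << "structured:   climber " << sum[0]/500 << " vs blind " << sum[1]/500 << endl;
    cout << "uncorrelated: climber " << sum[2]/500 << " vs blind " << sum[3]/500 << endl;
}

On the uncorrelated landscape the two averages come out essentially equal, which is the NFL result; on the structured landscape the climber wins, because there past trials really do inform future ones.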

This message is a reply to:
 Message 165 by Perdition, posted 07-28-2009 4:52 PM Perdition has not replied

Smooth Operator
Member (Idle past 5136 days)
Posts: 630
Joined: 07-24-2009


Message 168 of 315 (517031)
07-29-2009 2:08 AM
Reply to: Message 166 by Parasomnium
07-28-2009 5:04 PM


quote:
Of course not. The very reason genetic algorithms are utilized is that they are able to find unspecified solutions. If you had to specify the solution beforehand, it would be a pointless exercise, wouldn't it?
It's specified by constraints on the search landscape. The programmer makes constraints so the search algorithm will give him the desired results.

This message is a reply to:
 Message 166 by Parasomnium, posted 07-28-2009 5:04 PM Parasomnium has replied

Replies to this message:
 Message 169 by Parasomnium, posted 07-29-2009 3:42 AM Smooth Operator has replied

Parasomnium
Member
Posts: 2224
Joined: 07-15-2003


Message 169 of 315 (517036)
07-29-2009 3:42 AM
Reply to: Message 168 by Smooth Operator
07-29-2009 2:08 AM


Constraints
Smooth Operator writes:
It's specified by constraints on the search landscape. The programmer makes constraints so the search algorithm will give him the desired results.
Do you mean by this that if the researcher sets up an evolutionary process to evolve, say, designs for electronic oscillators - so the constraint would be "make me an oscillator" - we should not expect it to evolve, for example, a radio receiver, correct?

"Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science." - Charles Darwin.
Did you know that most of the time your computer is doing nothing? What if you could make it do something really useful? Like helping scientists understand diseases? Your computer could even be instrumental in finding a cure for HIV/AIDS. Wouldn't that be something? If you agree, then join World Community Grid now and download a simple, free tool that lets you and your computer do your share in helping humanity. After all, you are part of it, so why not take part in it?

This message is a reply to:
 Message 168 by Smooth Operator, posted 07-29-2009 2:08 AM Smooth Operator has replied

Replies to this message:
 Message 180 by Smooth Operator, posted 07-30-2009 4:04 AM Parasomnium has replied

Percy
Member
Posts: 22480
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.8


Message 170 of 315 (517038)
07-29-2009 4:00 AM
Reply to: Message 161 by Smooth Operator
07-28-2009 3:38 PM


Smooth Operator writes:
Because ALL algorithms work like that. Even the simple calculators. Can they give you a number that was not programmed into them? No, obviously not.
You've been misinformed. Here's a simple C++ program that multiplies two integers. There are no numbers programmed into it:
// Simple multiply program

#include <iostream>   // cout, cin
#include <string>     // string
#include <cstdlib>    // strtol

using namespace std;

int main(int argc, char** argv) {

    string strA, strB;
    int intA, intB;

    cout << "Multiply two numbers" << endl;

    // Read each operand as text, then convert it to an integer.
    cout << "Enter number 1: ";
    cin >> strA;
    intA = strtol(strA.c_str(), NULL, 10);

    cout << "Enter number 2: ";
    cin >> strB;
    intB = strtol(strB.c_str(), NULL, 10);

    // The product is computed, not looked up.
    cout << "Result: " << strA << "*" << strB << " = " << intA*intB << endl;
}
If you have access to a C++ compiler then give it a try - it works, and as you can see, no numbers are pre-programmed in.
A genetic algorithm models evolution, just as meteorological programs model the weather, or NASA programs model the trajectories of spacecraft. The answers are not already programmed into these programs. What would be the point of writing a program to find an answer you already know?
Numbers are not programmed into simple calculators, either. Do you really believe that somewhere in your calculator is a "2" times table for all the possible numbers you can multiply by "2" and the answers, and a "3" times table for all the possible numbers you can multiply by "3" and the answers, and so on? Calculators and computers today use ALUs (Arithmetic Logic Units) that at their heart are just gates and flops implementing complex functions like multiplication from simpler functions like full adders. (Just for completeness I'll mention that there are tables of numbers involved for the proper representation and manipulation of certain standards, like the IEEE standard for fixed and floating point values.)
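Here is a sketch of that shift-and-add idea (an illustration of the principle, not an actual ALU netlist): addition built from nothing but full-adder logic - XOR for the sum bits, AND for the carries - and multiplication built from that addition plus shifts. No table of products appears anywhere:

// Multiplication with no times tables: full-adder logic plus shift-and-add.

#include <iostream>

using namespace std;

unsigned add(unsigned a, unsigned b) {      // ripple-carry style adder
    while (b != 0) {
        unsigned carry = (a & b) << 1;      // carry bits, moved one place left
        a = a ^ b;                          // sum bits, ignoring carries
        b = carry;
    }
    return a;
}

unsigned multiply(unsigned a, unsigned b) { // shift-and-add multiplier
    unsigned product = 0;
    while (b != 0) {
        if (b & 1) product = add(product, a);  // add this shifted partial product
        a <<= 1;                               // next bit weighs twice as much
        b >>= 1;
    }
    return product;
}

int main() {
    cout << "13 * 11 = " << multiply(13, 11) << endl;  // prints 143
}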
But genetic algorithms in a simulation were designed to find the specified target.
What would be the point of writing a program to find a solution you already know? The target of genetic algorithms is not specific. The solution is not known in advance, just as you presumably don't know in advance the product of two numbers you enter into the multiply program. Genetic algorithms seek a solution in the design space that satisfies specified parameters. They are a very effective method of exploring very large design spaces that couldn't be successfully explored using more random permutational techniques.
Evolution in real life has no knowledge about what it is looking for?
Yes, just like the genetic algorithms that model evolution. There's a set of parameters evolution seeks to satisfy that in the aggregate are equivalent to survival to reproduce, but it has no specific goal.
Are you saying that random mutations are able to produce Shannon information? Yes that is true. But not CSI.
CSI is just a concept made up by William Dembski. I can tell you how much information is in a stretch of DNA. If CSI had any reality then you could tell me how much CSI was in the same stretch, but you can't.
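Here's what "how much information is in a stretch of DNA" means in Shannon's terms, as a minimal sketch (the sequence is made up): each base is drawn from a four-symbol alphabet, so a position carries at most -log2(1/4) = 2 bits, and the actual figure comes from the observed base frequencies:

// Shannon information of a DNA stretch, from observed base frequencies.

#include <iostream>
#include <string>
#include <map>
#include <cmath>

using namespace std;

int main() {
    string dna = "ATGGCATTCGA";  // any stretch of DNA
    map<char, int> counts;
    for (char base : dna) counts[base]++;

    double entropy = 0.0;  // bits per base
    for (auto &kv : counts) {
        double p = (double)kv.second / dna.size();
        entropy -= p * log2(p);
    }
    cout << entropy << " bits per base, "
         << entropy * dna.size() << " bits in the stretch" << endl;
}

For this stretch that comes to about 1.98 bits per base. No comparable calculation exists for CSI, which is the point.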
If CSI were real then ID scientists around the globe would be making new discoveries every year based upon the CSI concept, improving and extending our knowledge of our world and universe. Advances in the development of new drugs would be carried out by scientists applying the principles of CSI instead of evolution. The next generation of scientists would be flooding to Bible colleges and the Discovery Institute so they'd have the best chance of winning the Nobel Prize. And William Dembski would himself receive the Nobel Prize, be knighted by the queen, and receive world-wide approbation.
Instead Dembski is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, where he teaches courses in its Department of Philosophy of Religion, and CSI has no standing within the scientific community whatsoever because in truth it is just a prop invented to give a scientific-looking veneer to what at heart is just the religious concept of special creation by God.
Actually we are talking about the same thing, only in different terminology.
I don't think so. Your position is that resistance-conferring mutations are a deterministic result of the presence of antibiotics. My position is that resistance-conferring mutations are the ones selected from the millions of mutations that actually occur.
Then why doesn't my article say that LexA has to be turned on for resistance to be acquired?
Do you mean Inhibition of Mutation and Combating the Evolution of Antibiotic Resistance (doi:10.1371/journal.pbio.0030176)? I don't see anywhere in the paper where it refers to LexA turning on and off. It talks about LexA derepressing the SOS response mechanism when cleaved. LexA turning on and off is terminology you invented yourself.
--Percy

This message is a reply to:
 Message 161 by Smooth Operator, posted 07-28-2009 3:38 PM Smooth Operator has replied

Replies to this message:
 Message 174 by PaulK, posted 07-29-2009 7:50 AM Percy has replied
 Message 181 by Smooth Operator, posted 07-30-2009 4:33 AM Percy has replied

Percy
Member
Posts: 22480
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.8


Message 171 of 315 (517041)
07-29-2009 4:33 AM
Reply to: Message 162 by Smooth Operator
07-28-2009 3:45 PM


Smooth Operator writes:
quote:
And what two algorithms are you comparing in the bacteria?
A random search and an evolutionary algorithm.
Evolution already performs a random search because mutations are random. How is your random search different from evolution?
You are wrong because you are using the wrong definition of information.
I'm using Shannon information.
All of the necessary information was already there.
You think the information for allele D was already there? Where was it then?
The reason you can't answer that question is because allele D was caused by a random change (a mutation) to one or more nucleotides of allele A, B or C. It didn't exist before the mutation occurred. It appeared out of thin air, created by random chance.
You can't use Shannon's information and apply it to biological information, because it only concerns itself with the statistical aspect of information. It still has to take into account syntax and semantics.
Shannon information can be applied to anything in the real world, including DNA. In evolution the information problem is one of how to reliably communicate the specific set of messages contained in the DNA to the next generation. All the alleles of all the genes of a population form the complete message set, and each individual in the population possesses a specific subset of that message set that it needs to communicate to offspring during reproduction. Any errors in communication of this DNA message to offspring are retained by the offspring and become part of the population's collective genome, making the message set larger and increasing the amount of information.
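A toy numerical version of that last point (the frequencies are made up for illustration): treat the population's alleles as a Shannon message set. When a mutation adds a new allele, the entropy of the set - the average information per transmitted allele - goes up:

// Entropy of an allele message set, before and after a mutation adds allele D.

#include <iostream>
#include <vector>
#include <cmath>

using namespace std;

double entropyBits(const vector<double> &freqs) {
    double h = 0.0;
    for (double p : freqs) if (p > 0) h -= p * log2(p);
    return h;
}

int main() {
    vector<double> before = {0.50, 0.30, 0.20};        // alleles A, B, C
    vector<double> after  = {0.45, 0.30, 0.20, 0.05};  // mutation adds allele D
    cout << "before: " << entropyBits(before) << " bits per allele" << endl;  // ~1.49
    cout << "after:  " << entropyBits(after)  << " bits per allele" << endl;  // ~1.72
}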
Semantics are irrelevant in information theory.
--Percy
Edited by Percy, : Decided not to comment about SM's next to last paragraph.

This message is a reply to:
 Message 162 by Smooth Operator, posted 07-28-2009 3:45 PM Smooth Operator has replied

Replies to this message:
 Message 182 by Smooth Operator, posted 07-30-2009 4:39 AM Percy has replied

Rrhain
Member
Posts: 6351
From: San Diego, CA, USA
Joined: 05-03-2003


Message 172 of 315 (517043)
07-29-2009 6:17 AM
Reply to: Message 157 by traderdrew
07-28-2009 9:29 AM


traderdrew responds to me:
quote:
I have seen the pilus model for evolving a flagellum but it conceals at least one "major" problem plus other problems described by Michael Dembski.
No, it doesn't. As was shown in the Dover trial, these claims of "problems" and "irreducibility" have been shown to be false. Not only are the structures reducible, we've actually found the evolutionary pathways by which they appeared.
Behe would know this if he ever bothered to do a survey of the literature before making his proclamations that such things have "never been studied," but he doesn't.
quote:
So you expect me to go through the transcripts of the Dover trial in order to refute or agree with you?
Sorta. That is, what I expect is for you to do your homework before making declarations about what has or has not been discovered. Evolutionary biology is a huge field and discoveries are being made all the time. If you haven't bothered to look into the publications, read the journals, done the research, any claims that there are "problems" or that something "cannot be explained" are foolish at best.
quote:
So what you are saying from your chemistry lesson is that biochemistry works with what is there.
Is there something special about being inside a phospholipid bilayer that makes chemistry behave differently?
quote:
But you see, with protein bonding, oil and hydrogen bonds have to be arranged in sequence with their counterparts in the other proteins, as well as having the correct shapes.
So? Are you saying that there is something going on inside the cell that isn't chemistry? There is something about being wrapped inside a phospholipid bilayer that changes the valence on oxygen?

Rrhain

Thank you for your submission to Science. Your paper was reviewed by a jury of seventh graders so that they could look for balance and to allow them to make up their own minds. We are sorry to say that they found your paper "bogus," specifically describing the section on the laboratory work "boring." We regret that we will be unable to publish your work at this time.

This message is a reply to:
 Message 157 by traderdrew, posted 07-28-2009 9:29 AM traderdrew has replied

Replies to this message:
 Message 177 by traderdrew, posted 07-29-2009 10:59 AM Rrhain has replied

Rrhain
Member
Posts: 6351
From: San Diego, CA, USA
Joined: 05-03-2003


Message 173 of 315 (517045)
07-29-2009 6:27 AM
Reply to: Message 163 by Smooth Operator
07-28-2009 3:48 PM


Smooth Operator responds to me:
quote:
No, becasue that information was copied.
Irrelevant. Whether or not the information was copied has nothing to do with why some of the bacteria live and some of the bacteria die despite all the bacteria being descended from a single ancestor.
By your claim that no new information can ever be created, all the bacteria necessarily behave in exactly the same way. If one dies, then all die. If one lives, then all live. No exceptions.
But we see exceptions. Some of the bacteria live and some die.
Thus, our premise must be false: New information necessarily was created.
quote:
Resistance is acquired by loss of information.
Incorrect. Because you can rerun this experiment by taking one of the K-4 bacteria and letting it be the sole ancestor to the lawn. When you re-infect the lawn with T4 phage, we find not that the lawn survives but rather that the lawn starts to die.
Now, this time it's the phage that has mutated to T4h.
We can keep this up, having the two continually mutate to adapt to the new environment. By your logic, we should eventually wind up with nothing as all that "information" gets lost. But it doesn't. The bacteria and phage keep surviving, keep changing.
How can they do that if they keep "losing information"?
Some simple questions:
Which has more information: A or AB?
Which has more information: A or B?
Which has more information: A or AA?

Rrhain

Thank you for your submission to Science. Your paper was reviewed by a jury of seventh graders so that they could look for balance and to allow them to make up their own minds. We are sorry to say that they found your paper "bogus," specifically describing the section on the laboratory work "boring." We regret that we will be unable to publish your work at this time.

This message is a reply to:
 Message 163 by Smooth Operator, posted 07-28-2009 3:48 PM Smooth Operator has replied

Replies to this message:
 Message 183 by Smooth Operator, posted 07-30-2009 4:44 AM Rrhain has replied

PaulK
Member
Posts: 17825
Joined: 01-10-2003
Member Rating: 2.2


Message 174 of 315 (517049)
07-29-2009 7:50 AM
Reply to: Message 170 by Percy
07-29-2009 4:00 AM


Three failures of CSI
There are many things wrong with Dembski's CSI concept. Here are three important examples.
1) It was intended to formalise the way that humans recognise design. It doesn't. By relying on purely negative argumentation - "design" is even defined negatively - it ignores the fact that we work with ideas of what designers do, and more importantly how designs are implemented. Although not a fatal flaw in the method, we should recognise that it is less than it was meant to be - and also that ID proponents who attempt to co-opt all instances of design detection as uses of CSI are wrong to do so.
2) It is impractical to use in many cases - including the very cases where ID proponents would like to use it. (Dembski has even complained about it, although for some reason blaming his opponents rather than himself.) For this reason alone CSI has little real significance to the discussion of ID versus evolution - except, perhaps, as an example of ID's failures.
3) The constraint imposed by specification is too loose. Dembski treats a specification constructed after the fact - knowing and using the outcome to produce the specification - as the same as a prediction made in advance. But this is not the case. There is still an element of "painting the targets around the bullet holes". There may be many other results which would also be found to be "designed" - and Dembski's methodology ignores this.
This is actually a serious problem - for any non-trivial specification the probability calculated will be too low, and cannot be validly compared to the probability bound. Even if the method were revised to take this into account the new method would be even less practical.
In short CSI is overhyped, almost completely useless and still vulnerable to false positives.

This message is a reply to:
 Message 170 by Percy, posted 07-29-2009 4:00 AM Percy has replied

Replies to this message:
 Message 175 by Percy, posted 07-29-2009 9:10 AM PaulK has replied
 Message 184 by Smooth Operator, posted 07-30-2009 4:49 AM PaulK has replied

Percy
Member
Posts: 22480
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.8


Message 175 of 315 (517054)
07-29-2009 9:10 AM
Reply to: Message 174 by PaulK
07-29-2009 7:50 AM


Re: Three failures of CSI
Thanks for the detailed critique!
Sometimes I feel that giving CSI and Dembski's other ideas this kind of serious attention dignifies them far beyond what they deserve. Dembski has draped CSI in mathematical trappings, but it is in essence just a made-up idea (and an unoriginal one at that) constructed with no testing against reality. In his books he never presents any actual research data, he makes things up (the law of conservation of information, the probability bound, the inclusion of semantics as a facet of information theory), he never shows how CSI can actually be calculated (anyone know what the units of CSI are?), and he never points to any successful predictions.
--Percy

This message is a reply to:
 Message 174 by PaulK, posted 07-29-2009 7:50 AM PaulK has replied

Replies to this message:
 Message 176 by PaulK, posted 07-29-2009 9:36 AM Percy has seen this message but not replied

PaulK
Member
Posts: 17825
Joined: 01-10-2003
Member Rating: 2.2


Message 176 of 315 (517057)
07-29-2009 9:36 AM
Reply to: Message 175 by Percy
07-29-2009 9:10 AM


Re: Three failures of CSI
CSI is binary. Either you are over the probability bound or you aren't.
Dembski measures information in terms of "bits" - which is an improbability measure (-log2 p(x) where p(x) is the probability).
A probability of 0.5 is 1 bit, 0.25 is 2 bits, 0.125 is 3 bits etc.
Really, the basic method is not at all original: show that all the alternative explanations are too improbable to be worth considering, and accept the one you have left. (But there are huge problems in the details - for instance, if there are two possible pathways to a result, when do you consider them to be different explanations and when do you lump them together as one?)
I do have The Design Inference, which explains the method, but I have to say that it is very badly written. If I had less of a mathematical background I'm not sure I could have worked it out correctly (it's unclear enough that an idea of what it SHOULD say is very helpful!). It also has the justification of his "Universal Probability Bound", although I find the argument to be less than entirely convincing. The bound is low enough, though, that I can't call it a fatal problem (but I doubt whether it can be considered a significant contribution either).
I found the book remaindered at a major bookstore in town, which is the only reason I bought it. And it's only worth it so that I can counter arguments based on Dembski's claims.

This message is a reply to:
 Message 175 by Percy, posted 07-29-2009 9:10 AM Percy has seen this message but not replied

traderdrew
Member (Idle past 5176 days)
Posts: 379
From: Palm Beach, Florida
Joined: 04-27-2009


Message 177 of 315 (517068)
07-29-2009 10:59 AM
Reply to: Message 172 by Rrhain
07-29-2009 6:17 AM


Behe would know this if he ever bothered to do a survey of the literature before making his proclamations that such things have "never been studied," but he doesn't.
I see a lot of huffing and puffing from you about Behe but no real evidence so far. Believe it or not, I have read parts of books on evolution at my local B&Ns and Borders. Authors have included Jerry Coyne, Kenneth Miller and Richard Dawkins. Jerry Coyne seems to be the most rational of that bunch. I have not come across anything that convinces me to drop ID after I read the counterarguments from Behe or others. An example would be the mounting evidence that seems to strongly suggest that the TTSS devolved from the flagellum.
The reason I was skeptical of your statement, about whoever it was who threw down the publications or journals that refute Behe's irreducible complexity arguments, is that it was a friggin courtroom and I don't know who you are. I don't know how much time everyone had to examine the evidence in court, or how much evidence was provided by both sides. It would greatly strengthen your argument if you could provide me a link to something that I can examine. I really have some studying up to do. So I would rather see some evidence than participate in a dragged-out useless debate.

This message is a reply to:
 Message 172 by Rrhain, posted 07-29-2009 6:17 AM Rrhain has replied

Replies to this message:
 Message 179 by Wounded King, posted 07-29-2009 11:24 AM traderdrew has not replied
 Message 203 by Rrhain, posted 08-02-2009 4:43 AM traderdrew has replied

Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 178 of 315 (517071)
07-29-2009 11:12 AM
Reply to: Message 150 by Percy
07-27-2009 8:42 PM


Some antibiotics work by inducing DNA damage in bacteria. Cause enough damage and the bacterium dies. But antibiotics can also somehow stimulate the RecA protein to cleave the LexA repressor, even though the bacterium is not replicating.
It isn't the replication of the bacteria necessarily that is the important issue but of the DNA. The way fluoroquinolones work is to block the action of a set of enzymes which allow changes to the coiling of the circular DNA found in bacteria (these enzymes are called gyrases as they cut the DNA and allow it to gyrate).
The SOS response is not specific to replication but is started by the binding of the RecA protein to single stranded DNA, which catalyses LexA auto-cleavage. Single stranded DNA normally only occurs either during replication, usually when something is blocking the normal progress of replication, or when the DNA is significantly damaged. One of the effects the cleavage of LexA has is that the increased levels of the error prone polymerases allows them to act on undamaged DNA which does not happen during the normal course of things.
The possibility of all mutations becomes more likely, including those with resistance-conferring ability.
This isn't necessarily the case: there may be structural elements, either in the sequence or at a higher level, in certain regions of DNA that make mutations more or less probable, or the frequency of different types of mutation may change. So the higher induced mutation rate need not be simply an increased level of the mutation rate seen in the absence of the SOS response; it may be qualitatively different as well.
In terms of LexA and E. coli I don't know that there is any evidence for this. RecA recruits error-prone polymerases to sites where single-stranded DNA is available, but I don't see that this would favour specific mutations in genes allowing the evolution of resistance, unless they were more susceptible to damage than other regions. One interesting possibility is that, since higher levels of transcriptional activation are associated with higher levels of mutation, if gyrase-encoding genes were induced downstream of the SOS response we would have a mechanism that would preferentially target a subset of genes, including the gyrases.
Sadly, looking at literature on the targets upregulated when LexA is cleaved I see no sign of the gyrases (Khanin et al., 2006). Intriguingly however the transcription of gyrases is upregulated when gyrase activity is blocked (Menzel and Gellert, 1983; Franco and Drlica, 1989).
It is also interesting to note that the different subunits A and B of the gyrase enzyme have different transcriptional responses to changes in supercoiling, with A being strongly upregulated and B having no noticeable response (Neumann and Quiñones, 1997). This could be significant given that the largest proportion of resistant mutants that have been isolated for Ciprofloxacin have been in the gene coding for the GyrA subunit (Morgan-Linnel et al., 2008).
TTFN,
WK

This message is a reply to:
 Message 150 by Percy, posted 07-27-2009 8:42 PM Percy has seen this message but not replied

Wounded King
Member
Posts: 4149
From: Cincinnati, Ohio, USA
Joined: 04-09-2003


Message 179 of 315 (517073)
07-29-2009 11:24 AM
Reply to: Message 177 by traderdrew
07-29-2009 10:59 AM


Wikipedia has an extensive page covering the Dover case. This includes the judge's final decision and transcripts of the majority of the proceedings.
There is quite a lot of Behe testimony, I think the part that is being focused on is during the afternoon of day 12.
TTFN,
WK
Edited by Wounded King, : No reason given.

This message is a reply to:
 Message 177 by traderdrew, posted 07-29-2009 10:59 AM traderdrew has not replied

Smooth Operator
Member (Idle past 5136 days)
Posts: 630
Joined: 07-24-2009


Message 180 of 315 (517175)
07-30-2009 4:04 AM
Reply to: Message 169 by Parasomnium
07-29-2009 3:42 AM


Re: Constraints
quote:
Do you mean by this that if the researcher sets up an evolutionary process to evolve, say, designs for electronic oscillators - so the constraint would be "make me an oscillator" - we should not expect it to evolve, for example, a radio receiver, correct?
Yes, something like that. You will get a kind of oscillator that the computer optimizes for you.

This message is a reply to:
 Message 169 by Parasomnium, posted 07-29-2009 3:42 AM Parasomnium has replied

Replies to this message:
 Message 186 by Parasomnium, posted 07-30-2009 6:55 AM Smooth Operator has replied
