Topic: What is an ID proponent's basis of comparison? (edited)
Parasomnium, Member, Posts: 2224
Smooth Operator writes: But genetic algorithms in a simulation were designed to find the specified target. Of course not. The very reason genetic algorithms are utilized is that they are able to find unspecified solutions. If you had to specify the solution beforehand, it would be a pointless exercise, wouldn't it? "Ignorance more frequently begets confidence than does knowledge: it is those who know little, not those who know much, who so positively assert that this or that problem will never be solved by science." - Charles Darwin. Did you know that most of the time your computer is doing nothing? What if you could make it do something really useful? Like helping scientists understand diseases? Your computer could even be instrumental in finding a cure for HIV/AIDS. Wouldn't that be something? If you agree, then join World Community Grid now and download a simple, free tool that lets you and your computer do your share in helping humanity. After all, you are part of it, so why not take part in it?
Smooth Operator, Member (Idle past 5136 days), Posts: 630
quote:Well that's an obvious statement. And true at that. That is why I am not relying on probability alone. CSI is formulated so that it takes into account not only probability (complexity), but also specificity. So it's not just that you have to get a highly improbable event to happen, but an exact highly improbable event. Which is not possible. quote:I know it would. But NFL states that if you do not use that generated knowledge, your result will be the same on average. Which I think you agree with. quote:It's all here, enjoy. CiteSeerX — No free lunch theorems for optimization
quote:No, it just means evolution can't get you what you think it can. It's that simple. quote:And I explained to you that by information I do mean CSI as Dembski does. Specified information and Complex information are just parts of what CSI really is. And I also explained why chance is not able to generate that. quote:Yes, and that's called a prior assumption about the next search. quote:But you can't use all of it. Which means that some of the information you will have to learn by new trials. quote:Yes, we agree on that, so isn't it now obvious to you that you have to go through all these trials to know what you are going to do in the next one to perform better? Isn't it clear to you now that you can't invent an algorithm that will work better than any other on the first try? You have to make all those trials to gain information to be able to construct the better algorithm. That's what NFL is all about. It's actually very simple logic. quote:But evolution has no knowledge of what it is searching for, so every trial for evolution is like a first random trial.
Smooth Operator, Member (Idle past 5136 days), Posts: 630
quote:It's specified by constraints on the search landscape. The programmer makes constraints so the search algorithm will give him desired results.
Parasomnium, Member, Posts: 2224
Smooth Operator writes: It's specified by constraints on the search landscape. The programmer makes constraints so the search algorithm will give him desired results. Do you mean by this that if the researcher sets up an evolutionary process to evolve, say, designs for electronic oscillators - so the constraint would be "make me an oscillator" - we should not expect it to evolve, for example, a radio receiver, correct?
Percy, Member, Posts: 22480, From: New Hampshire, Member Rating: 4.8
Smooth Operator writes: Because ALL algorithms work like that. Even the simple calculators. Can they give you a number that was not programmed into them? No, obviously not. You've been misinformed. Here's a simple C++ program that multiplies two integers. There are no numbers programmed into it:
// Simple multiply program
#include <cstdlib>
#include <iostream>

int main() {
    int a, b;
    std::cout << "Enter two integers: ";
    std::cin >> a >> b;
    std::cout << a << " * " << b << " = " << a * b << std::endl;
    return EXIT_SUCCESS;
}

If you have access to a C++ compiler then give it a try - it works, and as you can see, no numbers are pre-programmed in. A genetic algorithm models evolution, just as meteorological programs model the weather, or NASA programs model the trajectories of spacecraft. The answers are not already programmed into these programs. What would be the point of writing a program to find an answer you already know? Numbers are not programmed into simple calculators, either. Do you really believe that somewhere in your calculator is a "2" times table for all the possible numbers you can multiply by "2" and the answers, and a "3" times table for all the possible numbers you can multiply by "3" and the answers, and so on? Calculators and computers today use ALUs (Arithmetic Logic Units) that at their heart are just gates and flops implementing complex functions like multiplication from simpler functions like full adders. (Just for completeness I'll mention that there are tables of numbers involved for the proper representation and manipulation of certain standards, like the IEEE standard for fixed and floating point values.)
But genetic algorithms in a simulation were designed to find the specified target. What would be the point of writing a program to find a solution you already know? The target of genetic algorithms is not specific. The solution is not known in advance, just as you presumably don't know in advance the product of two numbers you enter into the multiply program. Genetic algorithms are seeking a solution in the design space that satisfies specified parameters. They are a very effective method of exploring very large design spaces that couldn't be successfully explored using more random permutational techniques.
Evolution in real life has no knowledge about what it is looking for? Yes, just like the genetic algorithms that model evolution. There's a set of parameters evolution seeks to satisfy that in the aggregate are equivalent to survival to reproduce, but it has no specific goal.
Are you saying that random mutations are able to produce Shannon information? Yes that is true. But not CSI. CSI is just a concept made up by William Dembski. I can tell you how much information is in a stretch of DNA. If CSI had any reality then you could tell me how much CSI was in the same stretch, but you can't. If CSI were real then ID scientists around the globe would be making new discoveries every year based upon the CSI concept, improving and extending our knowledge of our world and universe. Advances in the development of new drugs would be carried out by scientists applying the principles of CSI instead of evolution. The next generation of scientists would be flooding to Bible colleges and the Discovery Institute so they'd have the best chance of winning the Nobel Prize. And William Dembski would himself receive the Nobel Prize, be knighted by the queen, and receive world-wide approbation. Instead Dembski is a professor at Southwestern Baptist Theological Seminary in Fort Worth, Texas, where he teaches courses in its Department of Philosophy of Religion, and CSI has no standing within the scientific community whatsoever because in truth it is just a prop invented to give a scientific-looking veneer to what at heart is just the religious concept of special creation by God.
Actually we are talking about the same thing, only in different terminology. I don't think so. Your position is that resistance-conferring mutations are a deterministic result of the presence of antibiotics. My position is that resistance-conferring mutations are the ones selected from the millions of mutations that actually occur.
Then why doesn't my article say that LexA has to be turned on for resistance to be acquired? Do you mean Inhibition of Mutation and Combating the Evolution of Antibiotic Resistance (doi:10.1371/journal.pbio.0030176)? I don't see anywhere in the paper where it refers to LexA turning on and off. It talks about LexA derepressing the SOS response mechanism when cleaved. LexA turning on and off is terminology you invented yourself. --Percy
Percy, Member, Posts: 22480, From: New Hampshire, Member Rating: 4.8
Smooth Operator writes: quote:A random search and an evolutionary algorithm. Evolution already performs a random search because mutations are random. How is your random search different from evolution?
You are wrong because you are using the wrong definition of information. I'm using Shannon information.
All of the necessary information was already there. You think the information for allele D was already there? Where was it then? The reason you can't answer that question is because allele D was caused by a random change (a mutation) to one or more nucleotides of allele A, B or C. It didn't exist before the mutation occurred. It appeared out of thin air, created by random chance.
You can't use Shannon's information and apply it to biological information because it only concerns itself with the statistical aspect of information. It still has to take into account syntax and semantics. Shannon information can be applied to anything in the real world, including DNA. In evolution the information problem is one of how to reliably communicate the specific set of messages contained in the DNA to the next generation. All the alleles of all the genes of a population form the complete message set, and each individual in the population possesses a specific subset of that message set that it needs to communicate to offspring during reproduction. Any errors in communication of this DNA message to offspring are retained by the offspring and become part of the population's collective genome, making the message set larger and increasing the amount of information. Semantics are irrelevant in information theory. --Percy Edited by Percy, : Decided not to comment about SM's next to last paragraph.
Rrhain, Member, Posts: 6351, From: San Diego, CA, USA
traderdrew responds to me:
quote: No, it doesn't. As was shown in the Dover trial, these claims of "problems" and "irreducibility" have been shown to be false. Not only are the structures reducible, we've actually found the evolutionary pathways by which they appeared. Behe would know this if he ever bothered to do a survey of the literature before making his proclamations that such things have "never been studied," but he doesn't.
quote: Sorta. That is, what I expect is for you to do your homework before making declarations about what has or has not been discovered. Evolutionary biology is a huge field and discoveries are being made all the time. If you haven't bothered to look into the publications, read the journals, done the research, any claims that there are "problems" or that something "cannot be explained" are foolish at best.
quote: Is there something special about being inside a phospholipid bilayer that makes chemistry behave differently?
quote: So? Are you saying that there is something going on inside the cell that isn't chemistry? There is something about being wrapped inside a phospholipid bilayer that changes the valence on oxygen? Rrhain Thank you for your submission to Science. Your paper was reviewed by a jury of seventh graders so that they could look for balance and to allow them to make up their own minds. We are sorry to say that they found your paper "bogus," specifically describing the section on the laboratory work "boring." We regret that we will be unable to publish your work at this time.
Rrhain, Member, Posts: 6351, From: San Diego, CA, USA
Smooth Operator responds to me:
quote: Irrelevant. Whether or not the information was copied has nothing to do with why some of the bacteria live and some of the bacteria die despite all the bacteria being descended from a single ancestor. By your claim that no new information can ever be created, all the bacteria necessarily behave in exactly the same way. If one dies, then all die. If one lives, then all live. No exceptions. But we see exceptions. Some of the bacteria live and some die. Thus, our premise must be false: New information necessarily was created.
quote: Incorrect. Because you can rerun this experiment by taking one of the K-4 bacteria and letting it be the sole ancestor to the lawn. When you re-infect the lawn with T4 phage, we find not that the lawn survives but rather that the lawn starts to die. Now, this time it's the phage that has mutated to T4h. We can keep this up, having the two continually mutate to change to the new environment. By your logic, we should eventually wind up with nothing as all that "information" gets lost. But it doesn't. The bacteria and phage keep surviving, keep changing. How can they do that if they keep "losing information"? Some simple questions: Which has more information: A or AB? Which has more information: A or B? Which has more information: A or AA? Rrhain
PaulK, Member, Posts: 17825, Member Rating: 2.2
There are many things wrong with Dembski's CSI concept. Here are three important examples.
1) It was intended to formalise the way that humans recognise design. It doesn't. By relying on purely negative argumentation - "design" is even defined negatively - it ignores the fact that we work with ideas of what designers do, and more importantly how designs are implemented. Although not a fatal flaw in the method, we should recognise that it is less than it was meant to be - and also that ID proponents who attempt to co-opt all instances of design detection as uses of CSI are wrong to do so.

2) It is impractical to use in many cases - including the very cases where ID proponents would like to use it. (Dembski has even complained about it, although for some reason blaming his opponents rather than himself.) For this reason alone CSI has little real significance to the discussion of ID versus evolution - except, perhaps, as an example of ID's failures.

3) The constraint imposed by specification is too loose. Dembski treats a specification constructed after the fact - knowing and using the outcome to produce the specification - as the same as a prediction made in advance. But this is not the case. There is still an element of "painting the targets around the bullet holes". There may be many other results which would also be found to be "designed" - and Dembski's methodology ignores this. This is actually a serious problem - for any non-trivial specification the probability calculated will be too low, and cannot be validly compared to the probability bound. Even if the method were revised to take this into account, the new method would be even less practical.

In short, CSI is overhyped, almost completely useless and still vulnerable to false positives.
Percy, Member, Posts: 22480, From: New Hampshire, Member Rating: 4.8
Thanks for the detailed critique!
Sometimes I feel that giving CSI and Dembski's other ideas this kind of serious attention dignifies them far beyond what they deserve. Dembski has draped CSI in mathematical trappings, but it is in essence just a made-up idea (and an unoriginal one at that) constructed with no testing against reality. In his books he never presents any actual research data, he makes things up (law of conservation of information, the probability bound, inclusion of semantics as a facet of information theory), he never shows how CSI can actually be calculated (anyone know what the units of CSI are?), and he never points to any successful predictions. --Percy
PaulK, Member, Posts: 17825, Member Rating: 2.2
CSI is binary. Either you are over the probability bound or you aren't.
Dembski measures information in terms of "bits" - which is an improbability measure (-log2 p(x), where p(x) is the probability). A probability of 0.5 is 1 bit, 0.25 is 2 bits, 0.125 is 3 bits, etc. Really, the basic method is not at all original. Show that all the alternative explanations are too improbable to be worth considering and accept the one you have left. (But there are huge problems in the details - for instance, if there are two possible pathways to a result, when do you consider them to be different explanations and when do you lump them together as one?) I do have The Design Inference, which explains the method, but I have to say that it is very badly written. If I had less of a mathematical background I'm not sure I could have worked it out correctly (it's unclear enough that an idea of what it SHOULD say is very helpful!). It also has the justification of his "Universal Probability Bound", too, although I find the argument to be less than entirely convincing. The bound is low enough, though, that I can't call it a fatal problem (but I doubt whether it can be considered a significant contribution either). I found the book remaindered at a major bookstore in town, which is the only reason I bought it. And it's only worth it so that I can counter arguments based on Dembski's claims.
traderdrew, Member (Idle past 5176 days), Posts: 379, From: Palm Beach, Florida
Behe would know this if he ever bothered to do a survey of the literature before making his proclamations that such things have "never been studied," but he doesn't. I see a lot of huffing and puffing from you about Behe but no real evidence so far. Believe it or not, I have read parts of books on evolution at my local B&Ns and Borders. Authors have included Jerry Coyne, Kenneth Miller and Richard Dawkins. Jerry Coyne seems to be the most rational of that bunch. I have not come across anything that convinces me to drop ID after I read the counterarguments from Behe or others. An example would be the mounting amount of evidence that seems to strongly suggest that the TTSS devolved from the flagellum. The reason I was skeptical of your statement about whoever it was who threw down the publications or journals that refute Behe's irreducible complexity arguments is that it was a friggin courtroom and I don't know who you are. I don't know how much time everyone had to examine the evidence in court or how much evidence was provided by both sides. It would greatly strengthen your argument if you could provide me a link to something that I can examine. I really have some studying up to do. So I would rather see some evidence than participate in a dragged-on, useless debate.
Wounded King, Member, Posts: 4149, From: Cincinnati, Ohio, USA
Some antibiotics work by inducing DNA damage in bacteria. Cause enough damage and the bacteria dies. But antibiotics can also somehow stimulate the RecA protein to cleave the LexA repressor, even though the bacteria is not replicating. It isn't necessarily the replication of the bacteria that is the important issue, but the replication of the DNA. The way fluoroquinolones work is to block the action of a set of enzymes which allow changes to the coiling of the circular DNA found in bacteria (these enzymes are called gyrases as they cut the DNA and allow it to gyrate). The SOS response is not specific to replication but is started by the binding of the RecA protein to single stranded DNA, which catalyses LexA auto-cleavage. Single stranded DNA normally only occurs either during replication, usually when something is blocking the normal progress of replication, or when the DNA is significantly damaged. One of the effects the cleavage of LexA has is that the increased levels of the error prone polymerases allows them to act on undamaged DNA, which does not happen during the normal course of things.
The possibility of all mutations become more likely, including those with resistance-conferring ability. This isn't necessarily the case; there may be structural elements, either sequence or higher level, in certain regions of DNA that make mutations more or less probable, or the frequency of different types of mutation may change. So the higher induced mutation rate need not be simply an increased level of the mutation rate seen in the absence of the SOS response; it may be qualitatively different as well. In terms of LexA and E. coli I don't know that there is any evidence for this. RecA recruits error prone polymerases to sites where single stranded DNA is available, but I don't see that this would favour specific mutations in genes allowing the evolution of resistance, unless they were more susceptible to damage than other regions. One interesting possibility is that since higher levels of transcriptional activation are associated with higher levels of mutation, if gyrase encoding genes were induced downstream of the SOS response we would have a mechanism which would preferentially target a subset of genes including the gyrases. Sadly, looking at literature on the targets upregulated when LexA is cleaved I see no sign of the gyrases (Khanin et al., 2006). Intriguingly however the transcription of gyrases is upregulated when gyrase activity is blocked (Menzel and Gellert, 1983; Franco and Drlica, 1989). It is also interesting to note that the different subunits A and B of the gyrase enzyme have different transcriptional responses to change in supercoiling, with A being strongly upregulated and B having no noticeable response (Neumann and Quiñones, 1997). This could be significant given that the largest proportion of resistant mutants that have been isolated for Ciprofloxacin have been in the gene coding for the GyrA subunit (Morgan-Linnel et al., 2008). TTFN, WK
Wounded King, Member, Posts: 4149, From: Cincinnati, Ohio, USA
Wikipedia has an extensive page covering the Dover case. This includes the judge's final decision and transcripts of the majority of the proceedings.
There is quite a lot of Behe testimony, I think the part that is being focused on is during the afternoon of day 12. TTFN, WK Edited by Wounded King, : No reason given.
Smooth Operator, Member (Idle past 5136 days), Posts: 630
quote:Yes, something like that. You will get a kind of oscillator that the computer optimizes for you.
Copyright 2001-2023 by EvC Forum, All Rights Reserved