Topic: What is an ID proponent's basis of comparison?
Richard Townsend Member (Idle past 4763 days) Posts: 103 From: London, England
Smooth Operator writes:
It is computable, the algorithm just can't generate it. They can only process it.

If no algorithm can generate it, then it's non-computable. That's the definition of non-computable. But the main problem is that your claim creates problems for the very existence of CSI. We don't know whether humans run algorithms in their brains (most AI researchers believe so, but some thinkers, such as Roger Penrose, disagree). This means there is NOTHING we can definitely point to as CSI. Nothing created by humans can be called CSI. Nothing created by any 'intelligent designer' can be called CSI - unless you can show they did it non-algorithmically. Tracking back, I believe the mistake in your reasoning is the claim that algorithms can't create CSI. They can.
Smooth Operator writes:
It applies to all algorithms. It has been shown to be true. I have already posted a link here about the NFL theorem that says that algorithms do not produce new information. It really gets on my nerves to have to do it again and again.

The NFL theorems, curiously, do not say this, no matter how many times you claim that they do. Here's a quote from the Wolpert paper:
A number of no free lunch (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class.
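The theorem's actual content is easy to check by brute force on a tiny search space. The sketch below is editor-added and all names in it are illustrative: it averages the cost of two different fixed search orders over every possible function from a four-point domain to {0, 1}. The averages come out identical, which is what NFL claims; it says nothing about algorithms "producing no information."

```python
# Finite check of the NFL intuition: averaged over EVERY function
# f: {0,1,2,3} -> {0,1}, any fixed visiting order needs the same mean
# number of evaluations to find a point where f(x) == 0.
from itertools import product

def evals_to_hit(order, f):
    """Evaluations a fixed visiting order spends before finding f(x) == 0
    (a complete miss is charged one extra evaluation)."""
    for i, x in enumerate(order, start=1):
        if f[x] == 0:
            return i
    return len(order) + 1

def mean_cost(order):
    """Average cost of `order` over all 16 functions {0,1,2,3} -> {0,1}."""
    costs = [evals_to_hit(order, f) for f in product([0, 1], repeat=4)]
    return sum(costs) / len(costs)

print(mean_cost((0, 1, 2, 3)))   # ascending order: 1.9375
print(mean_cost((3, 1, 0, 2)))   # arbitrary order: 1.9375
```

Any permutation gives the same mean, because averaging over all functions washes out any structure an order could exploit; gains on one subset of functions are exactly offset by losses on another.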
Smooth Operator Member (Idle past 5145 days) Posts: 630
quote:
Nobody ever observed it, and there is no such capacity. So it's a fact.

quote:
NO YOU CAN'T! That's the point! Did the other guy lose the keys in the EXACT same place in his house as you did in yours? Are your houses identical? No, of course not!

quote:
Does that count for other houses with lost keys? No, obviously it doesn't.

quote:
And will they be there 100%? No, they won't!

quote:
That's called prior information.

quote:
Which doesn't help you in the other case at all.

quote:
Well that's the point! IF IT IS SIMILAR! But what if it's not!? Then you will fail! And that means no algorithm works better than any other, or a random search, without prior information.
Smooth Operator Member (Idle past 5145 days) Posts: 630
quote:
No algorithm can generate anything without prior information.

quote:
No, you misunderstood what I was saying. I was saying that it is computable, but you won't wind up with more information than you input at the start. No algorithm can do that.

quote:
I disagree also, since we can think. Our mind is not material like a computer.

quote:
Yes it can, since it's more than 400 bits. The whole of the observable universe could not have created more than 400 bits since its origin. If we were just a part of this material universe, with no non-material mind, we wouldn't be able to produce more than 500 bits. But we are!

quote:
No, they can't. Please show where it says they can.

quote:
That is but one of hundreds of lines in his paper. Which just proves my point that one algorithm will work well on one landscape, and not so well on another on average, given that it has prior knowledge about the problem.
DevilsAdvocate Member (Idle past 3132 days) Posts: 1548
Traderdrew writes:
It seems to me that you should prove to me that CSI is not suitable for detecting design.

Argument from ignorance / negative proof fallacy. Why should I provide evidence for something you are trying to prove? First you need to adequately define CSI...
TJ writes:
Or what you can do is prove to us that new amounts of CSI containing at least 400 bits can be produced by natural causes.

I assume by 400 bits you are talking about DNA? Please elaborate and educate the CSI-illiterate masses.
TJ writes:
Me writes:
A natural arch formed by water and wind erosion can have a specific function and use by animals and humans. A cave can as well. Is there an intelligent agent behind the formation of these natural phenomena? There is nothing magical or special about these natural phenomena. We attribute meaning to them precisely because they do seem to conform to our needs and desires. This is in a way a form of anthropocentrism.

Yes, but it wasn't necessarily designed by an intelligence. It was designed by the forces within chaos. The cave doesn't produce or communicate any CSI.

Again, define CSI. If I showed you two caves that were identical, and one was human-made and one created by the forces of nature, would this not negate your CSI argument?
TJ writes:
The termites build the mound with cooperation. The mound doesn't need to have any particular Euclidean shape. Different mounds have different shapes. They don't need to conform to particular mathematical models.

Neither does the morphology and physiology of biological life intrinsically 'need' to fit a certain standard. Biological life, much like other naturally occurring phenomena, is shaped by the environmental conditions in which it exists.
TJ writes:
Forces such as heavy rain can affect the shapes of the mounds.

Forces such as electromagnetic radiation and chemical agents can affect the composition of the genome and ultimately the shapes (morphology) of organisms.
TJ writes:
With sentences like these I get the impression that you are trying to make us look bad rather than attempting to investigate what CSI is yourself.

You can't even adequately define CSI, how can you expect anyone else to understand WTF you are talking about?? I apologize; I just get frustrated when people throw around terms without defining them or understanding them themselves. When you deal with people like SO and the like, sometimes we mistakenly throw you under the same proverbial bus. It is a human vice which I fall prone to as well.
TJ writes:
Me writes:
So what is complex and not complex in nature?

I'm not sure if I can draw the lines there. I suspect complexity is represented in natural phenomena with different degrees of fractal dimension.

Agreed, but if you cannot make the distinction between chaos and non-chaos, are we sure there really is a substantial difference between the two?
TJ writes:
You are making me think.

That is the whole purpose of my posting on this board, not just for you but for all of us.
TJ writes:
Chaotic things are natural phenomena that defy traditional linear measurements.

Hmm, did you just make up that definition, or where did you draw it from? This is unlike any definition of the word 'chaos' I have seen. What do you mean by linear measurements? Can we not predict to a degree the amount of erosion that will occur in a river on a yearly basis? Is that a 'chaotic thing'? So what specifically falls into your category of 'chaotic things'?

For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring. - Dr. Carl Sagan
Perdition Member (Idle past 3269 days) Posts: 1593 From: Wisconsin
Smooth Operator writes:
NO YOU CAN'T! That's the point! Did the other guy lose the keys in the EXACT same place in his house as you did in yours? Are your houses identical? No, of course not!

How do you know? Is it possible that, if you lost your keys on your nightstand, and a lot of other people put their keys on the nightstand, then maybe this other person put their keys on their nightstand? Even if they didn't, the knowledge you gained from your first search can help you in your second: no keys on the ceiling, no keys in places too small for keys to fit. If you're truly operating from no prior knowledge in the first case, you would have to consider those possibilities the first time, but could rule them out the second.

Smooth Operator writes:
Does that count for other houses with lost keys? No, obviously it doesn't.

It does if you assume the conditions are similar, and until you find they aren't, this is a good assumption to make.

Smooth Operator writes:
And will they be there 100%? No, they won't!

They don't need to be there 100% of the time, they just need to be there more often than not.

Smooth Operator writes:
That's called prior information.

No, it's not information, it's an assumption. I generally assume things are similar to previous experiences until I am shown a place where they differ.

Smooth Operator writes:
Which doesn't help you in the other case at all.

It obviously does.

Smooth Operator writes:
Well that's the point! IF IT IS SIMILAR! But what if it's not!? Then you will fail! And that means no algorithm works better than any other, or a random search, without prior information.

You're assuming it's different. Why? If it's similar, it will help; if it's not, it will generate new information for the next time. In fact, this is how all information we have is generated: by taking one experience and applying it to the next. The first experience is almost always random (just watch a kid), and patterns emerge out of it as the kid learns.
Richard Townsend Member (Idle past 4763 days) Posts: 103 From: London, England
Smooth Operator writes:
This basically means that even the evolutionary algorithms, which have no knowledge about what they are looking for in advance, will not be any better than random chance. And since random chance doesn't create new information, neither does an evolutionary algorithm.

Thanks for explaining your thinking on this. I think you are misinterpreting the theorems. The theorems apply when considering a search across the space of ALL possible cost functions. They don't rule out more effective algorithms across narrower scopes than this. See this:
The No Free Lunch theorem has had considerable impact in the field of optimization research. A terse definition of this theorem is that no algorithm can outperform any other algorithm when performance is amortized over all functions. Once that theorem has been proven, the next logical step is to characterize how effective optimization can be under reasonable restrictions. We operationally define a technique for approaching the question of what makes a function searchable in practice. This technique involves defining a scalar field over the space of all functions that enables one to make decisive claims concerning the performance of an associated algorithm. We then demonstrate the effectiveness of this technique by giving such a field and a corresponding algorithm; the algorithm performs better than random search for small values of this field. We then show that this algorithm will be effective over many, perhaps most functions of interest to optimization researchers. We conclude with a discussion about how such regularities are exploited in many popular optimization algorithms.

Christensen and Oppacher (2001)

Secondly, you're wrong to say that random search can create no information. The search for your keys, for example, would create information about the location of your keys even if it were purely random. In fact, randomness (as you know) is a key element in many evolutionary algorithms. It's not something we want to get rid of.
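The distinction Townsend is drawing can be illustrated with a small editor-added sketch. The function class, names, and evaluation budget below are assumptions chosen for illustration: restricted to unimodal functions, a bracketing search beats random sampling at the same budget, which is perfectly compatible with NFL because NFL only averages over all functions.

```python
# On a restricted class (unimodal integer functions), a search that
# exploits the structure outperforms random sampling with an equal
# evaluation budget. No conflict with NFL, which averages over ALL
# functions, structured and unstructured alike.
import random

N = 10_000

def ternary_search(f, lo, hi, budget):
    """Shrink a bracket around the minimum of a unimodal f on [lo, hi]."""
    evals = 0
    while hi - lo > 2 and evals + 2 <= budget:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2          # minimum cannot lie right of m2
        else:
            lo = m1          # minimum cannot lie left of m1
        evals += 2
    return min(range(lo, hi + 1), key=f)

def random_search(f, lo, hi, budget):
    """Best of `budget` uniformly random probes."""
    return min((random.randrange(lo, hi + 1) for _ in range(budget)), key=f)

random.seed(2)
target = random.randrange(N)
f = lambda x: (x - target) ** 2   # unimodal: structure to exploit

best_t = ternary_search(f, 0, N - 1, budget=40)
best_r = random_search(f, 0, N - 1, budget=40)
print("ternary search error:", abs(best_t - target))
print("random search error: ", abs(best_r - target))
```

With 40 evaluations, the bracketing search pins the minimum exactly, while 40 random probes over 10,000 points typically land no closer than about a hundred away.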
Richard Townsend Member (Idle past 4763 days) Posts: 103 From: London, England
Smooth Operator writes:
Yes it can, since it's more than 400 bits. The whole of the observable universe could not have created more than 400 bits since its origin. If we were just a part of this material universe, with no non-material mind, we wouldn't be able to produce more than 500 bits. But we are!

I don't know much about the CSI concept. Does it have this non-material / non-algorithmic element built into its definition?
Smooth Operator Member (Idle past 5145 days) Posts: 630
quote:
It's possible but it isn't probable! And that's what we are talking about: probabilities.

quote:
For which house? Some other unknown house? No, it can't.

quote:
Again, that information is gained by the search. So for the second search you already have some prior information. But if you used that method the first time on the second house, you would not do any better.

quote:
Well, that's an assumption that's not always going to work for you.

quote:
Yes, they do, because then your algorithm is not better than some other in all cases.

quote:
Yes it is. If you modify the second search with some information and then do the search, it's called prior information.

quote:
Nope.

quote:
Are you honestly telling me that ALL houses in the world are identical!?

quote:
But if it isn't similar it won't help, that's the point.

quote:
Yes, because he extracts knowledge from his trial. But if you give him a totally unrelated problem, his method won't help him at all.
Smooth Operator Member (Idle past 5145 days) Posts: 630
quote:
That's obvious. But that means that this algorithm has been optimized for that kind of search.

quote:
It didn't create information, you had to create it by finding the key. If you actually found your keys on the first try every single time, randomly by searching, now that would be creating information from nothing. The fact itself that you are searching means you have no information, so you have to create it by searching. If you knew where the keys were, you wouldn't be searching in the first place, right?

quote:
No, it doesn't. I explained it a while back.
Perdition Member (Idle past 3269 days) Posts: 1593 From: Wisconsin
Smooth Operator writes:
It's possible but it isn't probable! And that's what we are talking about: probabilities.

But you know what? Improbable things happen all the time, and the probability often depends on how one looks at it. Until you can provide a mathematical formula for this probability, then apply the formula to something specific, then show why the probability becomes zero (which it must, otherwise you're admitting it is possible for the thing to happen), you have nothing.

Smooth Operator writes:
For which house? Some other unknown house? No, it can't.

Yes it can. In fact, it often does. Show me how it can't. If you see that no keys are found on your ceiling, why would you look on the ceiling in another house? After looking at many houses, and finding that no keys are ever found on the ceilings in any house, doesn't that make it less probable that keys will be found on the ceilings of the next house? Doesn't this information come from the random first process, refined through subsequent iterations?

Smooth Operator writes:
Again, that information is gained by the search.

Yes. So information is generated through the random search, go on...

Smooth Operator writes:
So for the second search you already have some prior information. But if you used that method the first time on the second house, you would not do any better.

Yes, so the first random search generated information you could apply to the second house. How can you say you wouldn't do any better? You can eliminate search options because of the first search, thus making it take less time to exhaust all possibilities in the second.

Smooth Operator writes:
Well, that's an assumption that's not always going to work for you.

It doesn't have to always work, it only has to work more often than not. And then when I find a new situation for which it doesn't work, the final solution gets factored into my new "search information."

Smooth Operator writes:
Yes, they do, because then your algorithm is not better than some other in all cases.

Why do you think it has to be better in all cases? It only has to be better in most for it to be a worthwhile algorithm to use. There may be a better way in one instance, and in fact, we can often come up with better ways to design things in nature than the way they turned out, because the process isn't perfect. That's my point.

Smooth Operator writes:
Yes it is. If you modify the second search with some information and then do the search, it's called prior information.

Yes, but that prior information was generated by the first random search, and then gets incorporated. Thus, information can arise out of a random process. Once you get information, all you have to do is add to it.

Smooth Operator writes:
Are you honestly telling me that ALL houses in the world are identical!?

No, but they're similar enough for a process created in one to be a benefit in another. All car models are slightly different, but I don't have to learn how to drive each type of car individually. I can learn on one, and apply the knowledge from that to the others.

Smooth Operator writes:
But if it isn't similar it won't help, that's the point.

No, in that case it won't help in that one instance, but after that one instance you've learned something more, and expanded the circumstances under which your process will now work. It adapts to a new environment, you might say.

Smooth Operator writes:
Yes, because he extracts knowledge from his trial. But if you give him a totally unrelated problem, his method won't help him at all.

Right, so he starts at square one again, and starts with nothing, then builds a process for all experiences that are similar to this new one. Given enough time, you'll experience enough different sets of circumstances to have a process in your repertoire to deal with just about any subsequent experiences.
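Perdition's house-keys argument can be put in simulation form. This is an illustrative, editor-added sketch only; the location names and trial counts are invented for the example. A first search with no prior information probes everything at random, while a later search has ruled out spots where keys are never found, and takes fewer probes on average:

```python
# Information gained from earlier random searches (ruled-out locations)
# shrinks the search space and cuts the average number of probes.
import random

random.seed(1)
PLAUSIBLE = ["nightstand", "coat pocket", "kitchen counter", "sofa cushions"]
IMPLAUSIBLE = ["ceiling", "inside a wall", "freezer coils", "light fixture"]
ALL_SPOTS = PLAUSIBLE + IMPLAUSIBLE

def probes(spots, key_spot):
    """Probes used by a uniformly random search order until the keys turn up."""
    order = random.sample(spots, len(spots))
    return order.index(key_spot) + 1

TRIALS = 10_000
# First house: no prior information, so every spot is searched at random.
naive = sum(probes(ALL_SPOTS, random.choice(PLAUSIBLE))
            for _ in range(TRIALS)) / TRIALS
# Later houses: spots that never held keys have been ruled out.
informed = sum(probes(PLAUSIBLE, random.choice(PLAUSIBLE))
               for _ in range(TRIALS)) / TRIALS

print(f"no prior information:   {naive:.2f} probes on average")
print(f"with ruled-out spots:   {informed:.2f} probes on average")
```

With eight candidate spots the uninformed search averages about 4.5 probes; once four spots are ruled out, the average drops to about 2.5, which is the sense in which the first random search "generated information" usable in the second.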
Fallen Member (Idle past 3904 days) Posts: 38
A few questions about the nylon mutation(s), just out of curiosity:

Do we know the exact sequence of mutations that took place? What did the current system evolve from? Has anyone run the changes through the explanatory filter to see if they exhibit specified complexity? In what way could the new system be considered "specified" (i.e., conform to an independently given pattern)? Also, what definition of information is everyone using? Is a sequence of heads and tails information?

Blessed is the man who endures temptation, for when he has been proved he shall receive the crown of life.
Richard Townsend Member (Idle past 4763 days) Posts: 103 From: London, England
Smooth Operator writes:
It didn't create information, you had to create it by finding the key. If you actually found your keys on the first try every single time, randomly by searching, now that would be creating information from nothing. The fact itself that you are searching means you have no information, so you have to create it by searching.

Think this through. I'm saying that the search creates the information - clearly it does, because we know something at the end we didn't at the beginning. This meets the Shannon definition of information (a decrease in the uncertainty of a receiver). The same information is created no matter how we get there. You almost acknowledge that in your paragraph above - see the last sentence.
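Townsend's appeal to Shannon's measure can be made concrete. A minimal editor-added sketch, where the 16-location setup is an assumption for illustration: if the keys are equally likely to be in any of 16 spots, finding them reduces the searcher's entropy by log2 16 = 4 bits, regardless of how the search was conducted, and the same accounting assigns 100 bits to a sequence of 100 fair coin flips.

```python
# Shannon information as a drop in the receiver's uncertainty (entropy),
# independent of HOW the answer was obtained: exhaustive, clever, or random.
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

N_SPOTS = 16
before = entropy([1 / N_SPOTS] * N_SPOTS)   # keys equally likely anywhere
after = entropy([1.0])                      # keys found: certainty
print(f"information gained by the search: {before - after:.0f} bits")

# The same accounting applies to coin flips: observing 100 fair flips
# resolves 100 bits of uncertainty about that particular sequence.
print(f"100 coin flips: {entropy([0.5, 0.5]) * 100:.0f} bits")
```

On this definition the answer to the coin question below is yes: the flips create information in the Shannon sense, though whether that counts as "specified" information is exactly what the thread disputes.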
Fallen Member (Idle past 3904 days) Posts: 38
Richard Townsend writes:
I'm saying that the search creates the information - clearly it does, because we know something at the end we didn't at the beginning. This meets the Shannon definition of information (decrease in uncertainty of a receiver). The same information is created no matter how we get there.

So, using your definition of information, flipping a coin 100 times would create information, since it would reduce our uncertainty about the result of those 100 flips?
Wounded King Member Posts: 4149 From: Cincinnati, Ohio, USA
Percy writes:
What LexA does is control whether the genetic repair mechanism is enabled or not. When it is enabled then there are still mutations, just fewer of them.

Hi Percy, I have to go with Smooth Operator on this one. In its uncleaved form LexA does repress the activity of certain DNA repair mechanisms. When it is cleaved, these mechanisms become activated. However, along with DNA repair elements, the cleavage also allows the expression/activation of a set of polymerases which are highly error prone, which is probably why Smooth Operator focuses on LexA cleavage as a mutation-inducing mechanism. You and DevilsAdvocate are right about the rates of mutation in the LexA mutant, and therefore presumably in the presence of a LexA cleavage-blocking drug; the authors (Cirz et al., 2005) state that ...
The second step mutation rate was 1.9 (±0.21) × 10⁻⁴ mutants/viable cell/d in the control strain and 5.5 (±4.9) × 10⁻⁷ mutants/viable cell/d in the lexA(S119A) strain (Figure S3). Assuming that the first and second step mutations are independent, the LexA mutant strain evolves resistance to 650 ng/ml ciprofloxacin in vitro with a rate that is approximately 10⁴-fold lower than the control strain.

So 10⁴-fold less frequently is pretty substantial, and certainly sufficient for the authors to state ...
LexA cleavage-mediated derepression of one or more genes is essential for the efficient evolution of resistance.

Of course, in the long term evolution can afford to be inefficient, but perhaps not in the face of a sudden environmental challenge such as the introduction of antibiotics. TTFN, WK
Percy Member Posts: 22508 From: New Hampshire Member Rating: 5.4
Wounded King writes:
I have to go with Smooth Operator on this one. In its uncleaved form LexA does repress the activity of certain DNA repair mechanisms. When it is cleaved these mechanisms become activated. However, along with DNA repair elements the cleavage also allows the expression/activation of a set of polymerases which are highly error prone, which is probably why Smooth Operator focuses on LexA cleavage as a mutation inducing mechanism instead.

Whoa! That seemed a little weird, so I've read up on this a bit more, and I think I can make sense of it if more details are added. Tell me if I've got this straight.

Uncleaved LexA represses the SOS response, the name given to a DNA repair system that operates after replication begins. Because uncleaved LexA is present in normal bacteria, the SOS repair response is repressed under most circumstances. It doesn't matter that this repair system is repressed when the bacterium isn't replicating, since there's nothing to repair. Normal bacteria also possess the RecA protein, but it only plays a significant role during replication when (among other things) it cleaves the LexA repressor, thus enabling the SOS repair response just when it is needed.

Some antibiotics work by inducing DNA damage in bacteria. Cause enough damage and the bacterium dies. But antibiotics can also somehow stimulate the RecA protein to cleave the LexA repressor, even though the bacterium is not replicating. The SOS repair response is no longer repressed, and it goes to work repairing the DNA damage caused by the antibiotic. This process of simultaneous destruction and repair produces many mutations. All mutations become more likely, including those with resistance-conferring ability.

But no matter how close I've come to grasping the details, I don't think it changes the argument I was directing at Smooth Operator, which is what I think you were saying next.
I wasn't trying to get to this level of detail because Smooth Operator's argument fails for much more basic reasons: resistance-conferring mutations are selected from the random mutations that occur while under stress from antibiotics, and are not specifically induced.

--Percy
Copyright 2001-2023 by EvC Forum, All Rights Reserved