Author Topic:   What is an ID proponent's basis of comparison? (edited)
Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 101 of 315 (516637)
07-26-2009 3:58 PM
Reply to: Message 97 by Stagamancer
07-26-2009 2:15 PM


quote:
OK. This is obviously a really difficult concept for you. If you induce mutations, what that means is you increase the chance that a mistake is made in copying the code. However, the kind of mistake that is made is still random. The outcome of this mistake is random. This is still random mutation, there's just a higher probability that it will occur when it is induced.
It's not random if it was induced, by definition. Random mutations have no cause.
quote:
Inducing mutation is just like rolling multiple dice instead of just one (or rolling one die more often). It increases the chances of getting the desired roll, but it does not decrease the randomness of each individual roll. Just because there are mechanisms that allow for this increased mutation rate doesn't mean it's not random.
Well no, I never said that the mechanism is mutating for an exact goal. I said it uses mutations to reach the desired goal over time. But the point is that mutations are not happening out of thin air.
quote:
Ah, they can't get resistance within the scope of the experiments, not "no matter how long it takes." A key distinction.
Obviously long enough to conclude they can't get it.
quote:
Yes, bacteria have ways of increasing the rate of mutation. But, you keep saying it's not random, and yet you also say it's not directed to a specific goal. So which is it? Be consistent. Either bacteria have the ability to direct their mutation to a specific goal, or they have the ability to increase the rate of random mutation at a specific site in order to take a chance that they will develop a beneficial mutation. I'll give you a hint, it's the latter.
I am consistent. I said from the start that they mutate specific regions and wait for the positive outcome. They don't actually know what's going to happen.

This message is a reply to:
 Message 97 by Stagamancer, posted 07-26-2009 2:15 PM Stagamancer has replied

Replies to this message:
 Message 106 by DevilsAdvocate, posted 07-26-2009 4:15 PM Smooth Operator has replied
 Message 116 by Stagamancer, posted 07-27-2009 1:10 AM Smooth Operator has replied
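The die-rolling analogy in this exchange can be sketched in code. A minimal Python sketch (all names and numbers hypothetical): raising the roll rate, i.e. "inducing" more rolls, changes how often outcomes are produced, not the distribution of the outcomes themselves.

```python
import random

def rolls_per_generation(rate, generations, seed=0):
    """Roll a fair die `rate` times per generation; return all outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(generations):
        for _ in range(rate):
            outcomes.append(rng.randint(1, 6))
    return outcomes

low = rolls_per_generation(rate=1, generations=6000)    # baseline rate
high = rolls_per_generation(rate=10, generations=6000)  # "induced" rate

# Inducing more rolls raises the count of rolls, not the per-roll
# distribution: each face still lands about 1/6 of the time in both runs.
for outcomes in (low, high):
    freqs = [outcomes.count(face) / len(outcomes) for face in range(1, 7)]
    print([round(f, 2) for f in freqs])
```

In other words, the induction controls when and how often trials happen, while each trial's outcome stays random, which is the distinction both sides of this exchange are circling.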

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 102 of 315 (516638)
07-26-2009 3:59 PM


quote:
Smooth Operator,
You are doing very well in this debate, much better than I would have done. You seem to be losing some of your patience but I don't blame you.
Oh, trust me, I'm used to people just not getting it.

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 105 of 315 (516644)
07-26-2009 4:07 PM
Reply to: Message 99 by Percy
07-26-2009 2:44 PM


quote:
Oh. Then why do you care about evolution of the nylon-feeding trait if the mere existence of DNA implies design?
Because evolutionists keep claiming you can get information by evolutionary algorithms. Which is false.
quote:
About preventing mutations, ask yourself how you would prevent copying errors? Reproduction is very complex chemistry, and while highly reliable, it isn't perfect. All known cells have non-zero mutation rates. Even when the DNA repair mechanism is active, it, too, is just chemical reactions and imperfect. When it is active the mutation rate is smaller, but it never reaches zero.
The average mutation rate for normal bacteria (meaning their mutation repair mechanism is active) is 10^-8 per base pair per generation. When the bacteria disables the repair mechanism in response to the presence of an antibiotic (disabling this response is what that research paper is about: Inhibition of Mutation and Combating the Evolution of Antibiotic Resistance, doi:10.1371/journal.pbio.0030176), the mutation rate goes up. The mutation rate was never zero. Evolution is never "halted in its tracks." That could only happen if you prevented all mutations, and since the copying of millions of nucleotides is only rarely perfect, almost all reproductive events are accompanied by mutations.
No, when the mechanism was disabled, there was no ability to evolve, not when it was active. You are confusing mutation-inducing and mutation-repair mechanisms.
quote:
Yes, you're right, they do say that. For some reason they're overstating the case. Please understand that they don't really mean that for the reasons explained before. There's no way to make reproduction perfect.
It's not perfect, but it doesn't have to fail in the way you think it does. If the bacteria can't evolve resistance, then that's what we have.
quote:
In some ways it's a little like typing a message here. If you type your message and then immediately click "Submit Reply," there will be a number of typos (more for some than others). Those that proofread their messages before posting have far fewer typos, and this is analogous to the mutation correction mechanisms in cells. This doesn't mean a proofreader never has typos, he just has far fewer. Prevent him from proofreading, analogous to the bacteria disabling the mutation correction mechanism, and the number of typos per message will suddenly jump.
This could be true in higher organisms, or maybe even in bacteria. But it could be that all other mutations in bacteria are also induced, and the only reason we called mutations mistakes was ignorance.
Edited by Admin: Fix quoting.

This message is a reply to:
 Message 99 by Percy, posted 07-26-2009 2:44 PM Percy has replied

Replies to this message:
 Message 114 by Percy, posted 07-26-2009 7:04 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 109 of 315 (516651)
07-26-2009 4:36 PM
Reply to: Message 103 by DevilsAdvocate
07-26-2009 4:01 PM


quote:
Not the way Dembski uses it. Orgel never used the entire phrase "complex specified information" and his usage of this concept is completely contradictory to how Dembski uses it. This is another case where creationists hijack the academic work of real scientists. Anyways...
No, Dembski improved upon it. But the fact remains that Orgel invented it.
quote:
Says who?
Says who?
Any ID theorist you ask, and anyone with a bit of reason.
quote:
I assume you got this second hand from Dembski referring to Seth Lloyd's 'Computational capacity of the universe' article, in which Lloyd is allegorically representing the physical universe as a computational device, aka a computer, and trying to determine the information capacity & computational power the universe entails. In other words this is a metaphor. He even states this:
Yes, and do you see a problem with this? It is obvious that it's a metaphor. But the point is that that is the maximum number of operations a chance process in the whole universe can accomplish.
quote:
How can you determine that the product is not a result of a natural law? This is to assume that something exists outside of natural law (i.e. supernatural); which by definition you cannot determine using empirical evidence which is itself based on natural laws and the scientific method. You are in a catch-22 situation here with your assertion.
Nope. By natural law, I mean something like crystallization. It's a natural law which explains how crystals come about. They were not designed.
quote:
Or so you think anyways...
Well if you have problems with it, let me know.
quote:
What 400 bits in nature? What are you talking about?
Anything that needs more than 400 bits to describe that you find in nature. Like human DNA.
quote:
The reason we can determine that it was designed by humans is because we as human beings are indoctrinated on what human-made objects look like, both through observation of other objects designed by humans and through our own trial and error.
Exactly, that's called experience. That's what you need for doing observational science. So if you said that Mount Rushmore was not designed, you would be wrong. But your experience says it was, and you would be right.
quote:
Besides, it is still subject to the same natural laws for its creation as does other phenomena not created/designed by man.
No it's not. Crystals arise in the right conditions because of their structure and natural law. A novel does not arise from ink and paper because of its structure and natural law. An intelligence has to create the novel.
quote:
There is nothing magical about this.
I never said it's magic. Intelligence is completely natural.
quote:
You see design from a supernatural entity because that is what you want to see.
Well, again, I never said it was supernatural. Even if it were, that is not what I want to see, but what the design detection method tells us. If you saw a car and concluded it was designed, does that make you see what you want to see, or is it objective reality that it was designed?
quote:
How about termite mounds are they intelligently designed by termites?
Yes, they are, but by a very low level of intelligence. We can't even measure it.
quote:
How about stromatolites, are they intelligently designed by cynobacteria?
Bacteria are not intelligent. They are not thinking about the process they are doing, and have no intelligence. They are like machines programmed to do their work.
quote:
How about natural phenomena formed through wind and water erosion and seismic activity that eerily look like human-made objects? This is called:
Those are not designed. Natural law accounts for them.
quote:
You are conditioned to see what you want to see. It is called apophenia. Look it up.
So if I saw a computer, even if I didn't know what a computer was before, and concluded that it was designed, would that mean that I saw what I wanted to see? Should I have concluded that it was not designed?
quote:
LOL, is that the best you can come back with? What are you, 10 years old?
That's not the only thing I wrote. Not my fault if you choose to respond to only selected parts of my text.
quote:
You are not extrapolating, you are making shit up out of thin air to back up your preconceived notions.
You are the one that's making shit up.
quote:
No, that is not what this article is saying. Now you are deliberately lying. The evolution of the bacteria in this article only takes into consideration their adaptation to resist certain antibiotics, not the entire evolutionary history of the bacteria.
Well, you're a sick and twisted man and I can't help you. If you are stupid enough not to be able to extrapolate that, when the bacteria has the genes required to mutate turned off, the specific part of its genome can't evolve because of it, and you can't conclude that this is what is happening throughout the whole genome, then what can I say.
quote:
It does evolve without LexA. The presence of LexA just enables mutation in a certain area of the bacterium's genome, which enables it to be resistant to certain types of antibiotics. Stop making shit up.
You stupid imbecile! The resistance does not evolve without LexA, because there are no mutations on that specific part of the genome! So if you turned off all LexA-type mechanisms nothing would be able to evolve in the bacteria.
quote:
No, this is not how adaptation/evolution works. How are you defining random mutations? All these protein inhibitors are doing is increasing the mutations in one specific area of the genome. This does not mean that mutations are not occurring elsewhere in the genome.
I know that, shit for brains. But as I said, if totally random mutations existed, then the part of the genome that gets induced by LexA would still evolve without it, because random mutations would mutate it. But they don't! So the only conclusion is that other parts of the genome are also not mutating due to random mutations but due to some other LexA-type mechanisms.
quote:
Who is saying that these bacteria need these ‘mechanisms’ to evolve overall? These mechanisms are only needed to evolve resistance to certain types of chemical agents. You really need to study some basic biology and molecular biology before you try to attack scientific concepts you are ignorant of.
And you need to remove all the crap from your brain, because you have proven yourself unable to extrapolate.
quote:
How do you know I misunderstand IC when we have not even discussed it? This statement went over your head, so never mind.
Because you have proven yourself to be retarded.
quote:
A sweeping generalization, and an incorrect one at that. Did this article state that all bacteria require this mechanism to be resistant to all chemical agents, much less all antibiotics?
I'm asking you about this specific one.
quote:
We don’t know categorically and 100% that they cannot evolve resistance without LexA.
Science is not about knowing 100%. It's about current knowledge. And current knowledge tells us it can't.
quote:
It is assumed based on these studies that they can't at this time. However, given enough time, the bacteria very well may evolve to build resistances without LexA. If enough mutations occur, who knows, this may happen; no one knows. Again, why would not having this mechanism have inhibited the bacterium from evolving in the past, since many of these synthetic antibiotics were non-existent until the last half of this century anyway?
Given enough time either nothing will happen or they will decay because of genetic entropy.

This message is a reply to:
 Message 103 by DevilsAdvocate, posted 07-26-2009 4:01 PM DevilsAdvocate has not replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 110 of 315 (516654)
07-26-2009 4:40 PM
Reply to: Message 106 by DevilsAdvocate
07-26-2009 4:15 PM


quote:
All genetic mutations are induced. Some are induced by sources outside the organism and some from within.
Random is not synonymous with having no cause. Random means that the mutation can occur anywhere in a given area of the genome and does not form a predictable pattern. There are many causes for random mutations in the genetic code, including inaccurate copying of DNA sequences, biochemical agents (viruses, prions, etc.), and radiation (e.g. UV light).
Well, obviously they have a cause. They follow the laws of nature. But by random it means they have no cause inside the organism. If they have a cause inside the organism, then they are not random.
quote:
There are a multitude of causes for random genetic mutations.
Who would have thought?
quote:
Never say never in science. Science is always spoken in the language of probabilities.
That means you should accept that bacteria can't evolve resistance without LexA for now, until someone shows they can.

This message is a reply to:
 Message 106 by DevilsAdvocate, posted 07-26-2009 4:15 PM DevilsAdvocate has replied

Replies to this message:
 Message 112 by DevilsAdvocate, posted 07-26-2009 5:08 PM Smooth Operator has replied
 Message 113 by Blue Jay, posted 07-26-2009 5:48 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 127 of 315 (516793)
07-27-2009 2:18 PM
Reply to: Message 112 by DevilsAdvocate
07-26-2009 5:08 PM


quote:
You can't even remember what you said in previous posts. You really are dumbing down your cause.
You filthy scum, I meant a cause inside the organism.
quote:
No that is not what the random in random mutations means. Do you understand what the word 'random' means?
Random means it cannot be accurately predicted, meaning we cannot accurately predict where genetic mutations will strike next in an organism's genome. Some mutational sources are more random than others. For example, mutations caused by UV sources are almost completely random, since nearly all portions of the genome are susceptible to this radiation source and it would be nearly impossible to determine where exactly these point mutations could occur. Whereas the areas of DNA which are susceptible to viral agents of mutation may be more predictable in their location of occurrence.
We can't predict them 100%, but we can predict them with probability. Since some mechanisms are known to induce mutations, it's obvious that we can predict in which regions of the genome they are more likely to appear.
quote:
You have yet to show how this helps your case as I debunked your idea that these genetic mechanisms had to develop for any evolutionary changes to occur.
You debunked me? Where?
Even if there were still random mutations present for that specific part of the genome, it shows that without LexA they are so insignificant that they can't get you resistance in a meaningful amount of time.

This message is a reply to:
 Message 112 by DevilsAdvocate, posted 07-26-2009 5:08 PM DevilsAdvocate has not replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 128 of 315 (516796)
07-27-2009 2:20 PM
Reply to: Message 113 by Blue Jay
07-26-2009 5:48 PM


quote:
Randomness has absolutely nothing to do with causation. The concept you are thinking of is described by the term "spontaneous." Mutations are not spontaneous: they are random.
Randomness only deals with incidence and uncertainty, not causation. It implies that there are multiple options, no one of which is guaranteed to come to fruition, but any one of which could potentially happen.
This is a perfect description for bacterial mutations, with or without the inhibitor. Turning on the inhibitor increases the likelihood that a mutation will slip past the repair machinery, but it does not cause mutations to happen. The bacterium is still reliant on the usual causes of mutation to make mutations happen.
If the inhibitor is on, mutations are caused by DNA replication errors, chemical imbalances, radiation, etc.; if the inhibitor is off, mutations are caused by DNA replication errors, chemical imbalances, radiation, etc.
What you are calling "induction" is not causation, it is facilitation. This means that anything that happens under induction could happen without induction, but it would just happen more slowly.
You are confusing mechanisms that let mutations through by shutting down repair systems with mechanisms that induce mutations. And when they are shut down, the specific region of the genome cannot evolve.

This message is a reply to:
 Message 113 by Blue Jay, posted 07-26-2009 5:48 PM Blue Jay has not replied
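Blue Jay's distinction between causing mutations and merely letting them through can be simulated. In this sketch (all rates, sizes, and names are hypothetical), copy errors arise from the same source in both runs; active repair just catches most of them, so disabling repair raises the observed rate without becoming a new cause:

```python
import random

rng = random.Random(7)

def surviving_errors(genome_len, error_rate, repair_on):
    """Copy errors surviving one replication; repair catches ~99% of them."""
    errors = sum(rng.random() < error_rate for _ in range(genome_len))
    if repair_on:
        # Repair filters errors after the fact; it is not a cause of them.
        errors = sum(rng.random() < 0.01 for _ in range(errors))
    return errors

TRIALS = 200
on = sum(surviving_errors(10_000, 1e-3, True) for _ in range(TRIALS)) / TRIALS
off = sum(surviving_errors(10_000, 1e-3, False) for _ in range(TRIALS)) / TRIALS

# Same error source in both runs; repair only filters the outcome,
# lowering the observed rate without ever reaching zero.
print(on, off)
```

With repair on, the average surviving-error count drops by roughly a factor of a hundred but stays above zero, which is the facilitation-versus-causation point in code form.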

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 129 of 315 (516798)
07-27-2009 2:25 PM
Reply to: Message 114 by Percy
07-26-2009 7:04 PM


quote:
Except it's true. If it were false then models of the evolutionary algorithm couldn't produce new information. If it were false then you would be able to identify the location in the cell of the information for where to place the mutation and which base pairs to substitute.
I understand that you believe it is false. But believing and proving are two different things.
No, I already explained why it's false. And I gave links that describe the NFL theorem, which explains why algorithms do not produce new information. Why didn't you read them?
quote:
The evidence we have says that the mutation rate goes up when the mutation repair mechanism is disabled. If you think a mutation inducing mechanism exists then you have to provide evidence of one.
I did, a few posts ago. It's called LexA. When it is turned off, no evolution is possible on the specific part of the genome.
quote:
Interesting idea that all mutations are deterministically induced, but there's no evidence for it.
Maybe they are, maybe not. The point is, LexA is evidence from which we can extrapolate to think that it could be true.

This message is a reply to:
 Message 114 by Percy, posted 07-26-2009 7:04 PM Percy has replied

Replies to this message:
 Message 132 by Percy, posted 07-27-2009 3:00 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 130 of 315 (516801)
07-27-2009 2:31 PM
Reply to: Message 116 by Stagamancer
07-27-2009 1:10 AM


quote:
By random I mean the frame shift/base pair change or whatever IS random. If I induce a die to roll, is the outcome no longer random?!
The outcome is random, but the induction is not.
quote:
Contradiction, anyone???
It's not a contradiction. I said that the mechanism has no specific goal in mind. When the bacteria acquires resistance the mechanism stops. But the mechanism doesn't know the exact sequence it needs. That's why it takes time. If it did know, it would change the DNA sequence in an instant.
quote:
Yes, they wait for it to randomly occur! While mutating randomly (like a die rolling) there is a probability that a mutation will arise that gives the bacterium an advantage (like getting the desired number up on the die). Even if this increased mutation rate is induced, each time there is a mutation, it's a random mutation. Most of these mutations will be neutral or harmful, but with enough generations (rolls of the die) the right one will come up, and the bacteria that have that mutation will out-compete the others. It's really quite simple, and you seem to have a hard time grasping what random really means in this context. I've tried to explain it to you many times. This is my last. I'll respond to another argument that you have, but I'm done with this random mutation part. I don't see how it could be any clearer.
The point is that the act of inducing mutations, their occurrence itself, is not random. They are not happening randomly. The sequence you get when you mutate a specific part of the genome is random, but the act of mutating it is not.

This message is a reply to:
 Message 116 by Stagamancer, posted 07-27-2009 1:10 AM Stagamancer has seen this message but not replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 131 of 315 (516803)
07-27-2009 2:34 PM
Reply to: Message 118 by Peepul
07-27-2009 8:07 AM


quote:
It seems that an assertion you're making, Smooth, is that evolutionary algorithms can't produce new complex specified information, because they are programmed in advance by humans, even their 'random' components...
Exactly.
quote:
My question: Does this limitation on evolutionary algorithms, in your view, apply to algorithms more generally? i.e. can any algorithms produce new complex specified information? If they can, which ones can and which ones can't? How do we tell the two kinds apart?
If no algorithms can generate CSI, then it would imply that 'complex specified information' is in technical terms non-computable. This would have interesting implications.
It is computable; algorithms just can't generate it, they can only process it. This applies to all algorithms. It has been shown to be true. I have already posted a link here about the NFL theorem that says that algorithms do not produce new information. It really gets on my nerves to have to do it again and again.

This message is a reply to:
 Message 118 by Peepul, posted 07-27-2009 8:07 AM Peepul has not replied

Replies to this message:
 Message 136 by Richard Townsend, posted 07-27-2009 3:55 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 133 of 315 (516815)
07-27-2009 3:23 PM
Reply to: Message 132 by Percy
07-27-2009 3:00 PM


quote:
So what you need to do is explain how NFL theorems disallow the possibility of genetic algorithms producing new information.
Read my next post, I will explain it there.
quote:
But I can save you some time. Genetic algorithms that model evolutionary behavior are already in use today producing novel designs. They're a practical reality. There really isn't much point to arguing that something that works before our very eyes doesn't really happen. And then how do you explain the design innovations produced by the algorithm, since they're not figments?
I'm sorry, but no. Are you talking about that NASA antenna they built with algorithms? Yes, I know about that. The problem is that the algorithm itself already had all the information needed to produce the design. Just like a computer has all the information it needs to produce anything you want it to. But it's a very long and boring job, so scientists let the computer calculate whatever they put into it.
And after the calculation is done, the optimized design is produced by the computer. But the point is that the computer had all the information it needed to select the best design. It produced no new information. It only selected the best one.
quote:
The more relevant point is that new information is being created all the time throughout the universe, including through the process of mutation. If you think that the information identifying where mutations should occur and which base pairs should be involved is already part of the cell, then you have only to find the source of this information, and you have to find it for all possible mutations. Since mutations are known to occur anywhere throughout a genome, and since bacterial genomes range from around 500,000 base pairs up to 10 million base pairs, you need information somewhere in the bacterial cell for the production of each and every possible mutation, as well as the molecular triggers for each one.
The information is already in the genome. The mechanisms the cell has help it adapt by selecting the best possible expression of already existing information.
quote:
It has already been explained that this isn't true. What LexA does is control whether the genetic repair mechanism is enabled or not. When it is enabled there are still mutations, just fewer of them. The average bacterial mutation rate of 10^-8 is when the repair mechanism is enabled. Note that it isn't 0.
Wrong. I said it three times already. You are confusing the mutation repair and mutation inducing mechanisms. When LexA is turned ON, there are mutations. When it's turned OFF, there are no mutations.
quote:
The scientists also show that E. coli evolution could be halted in its tracks by subjecting the bacteria to compounds that block LexA. Interfering with this protein renders the bacteria unable to evolve resistance to the common antibiotics ciprofloxacin and rifampicin.
It can't evolve while LexA is BLOCKED. In other words, it's turned OFF, not ON.
To Stop Evolution: New Way Of Fighting Antibiotic Resistance Demonstrated By Scripps Scientists – Uncommon Descent
Edited by Smooth Operator: No reason given.

This message is a reply to:
 Message 132 by Percy, posted 07-27-2009 3:00 PM Percy has replied

Replies to this message:
 Message 151 by Percy, posted 07-27-2009 9:02 PM Smooth Operator has replied
 Message 155 by Rrhain, posted 07-28-2009 4:48 AM Smooth Operator has replied
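The antenna discussion above concerns evolutionary search guided by a fitness function. A toy genetic algorithm (everything here is hypothetical and illustrative) makes the moving parts concrete: the only problem-specific knowledge sits in the fitness function, while mutation and selection are generic operators.

```python
import random

rng = random.Random(42)
TARGET = [1] * 20  # the fitness function's built-in notion of "good"

def fitness(genome):
    # Problem-specific knowledge lives here: count matches to the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Generic operator: flip each bit independently with probability `rate`.
    return [1 - g if rng.random() < rate else g for g in genome]

# Random starting population of 30 twenty-bit genomes.
population = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if fitness(best) == 20:
        break
    # Keep the top half unchanged, refill with mutated copies of survivors.
    survivors = population[:15]
    population = survivors + [mutate(rng.choice(survivors)) for _ in range(15)]

print(fitness(best))
```

Whether one calls the result "new information" or "selection over information already in the fitness function" is exactly the dispute in this thread; the sketch only shows where each ingredient lives.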

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 134 of 315 (516819)
07-27-2009 3:43 PM


NFL Theorems... I will explain them here once and for all.
The NFL theorems say that no algorithm can outperform any other on average unless it takes advantage of prior information about the target.
Meaning: say you are in a house searching for your keys. You search for some time and then you find them. After that you write down the exact path you took to find the keys.
The question is: if somebody asks you to find their keys in their own house, which you know nothing about, will the path you used last time find the keys faster than a random search?
The answer is NO, it won't. Because you know nothing about this new house: you don't know the size of the house, the number of rooms, or where the keys certainly are not. You are not going to find them faster than the last time, or faster than a random search.
To find them faster, you have to know at least something: the size of the house, the number of rooms, or where the keys certainly are not located. But this is the main point. You can't get any of that information until you start searching. So if someone does tell you in advance, they have given you PRIOR INFORMATION about the search problem!
This basically means that even evolutionary algorithms, which have no knowledge about what they are looking for in advance, will not be any better than random chance. And since random chance doesn't create new information, neither does an evolutionary algorithm.
Some quotes:
quote:
"... no operation performed by a computer can create new information."
"The [computing] machine does not create any new information, but it performs a very valuable transformation of known information."
"A learner ... that achieves at least mildly better-than-chance performance, on average, ... is like a perpetual motion machine - conservation of generalization performance precludes it."
"Unless you can make prior assumptions about the ... [problems] you are working on, then no search strategy, no matter how sophisticated, can be expected to perform better than any other."
"The inability of any evolutionary search procedure to perform better than average indicate[s] the importance of incorporating problem-specific knowledge into the behavior of the [search] algorithm."
The Evolutionary Informatics Lab - EvoInfo.org
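The key-search argument above can be checked with a small calculation. Averaged uniformly over all possible key locations (i.e. with no prior information about where the key is), every fixed search order needs the same expected number of looks; this averaging is the flavor of claim the NFL theorems formalize. A minimal sketch with hypothetical numbers:

```python
import random

PLACES = list(range(8))  # a hypothetical house with 8 places to look

def steps_to_find(path, key_location):
    """How many looks a fixed search order needs before hitting the key."""
    return path.index(key_location) + 1

def average_steps(path):
    """Mean number of looks when the key is equally likely to be anywhere."""
    return sum(steps_to_find(path, k) for k in PLACES) / len(PLACES)

last_time_path = [3, 0, 5, 1, 7, 2, 6, 4]  # the order that worked before
rng = random.Random(1)
random_path = rng.sample(PLACES, len(PLACES))

print(average_steps(last_time_path))  # 4.5
print(average_steps(PLACES))          # 4.5
print(average_steps(random_path))     # 4.5: every order ties on average
```

Any search order visits the 8 places in some permutation, so over all key locations it needs 1+2+...+8 = 36 looks total, or 4.5 on average; prior information (e.g. "the key is never on the ceiling") is the only way to beat that.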

Replies to this message:
 Message 135 by Perdition, posted 07-27-2009 3:50 PM Smooth Operator has replied
 Message 141 by Richard Townsend, posted 07-27-2009 4:25 PM Smooth Operator has replied
 Message 152 by Percy, posted 07-27-2009 9:29 PM Smooth Operator has replied
 Message 158 by Perdition, posted 07-28-2009 11:04 AM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 137 of 315 (516825)
07-27-2009 4:02 PM
Reply to: Message 135 by Perdition
07-27-2009 3:50 PM


quote:
Again, this is an assertion.
Nobody ever observed it, and there is no such capacity. So it's a fact.
quote:
In your example, the random search of your house turns up other information you can use to build prior information for your next search in someone else's house.
NO YOU CAN'T! That's the point! Did the other guy lose the keys in the EXACT same place in his house as you did in yours? Are your houses identical? No, of course not!
quote:
For example, you notice that nothing is sitting on the ceiling, that reduces the number of places you need to search next time.
Does that count for other houses with lost keys? No, obviously it doesn't.
quote:
You can also say, well, since I found them on the bedside table last time, rather than going through all my previous steps, I'll start there next time.
And will they be there 100%? No, they won't!
quote:
Even in a new house, that's a good place to start as it assumes people are generally the same.
That's called prior information.
quote:
All of this is without using prior knowledge the first time, and using the new information the second time.
Which doesn't help you in other case at all.
quote:
If the situation is similar, the exact conditions don't matter, the process can still work faster than random.
Well that's the point! IF IT IS SIMILAR! But what if it's not!? Then you will fail! And that means no algorithm works better than any other, or than a random search, without prior information.

This message is a reply to:
 Message 135 by Perdition, posted 07-27-2009 3:50 PM Perdition has replied

Replies to this message:
 Message 140 by Perdition, posted 07-27-2009 4:12 PM Smooth Operator has replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 138 of 315 (516827)
07-27-2009 4:09 PM
Reply to: Message 136 by Richard Townsend
07-27-2009 3:55 PM


quote:
If no algorithm can generate it, then it's non-computable.
No algorithm can generate anything without prior information.
quote:
That's the definition of non-computable. But the main problem is, your claim creates problems for the very existence of CSI.
No, you misunderstood what I was saying. I was saying that it is computable, but you won't wind up with more information than you input at the start. No algorithm can do that.
quote:
We don't know whether humans run algorithms in their brains (most AI researchers believe so but some thinkers, such as Roger Penrose disagree).
I disagree also, since we can think. Our mind is not material the way a computer is.
quote:
This means there is NOTHING we can definitely point to as CSI. Nothing created by humans can be called CSI. Nothing created by any 'intelligent designer' can be called CSI - unless you can show they did it non-algorithmically.
Yes it can, since it's more than 500 bits. The whole of the observable universe could not have created more than 500 bits of specified information since its origin. If we were just a part of this material universe, with no non-material mind, we wouldn't be able to produce more than 500 bits either. But we can!
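For context, the 500-bit figure in this argument traces to Dembski's universal probability bound, usually derived by multiplying his estimates of roughly 10^80 particles, 10^45 state changes per second, and 10^25 seconds. A quick sketch of that arithmetic (the factor estimates are Dembski's, not settled physics):

```python
import math

# Dembski's universal probability bound (his estimates, not settled physics):
particles = 10**80          # elementary particles in the observable universe
ops_per_second = 10**45     # maximum state transitions per particle per second
seconds = 10**25            # generous upper bound on the age of the universe

bound = particles * ops_per_second * seconds   # 10^150 total events
print(round(math.log2(bound)))                 # ~498, i.e. roughly 500 bits
```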
quote:
Tracking back, I believe the mistake in your reasoning is when you say that algorithms can't create CSI. They can.
No, they can't. Please show where it says they can.
quote:
The NFL theorems, curiously, do not say this, no matter how many times you claim that they do. Here's a quote from the Wolpert paper.
That is but one of hundreds of lines in his paper, which just proves my point: an algorithm that does well on one landscape will, on average, do poorly on another, unless it has prior knowledge about the problem.
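The averaging claim under dispute here can be checked at toy scale (the four-point domain and the two probe orders are arbitrary choices for illustration): over ALL possible landscapes, any two fixed probe orders observe exactly the same multiset of value sequences, so neither is better on average.

```python
from itertools import product

# NFL-style averaging sketch: enumerate every landscape f mapping
# 4 points to {0, 1}. Any two fixed probe orders see exactly the
# same multiset of value sequences across all landscapes, so no
# order is better on average without prior knowledge of f.
points = [0, 1, 2, 3]
order_a = [0, 1, 2, 3]
order_b = [3, 1, 0, 2]

def trace(order, f):
    # the sequence of values an algorithm observes while probing
    return tuple(f[x] for x in order)

landscapes = [dict(zip(points, vals)) for vals in product([0, 1], repeat=4)]
traces_a = sorted(trace(order_a, f) for f in landscapes)
traces_b = sorted(trace(order_b, f) for f in landscapes)
print(traces_a == traces_b)  # True
```

Any advantage one order gains on some landscapes is cancelled exactly on others; only prior knowledge about which landscapes are likely breaks the tie.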

This message is a reply to:
 Message 136 by Richard Townsend, posted 07-27-2009 3:55 PM Richard Townsend has replied

Replies to this message:
 Message 142 by Richard Townsend, posted 07-27-2009 4:31 PM Smooth Operator has not replied

Smooth Operator
Member (Idle past 5144 days)
Posts: 630
Joined: 07-24-2009


Message 143 of 315 (516845)
07-27-2009 4:53 PM
Reply to: Message 140 by Perdition
07-27-2009 4:12 PM


quote:
How do you know? Is it possible, that if you left your keys on your bedside table, and a lot of other people put their keys on the bedside table, that maybe this other person put their keys on their bedside table too?
It's possible, but it isn't probable! And that's what we are talking about: probabilities.
quote:
Even if they didn't, the knowledge you gained from your first search, can help you in your second.
For which house? Some other, unknown house? No, it can't.
quote:
i.e., no keys on the ceiling, no keys in places too small for keys to fit. If you're truly operating from no prior knowledge in the first case, you would have to consider those possibilities the first time, but could rule them out the second.
Again, that information is gained by the search, so for the second search you already have some prior information. But if you had used that method on the second house the first time, you would not have done any better.
quote:
It does if you assume the conditions are similar, and until you find they aren't, this is a good assumption to make.
Well, that's an assumption that's not always going to work for you.
quote:
They don't need to be there 100%, they just need to be there more often than not.
Yes, they do, because otherwise your algorithm is not better than some other algorithm in all cases.
quote:
No, it's not information, it's an assumption. I generally assume things are similar to previous experiences until I am shown a place where they differ.
Yes, it is. If you modify the second search with some information and then do the search, that's called prior information.
quote:
It obviously does.
Nope.
quote:
You're assuming its different. Why?
Are you honestly telling me that ALL houses in the world are identical!?
quote:
If it's similar, it will help, if it's not, it will generate new information for the next time.
But if it isn't similar it won't help; that's the point.
quote:
In fact, this is how all information we have is generated, by taking one experience and applying it to the next. The first experience is almost always random (just watch a kid) and patterns emerge out of it as the kid learns.
Yes, because he extracts knowledge from his trials. But if you give him a totally unrelated problem, his method won't help him at all.

This message is a reply to:
 Message 140 by Perdition, posted 07-27-2009 4:12 PM Perdition has replied

Replies to this message:
 Message 145 by Perdition, posted 07-27-2009 5:13 PM Smooth Operator has replied

Copyright 2001-2023 by EvC Forum, All Rights Reserved

™ Version 4.2
Innovative software from Qwixotic © 2024