Understanding through Discussion


Author Topic:   Can you disprove this secular argument against evolution?
dwise1
Posts: 3562
Joined: 05-02-2006
Member Rating: 5.3

Message 12 of 293 (803490)
03-31-2017 11:40 AM
Reply to: Message 10 by Coyote
03-31-2017 9:56 AM

Coyote writes:

Your challenge is to roll 100 dice and get all sixes.
You can either bundle them all up at once and roll them, again and again till about the end of the universe, or:

You can roll them and then select only those not already sixes and roll just those dice again. You'll be done in a few minutes.

Not quite, but then no analogy is 100% correct.

Basically, you made the same mistake as Elliott Sober and Royal Truman (who in turn must have gotten it from Sober): you assume that as soon as a correct roll happens it gets locked in place and is no longer allowed to change. No, correct rolls are still subject to change.

Rather, the cumulative selection model is that each generation of attempts is based on the best of the previous generation. That is hard to model as a simple roll of the dice. Dawkins' WEASEL chooses letters at random to form an initial string, makes many copies of that string, and in each copy picks a position at random and replaces it with a random letter. Out of that population of strings, it selects the one that comes closest to the target string and uses it to make the next generation of many copies with random changes. Note that in that model a correct letter is just as likely to be chosen for random change as an incorrect one, despite Sober's and Truman's misunderstanding.
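Dawkins never published his code, so any reconstruction is an interpretation of his description. A minimal Python sketch of the cumulative-selection procedure just described might look like this (the population size and one-mutation-per-copy scheme are my assumptions, not Dawkins' published parameters):

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # capital letters plus space

def weasel(target, pop_size=100, seed=None):
    """Cumulative selection: each generation is a batch of copies of the
    best string from the previous generation, each copy with one randomly
    chosen position replaced by a random character."""
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while best != target:
        generations += 1
        offspring = []
        for _ in range(pop_size):
            copy = list(best)
            # A correct letter is just as likely to be mutated as a wrong one.
            copy[rng.randrange(len(copy))] = rng.choice(ALPHABET)
            offspring.append("".join(copy))
        # The fitness test: keep the copy that comes closest to the target.
        best = max(offspring, key=lambda s: sum(a == b for a, b in zip(s, target)))
    return generations

print(weasel("METHINKS IT IS LIKE A WEASEL"))  # generations needed; typically a few dozen
```

Note that nothing ever locks a correct letter in place; the bias toward the target comes entirely from selecting the best copy each generation.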

The outcome is that you produce the target string within minutes or even seconds, whereas starting from scratch every time (your "You can either bundle them all up at once and roll them, again and again till about the end of the universe") would indeed take thousands of times longer than the current age of the universe in order to have one chance in a million of succeeding.

Nearly three decades ago I could not believe what Dawkins was saying, so I wrote my own version, MONKEY, based directly on Dawkins' description of WEASEL (he did not post his code). It worked so phenomenally well that I still could not believe it, so I analyzed the probabilities involved (published as "Monkey Probabilities" or MPROBS).

The probabilities for success at every stage are low and for failure are high. What turns out to happen is that for complete failure, every single attempt must fail. With more parallel paths (eg, greater population sizes), the probability of consistent complete failure becomes vanishingly small, thus rendering eventual success virtually inevitable.
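That point can be illustrated numerically. If each attempt succeeds with probability p, the probability that all N attempts fail is (1 - p)^N, which collapses toward zero as N grows. The value of p here is arbitrary, chosen only to show the shape of the curve:

```python
p = 0.001  # assumed chance that any single attempt succeeds
for n in (1_000, 10_000, 100_000):
    p_all_fail = (1 - p) ** n  # probability that every one of n attempts fails
    print(n, p_all_fail)       # roughly 0.37, 4.5e-05, and 3.5e-44
```

Even with a one-in-a-thousand chance per attempt, a hundred thousand attempts make total failure essentially impossible.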

This message is a reply to:
 Message 10 by Coyote, posted 03-31-2017 9:56 AM Coyote has acknowledged this reply


Message 181 of 293 (804824)
04-13-2017 10:35 AM
Reply to: Message 177 by forexhr
04-13-2017 9:36 AM

forexhr writes:

In other words, program used a priori knowledge of the goal before the goal has been reached.

Nothing new. Indeed, Dawkins himself discussed that in the original book. Your "objection" is no show stopper. In many genetic algorithm experiments the final goal is not known a priori, but rather there is a functionality that needs to be optimized. The results are the same as in the WEASEL experiment in comparable times.

Compare that to single-step selection, which is the basis of most creationist misrepresentations of the probabilities of evolution working (evolution uses cumulative selection, which is what WEASEL demonstrates). Even though the single-step selection version of WEASEL uses the exact same "a priori" selection test, namely comparing the intermediate results to the target string, single-step selection fails so abysmally that it would take thousands or millions of times the accepted age of the universe for it to have any chance of succeeding.

If the use of an "a priori" test were the problem, then why doesn't single-step selection, which uses the exact same "a priori" test, work as well as cumulative selection? It doesn't even begin to come close. Therefore, it's not the fitness test's dependence on a priori knowledge that matters, but rather how cumulative selection works versus single-step selection.

IOW, your objections mean nothing.

I didn't believe what Dawkins claimed about WEASEL, so I used Dawkins' description of WEASEL as a specification to write my own program, MONKEY. When it ran phenomenally successfully, I couldn't believe that either, so I analyzed the probabilities involved. You can find that analysis in my document (link is to the HTML'ized version), "Monkey Probabilities" (MPROBS, also linked to through my MONKEY page). After that analysis, I finally understood why it works so well. And, no, knowing the target in advance has absolutely nothing to do with that.

BTW, if you are going to rely on Royal Truman's article and ReMine's book, they both grossly misrepresent how WEASEL and MONKEY work. I discuss that on my MONKEY page.

To recap, when a creationist misrepresents, as you have, that evolution uses single-step selection, that is a good indication that that creationist does not know what he is talking about. Or else he's lying (though most often it's that he is ignorant).

This message is a reply to:
 Message 177 by forexhr, posted 04-13-2017 9:36 AM forexhr has responded

Replies to this message:
 Message 184 by forexhr, posted 04-13-2017 3:19 PM dwise1 has responded


Message 182 of 293 (804825)
04-13-2017 10:40 AM
Reply to: Message 179 by vimesey
04-13-2017 10:30 AM

vimesey writes:

The thing is, of course, that the program would work just as well with any phrase.

My own example, MONKEY (which has been described as being the closest to what Dawkins described), uses the Roman alphabet in alphabetical order as the default target and as the example used in analyzing the probabilities involved (see MPROBS). I also give the user the option of supplying his own string, though I seem to recall that MONKEY only uses capital letters (that was nearly three decades ago, after all).

This message is a reply to:
 Message 179 by vimesey, posted 04-13-2017 10:30 AM vimesey has not yet responded


Message 206 of 293 (804960)
04-14-2017 2:06 PM
Reply to: Message 200 by forexhr
04-14-2017 10:15 AM

forexhr writes:

Looking at the cards in a deck dealt one after another has absolutely nothing to do with probability but with necessity - when the cards are being dealt it is necessary to get some distribution of cards.

Probability on the other hand, is the measure of the likeliness of being dealt specific cards that you specified before dealing.

Yes! Finally! You are finally starting to understand what's wrong with your stupid use of the Texas Sharpshooter Fallacy! You are finally starting to understand what a lie it is to take something that already happened and then prattle on about how it could not have possibly happened because the probability is so low, despite the simple fact that it did indeed happen.

Of course, being a creationist you will deny that you understand what you've done.

forexhr writes:

In the context of evolution, you need to get specific distribution of nucleotides in the DNA in order to cope with specific environmental conditions.

Yes, but just how do you get that specific distribution of nucleotides? Because there's not just one single distribution that would work, but rather many. And you do not start from scratch with each and every individual (ie, single-step selection); rather, each individual inherits its distribution from its parents (ie, cumulative selection), along with some possible minor changes (ie, mutations -- remember, the only mutations that can have any effect in evolution are those in germ cells, which are heritable).

Here's a pop quiz. Pick a protein, any protein. It's a chain of amino acids, a sequence of amino acids. Calculate the probability of getting that specific sequence of amino acids. Does that prove that that protein couldn't have evolved? Why? Because you believe that the gene for that protein just fell together at random, instead of having been inherited? IOW, that probability calculation says nothing about evolution because your randomness idea is not how evolution works. If you were to calculate probabilities based on how evolution actually works, then you might have something worthwhile to say.
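The pop quiz's arithmetic is easy to run. For a hypothetical protein of 100 residues (a length chosen purely for illustration), the "all at once, at random" probability is (1/20)^100, a number so small it is best handled in log space:

```python
from math import log10

n_residues = 100                  # hypothetical protein length, for illustration only
log_p = -n_residues * log10(20)   # log10 of (1/20)**100
print(f"P = 10^{log_p:.1f}")      # prints: P = 10^-130.1
```

An impressively tiny number, and an entirely irrelevant one, since nobody claims the sequence fell together in a single random step rather than being inherited with modification.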

The other problem with such a random-protein-creation claim is that it ignores the simple fact that there are a very large number of possible sequences for any specific protein. Different sequences in different species. You can calculate how closely related different species are by comparing their protein sequences. Comparing cytochrome c we find that the human and rattlesnake proteins differ by 14 amino acids, the human and macaque proteins by one, and the human and chimpanzee proteins are identical.

The relevance here is that it is not just one single specific distribution of nucleotides that would work, but rather a very great many distributions, some working better than others but still working. So when you calculate your probabilities, you need to take that into account in order to avoid the same stupid mistake as that random-protein-creation claim.

You also need to factor in how those distributions come about. Is it by single-step selection, in which everything falls together randomly and either works or doesn't, and in the latter case you just start all over from scratch? Since you have read my MPROBS document, you already know that the probability of single-step selection succeeding is abysmally small, virtually impossible. And that is also not at all how evolution works.

Rather, evolution uses cumulative selection in which an individual inherits the sequence from its parents along with some possible minor changes (ie, mutations). If that individual does a good enough job of surviving and reproducing, then it passes its own sequence to its own offspring along with some possible minor changes. And so on. Again, we find the probability of cumulative selection succeeding to be virtually inevitable since the probability of it consistently failing is virtually impossible.

forexhr writes:

Please do yourself a favor and go educate yourself.

Good advice. Why aren't you following it?

This message is a reply to:
 Message 200 by forexhr, posted 04-14-2017 10:15 AM forexhr has responded

Replies to this message:
 Message 214 by forexhr, posted 04-15-2017 9:36 AM dwise1 has responded


Message 207 of 293 (804961)
04-14-2017 2:21 PM
Reply to: Message 203 by forexhr
04-14-2017 12:32 PM

forexhr writes:

On the other hand, favorable outcomes of a particular organism were defined by the environment - favorable outcomes were DNA arrangements that contained information to cope with a given environmental condition while the total possible outcomes were total possible DNA arrangements.

Close, but not quite.

On the other hand, favorable outcomes of a particular organism were defined by the environment - ...

Yes! And how close a particular organism comes to meeting those requirements as defined by the environment is called fitness. And it is the organism's fitness on which natural selection works.

... - favorable outcomes were DNA arrangements that contained information to cope with a given environmental condition while the total possible outcomes were total possible DNA arrangements.

No, favorable outcomes are those "DNA arrangements" being passed on to the next generation. And to the next, and to the next, etc. Though of course we're attaching value judgments to that outcome. You could be the only individual with an extremely favorable "DNA arrangement", but if it fails to promote your ability to pass it on to your offspring then it will die with you. Of course, if your siblings also have it and you ensure the survival of them and their offspring (ie, altruism) then that arrangement will survive.

Also, those "DNA arrangements", the genotype, are not what are selected. Rather, it is the phenotype produced by that genotype which possesses the property of fitness and which is selected for or not. It's the phenotype that is favorable or not, not the genotype.

This message is a reply to:
 Message 203 by forexhr, posted 04-14-2017 12:32 PM forexhr has not yet responded


Message 211 of 293 (804967)
04-14-2017 7:28 PM
Reply to: Message 190 by forexhr
04-14-2017 3:04 AM

Percy writes:

Evolutionary programs are written by people, but they model evolution, not intelligent design. The programmer defines the "natural environment" so as to model the real world to the degree of accuracy necessary.
Just as an experimental biologist doesn't change selection into an intelligent process by manipulating an organism's environment, neither does a programmer by manipulating a program's "environment". The process modeled is still one of descent with modification and selection.

To which forexhr writes:

Wrong. Evolutionary programs all have something that is called active information (fitness function), which is a form of intelligent guidance.

Sorry, but you're the one who got it wrong. The fitness function is there as part of the evolutionary model, to model fitness. Which also makes your later denial of fitness in evolution also wrong.

The evaluation of an organism's fitness in evolution is based on its ability to reproduce and for its offspring to survive long enough to reproduce as well. Of course, said evaluation is implicit rather than an explicit step.

In the vast majority of evolutionary programs, while the fitness test is an explicit step, the target is not. For example, if you are trying to solve a mathematical problem involving a large number of variables, then the fitness function would involve plugging in each candidate set of values (each set being one "genotype"), with the set that comes closest to solving the problem rated as having higher fitness. In the case of programming an FPGA to perform a function (such as a high-performance amplifier, as in one experiment), you program each possible solution into an FPGA, and the one that works best has higher fitness and is selected to spawn the next generation of attempts.
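As a hedged sketch of that idea: here is a toy genetic algorithm in which the fitness function only measures performance (how small a function's value is), and the selection loop never compares a genotype against any pre-specified target. The function, population size, and mutation scale are all arbitrary choices for illustration:

```python
import random

def fitness(x):
    """Performance measure: smaller is better. The selection loop below
    never sees this function's minimum; it only compares candidates."""
    return (x - 3.21) ** 2 + 1.7

def evolve(pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        best = min(population, key=fitness)                    # cumulative selection
        population = [best + rng.gauss(0, 0.5) for _ in range(pop_size)]
    return min(population, key=fitness)

print(round(evolve(), 2))  # converges near 3.21 without that value ever being a target
```

The "target" here is implicit in the problem itself, exactly as an environment implicitly defines what counts as fit; nothing in the evolutionary loop knows the answer in advance.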

Or there could just be no explicit fitness test. When you read my MONKEY page, at the bottom I discuss a few such programs. TBUGS sets up a feeding ground for a bunch of bugs who inherit feeding behaviors which can mutate. Toss the bugs into different kinds of environments where food growth differs and you soon see certain kinds of behaviors dominate all based on how the bugs can survive best. BTW, as I recall the "genes" for behavior are very simple, so behavior like "twirling" is not programmed in but rather is an expression of those "genes".

TIERRA is much more ambitious. It's a computer environment filled with virtual CPUs which start off with the same code for using computer resources (memory and clock cycles) to survive and reproduce. There was no explicit fitness function, as I recall. After a while, some of the CPUs became parasites who fed off of other CPUs in reproducing. Non-parasite CPUs evolved defenses against the parasites and there even arose a type of hyper-parasite which fed off of the parasites. In designing the original code and analyzing it for what could happen, the researchers determined the minimum length that code could be and still work. The CPUs evolved a far shorter code which worked, something that was far beyond the capability of their "intelligent designers".

Evolutionary programs are written to solve problems that people either cannot solve or find very difficult to solve. Many of them are engineering problems. All that the programmer can do is set up the environment and provide a fitness test, but that fitness test just measures performance, basically what happens in nature.

As both Dawkins and I pointed out quite explicitly, neither WEASEL nor MONKEY is an evolutionary simulation, so one might question calling them evolutionary programs. Rather, they implement two forms of selection, one of which is indeed modeled after evolution (ie, cumulative selection), in order to demonstrate and compare their capabilities. And I should point out here yet again that both selection methods use the exact same fitness test, so why does single-step selection fail miserably while cumulative selection cannot fail?

forexhr writes:

In evolution this active information does not exist, meaning evolution can select only those individuals who manage to reach the right corner of the field. In other words, in the real world the path towards this corner is not guided but is carried by random means.

I don't know what your proposed simulation is supposed to simulate, but it certainly isn't evolution.

Fitness does indeed exist in evolution, so your statement is false. And your assessment of what it would take for evolution to succeed is also wrong.

forexhr writes:

So yes, evolutionary programs model intelligent design.

No, they do not.

To start with, IDists look at something complex in nature and call it "design". Well, I happen to be an intelligent designer, a software engineer, so I happen to know a few things about design. Intelligent designers do not create complex designs. For that matter, we try our best to reduce the level of complexity in our intelligent designs, striving instead for elegance. Complexity just breeds trouble and makes it so much harder to maintain the design and the product. Too much complexity is the sign of an incompetent designer.

As I said, one of the uses of evolutionary programs is to use evolutionary processes to create a design. From that work we discovered something rather interesting: a striking characteristic of the products of evolutionary programs is complexity. Consider that amplifier FPGA code I mentioned: the product was highly complex, indeed irreducibly complex, since you couldn't make a single change without breaking it. It was so complex that it made use of the analog electrical characteristics of the FPGA's digital circuitry (hence it was not portable).

In software design, we joke that we use evolution. We want to create a product that does something, so we use as our baseline an existing program which does most of what we want but which needs a few changes. Or we want to add a feature that's similar to an existing feature, so we copy that code and modify it. How does evolution create a new structure? By modifying an existing structure, or by copying a structure and modifying the copy (eg, creating a new protein). And as a result of using evolutionary design techniques, our programs become increasingly complex and increasingly difficult to debug or to maintain.

Don't you just love the irony? Complexity actually disproves intelligent design. So the next time you see something complex in nature, realize that that's a sure sign that it had evolved.

Edited by dwise1, : tweeked the final line

This message is a reply to:
 Message 190 by forexhr, posted 04-14-2017 3:04 AM forexhr has not yet responded


Message 212 of 293 (804982)
04-14-2017 9:30 PM
Reply to: Message 184 by forexhr
04-13-2017 3:19 PM

forexhr writes:

My objections mean that in evolutionary programing, targets are a priori selected by intelligent agents. Without this information about the search space structure evolutionary programing does no better than blind search.


As we have already discussed, the vast majority of evolutionary programming experiments do not rely on a priori selected targets. Some do not even have explicit fitness functions, but rather let the environment the programmers created do the selecting, as happens in nature.

Evolutionary programming involves applying evolutionary processes to solving complex problems. Problems for which we do not know the solution and hence could not specify the solution in advance even if we wanted to.

You really need to learn something about evolution and about evolutionary programming. Then at least you might be able to raise an actual objection.

As it is, you sound like you're just parroting what some IDiots like Dembski have written. For example, I quote from Dembski's presentation at the April 2000 "Nature of Nature" conference at Baylor University, "Can Evolutionary Algorithms Generate Specified Complexity?", because in it he grossly misrepresents how WEASEL (and hence MONKEY) work, falsely claiming that the program cheats. Practicing geologist and former young-earth creationist (former because of his field work in geology) Glenn R. Morton also attended that conference and reported on it. He reported that in the subsequent question and answer session, Dembski was faced with "[h]ands ... upraised all over the room" by people who worked with genetic algorithms and knew better than what Dembski had told them. Dembski's response? "Dembski had the deer in headlights look."

forexhr, do not be that deer. Learn something about the subject matter.

This message is a reply to:
 Message 184 by forexhr, posted 04-13-2017 3:19 PM forexhr has not yet responded


Message 213 of 293 (804983)
04-14-2017 9:45 PM
Reply to: Message 184 by forexhr
04-13-2017 3:19 PM

forexhr writes:

My objections mean that in evolutionary programing, targets are a priori selected by intelligent agents.

You haven't answered my question. If that "target ... a priori selected by intelligent agents" is your "explanation" for the phenomenal success of WEASEL and MONKEY when they use cumulative selection, then why do they fail abysmally when they use single-step selection?

I'm an engineer (software), which means that I am frequently involved in troubleshooting problems and debugging code. One of the cardinal rules is to change one and only one thing at a time, then test for whether that solved the problem. The use of control groups in science experiments is for the same reason. If you change several things and the problem is solved, you still don't know which change was responsible.

MONKEY (actually, I forget whether WEASEL had a single-step selection mode) offers such a controlled experiment. You can choose single-step or cumulative selection. That's the only thing that changes. Everything else is the same, including having a pre-determined target string. So if there is any difference in the program's performance, it must be because of the only thing that changed, the selection method used.
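The controlled experiment described above can be sketched in miniature: one shared fitness test, two selection methods, and nothing else different. This is a toy illustration of the experimental design, not MONKEY itself; the target, alphabet, and population size are arbitrary choices:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
TARGET = "METHINKS"
rng = random.Random(1)

def fitness(s):
    """The one and only fitness test, shared by both selection methods."""
    return sum(a == b for a, b in zip(s, TARGET))

def random_string():
    return "".join(rng.choice(ALPHABET) for _ in TARGET)

def single_step(max_attempts=100_000):
    """Start from scratch on every attempt; nothing carries over."""
    for attempt in range(1, max_attempts + 1):
        if fitness(random_string()) == len(TARGET):
            return attempt
    return None  # every blind try failed

def cumulative(pop_size=100):
    """Each generation descends from the best of the previous generation."""
    best, generations = random_string(), 0
    while fitness(best) < len(TARGET):
        generations += 1
        kids = []
        for _ in range(pop_size):
            s = list(best)
            s[rng.randrange(len(s))] = rng.choice(ALPHABET)
            kids.append("".join(s))
        best = max(kids, key=fitness)
    return generations

print("single-step:", single_step())  # all but certain to be None (p per try is 26**-8)
print("cumulative:", cumulative())    # succeeds, typically in a handful of generations
```

Since the fitness test is literally the same function in both branches, any difference in outcome can only come from the selection method.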

So yet again, since they both use the exact-same fitness test, why does single-step selection fail so abysmally while cumulative selection succeeds so spectacularly?

This message is a reply to:
 Message 184 by forexhr, posted 04-13-2017 3:19 PM forexhr has not yet responded


Message 217 of 293 (805122)
04-15-2017 10:25 PM
Reply to: Message 214 by forexhr
04-15-2017 9:36 AM

forexhr writes:

You have become so desperate, that you are now trying to project your ignorance of basic mathematics on me, by using an infantile sarcasm.

Maybe the idea is foreign to you, but not everybody enjoys Schadenfreude. Many of us actually feel empathy for the person who is incapable of understanding a simple and obvious concept like the Texas Sharpshooter Fallacy. And when that person does finally start to get it, we actually feel good for him and want to cheer him on.

And unfortunately for you, you are a creationist. That causes you problems because creationists don't have any evidence to support their claims nor any valid arguments, so they have to rely on fallacies and on false and misleading claims. A strong reason for your inability to understand the Texas Sharpshooter Fallacy is that you cannot allow yourself to understand it. It's either that or resort to deliberate lying because you do understand why your argument is false -- we have seen far too much of that from creationists.

BTW, my third bachelor's degree was in math. So if you think that I am so ignorant of basic mathematics, then do please explain what I got so wrong in my "MONKEY Probabilities" (MPROBS) document. You know that one, where I analyze the probabilities of single-step selection and cumulative selection in order to understand and explain why cumulative selection succeeds so spectacularly while single-step selection fails so abysmally. Interesting how hard you are working to avoid that question.

Oh, and could you please remind me what your qualifications are?

forexhr writes:

Although nobody witnessed the formation of a particular bio-structure, and therefore being able to define the "number of favorable outcomes" before its formation, this number is definable with reference to a particular environment. If this environment is the operator of Lambda phage genome to which lambda repressor binds, then the "number of favorable outcomes" are all functional lambda repressor folds that are capable to regulate the transcription of lambda phage genome, while the "total number of possible outcomes" are all possible 92-residue sequences. Given the study referenced in the O.P., there are 10^56 "favorable outcomes" and 10^119 possible outcomes (20^92), which gives P(A) = 10^56/10^119 = 10^-63.

You are still talking about single-step selection. We already know that the probability of single-step selection succeeding is abysmally small. Using as an example MONKEY's default target, the alphabet in alphabetical order, the probability of success using single-step selection is 1.6244×10^-37. I calculated how many attempts would be needed to bring that up to one chance in a million and arrived at 6.156×10^27 attempts. To provide some perspective, I suggested using a supercomputer that can perform one million attempts per second (that was written in 1989, when an XT clone (Norton Factor 2) could only do about 200 attempts per second; personal computers now are maybe a thousand times faster, though I'd need to verify that). It would take that hypothetical supercomputer about 195 trillion (1.95×10^14) years to perform those 6.156×10^27 attempts. That is over 10,000 times longer than the accepted age of the universe. Hence abysmal.
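Those figures check out against each other, taking the paragraph's own premises (26 positions, each drawn from a 26-letter alphabet, and a machine doing one million attempts per second):

```python
from math import log10

p = 26.0 ** -26                    # single-step success probability per attempt
print(f"p = 10^{log10(p):.2f}")    # prints: p = 10^-36.79, i.e. about 1.62e-37

attempts = 6.156e27                # the attempt count quoted in the text
years = attempts / 1e6 / 3.156e7   # one million attempts/second; ~3.156e7 seconds/year
print(f"{years:.2e} years")        # prints: 1.95e+14 years, i.e. about 195 trillion
```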

But nobody except a creationist would think that single-step selection applies to your "argument against evolution", since that is far more descriptive of creation ex nihilo. Rather, we would argue that the structure had evolved, which means that an evolutionary probability model would be in order, one that uses cumulative selection.

If you want to come up with an argument against evolution, then it has to deal with evolution, not with something completely different as you have done. And to accomplish that, you really do need to learn something about evolution. Until you do that, you will only succeed in making yourself, your position, and your religion look foolish. I am asking you to not do that.

forexhr writes:

I will set up an experiment that will allow you to test your suggestion that there is a link between Darwinian evolution and evolutionary programming in a sense that the path towards the right solution in the real world in not carried by random means.

It isn't. Anybody who has any degree of actual understanding of evolution would know that. That you do not know that is obvious just from that statement. And if you don't understand evolution, then how could you possibly propose a valid simulation of it?

forexhr writes:

From the perspective of Darwinian evolution, environmental condition is something to which an organism must adapt in order to improve its chances of survival and reproduction.

That's not how it works. This is not starting out very well.

forexhr writes:

If this environmental condition is intron-exon gene structure then ...

Wrong. DNA is not the environment. And DNA is not what is selected; rather, what is selected is what is produced when the DNA is expressed. Haven't you ever learned the difference between the genotype and the phenotype?

Yet again, this is why you really need to learn something about evolution.

forexhr writes:

But, you opposed my claim that in the real world the path towards the right solution is carried by random means.

Because it's wrong, as we have all tried to explain to you over and over again while you employed the typical creationist mistake of stubbornly keeping yourself from learning anything, because if you were to learn then you would realize how wrong you are.

Yes, there are random elements in mutation and recombination, but selection is decidedly non-random -- stochastic, yes, but strongly directional. True, we would not be able to predict the actual results of evolution, because there are many different solutions that can arise, and if we were to start the whole process all over again we would undoubtedly get a different result. But every one of those different solutions will work in very deterministic ways, and we can examine them after the fact and understand why they work and how they evolved.

You keep trying to remove selection and fitness, which further demonstrates that you do not understand evolution. Please try to learn something about evolution.

And also learn something about evolutionary programming. Not from the creationists and IDiots, because they are lying to you.

forexhr writes:

So, there you go... try to solve this problem

What problem? You did not present one.

And do please stop trying to divert our attention away from my question to you, and answer it. You claim that my MONKEY works so well because it knows the target string. But that is also true of the single-step selection test that MONKEY also performs. So why does what you claim makes the cumulative selection test work also fail to work for the single-step selection test? My answer is that it is the nature of the selection methods themselves that accounts for that vast difference in performance. What is your answer?

This message is a reply to:
 Message 214 by forexhr, posted 04-15-2017 9:36 AM forexhr has responded

Replies to this message:
 Message 222 by forexhr, posted 04-17-2017 6:52 AM dwise1 has not yet responded

Copyright 2001-2018 by EvC Forum, All Rights Reserved
