Topic: How long does it take to evolve?
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3
quote: It is still those kinds of complexity that I am identifying as anathema to good design.

I know what I meant when I said it.
dwise1
quote: Most directly, I found on your website a program you ran to try to get the word monkey out of randomness. I understood the first part of your explanation, but lost you somewhere in the middle. I was hoping you could take the trouble to answer me in layman's terms, if possible.

You need but ask. You should quote the part of the text you don't understand so I can do a better job of answering. That would include telling me which page it's on. For example, you appear to be talking about my MPROBS page, http://cre-ev.dwise1.net/mprobs.html, in which I analyze the probabilities involved, but you could be talking about the main MONKEY page, http://cre-ev.dwise1.net/monkey.html. Knowing which page you got lost in the middle of would help me to help you.

BTW, a few things about MONKEY. I wrote that program and the supporting documentation about 25 years ago. At the time I was working extensively in Pascal and was just about to start learning C and C++, so I wrote it in Pascal. That means that the source code I provide is in Pascal, and it is rather difficult to find support for Pascal anymore.

Then I ran into the first problem with the executable. The program uses timing functions which depend on a calibration routine that the startup code performs. Well, PCs keep getting faster and faster. It got to the point where PCs were just plain too fast for that startup code, causing its count to overflow and the program to crash. It took some searching, but I finally found a code patch to clear that up.

Now I think that there's a new problem for the executable. I don't think that the newer 64-bit Windows systems will run it. They can run 32-bit programs, but not 16-bit. I'm sure that MONKEY.EXE is a 16-bit program, which means that it shouldn't be able to run under 64-bit Windows. I haven't tried it yet. {ABE: As you were! I did try it four years ago and it did not work. See the history at the top of the http://cre-ev.dwise1.net/monkey.html page.}

In the meantime, I've rewritten MONKEY in C and I feel fairly confident that it's working. However, I haven't completed the conversion of MPROBS yet. I will also need to work out possible issues with distributing MONKEY.EXE. So the upgrades will take a while.
quote: To state simply what I don't get: I want to hypothesize that, for all practical purposes, one can never get complex organization from randomness. The monkey program, modeled after Dawkins' methinks-weasel sentence, was intended to demonstrate that with NS it is not as far fetched as it seems.

Out of randomness, no, of course not! Out of an evolutionary process, now that's something entirely different. Evolution is not randomness.

Neither Dawkins' WEASEL nor my MONKEY (which others have described as the most faithful implementation of WEASEL, which Dawkins only described in his book, The Blind Watchmaker) simulates evolution. Neither simulates natural selection. Both have been criticized by creationists as failing to simulate evolution, which is something that both Dawkins and I explicitly denied attempting in our original presentations. Rather, we were both demonstrating the difference in performance of two different forms of selection: single-step selection and cumulative selection. Both our approaches (which were essentially the same) were very abstract, which for me was fortunate, since an abstract problem is much more conducive to mathematical analysis. And both forms of selection are tested in exactly the same way, so the differences we observe are entirely due to the method of selection.

Single-step selection is where you attempt to assemble the string in a single attempt. If that attempt fails, then you start all over again from scratch. In one science show, it was likened to travelling from one corner of a chess board to the opposite corner by taking one huge random step and, upon failing, starting all over again from that same corner.

Cumulative selection is where you take the previous best result and use that as the starting point of the next attempt. Thus, instead of trying to assemble the entire string in each step, you make a small change to the previous best attempt.
This approach also includes a population of attempts from which you choose the best of the litter (you could and should also try that with single-step selection, but it would not help since you'd throw the entire litter away anyway). In that science show analogy, it would be like randomly choosing a few possible small steps and selecting the one that has gotten you closest to the target corner.

Both by demonstration (cumulative selection succeeds in less than a minute, whereas single-step selection would take several times longer than the age of the universe to succeed) and by mathematical analysis of the probabilities (cumulative selection with a population of 100 has a 99% probability of success within 69 generations, while single-step selection has an extremely low probability of ever succeeding), cumulative selection is virtually certain to succeed rapidly whereas single-step selection is virtually impossible.

The only connection with evolution is that, in comparing the methods of selection in creation and in evolution, it is obvious that creation uses single-step selection and evolution uses cumulative selection. That should come as no surprise, since cumulative selection was itself modeled on natural selection and evolution. A famous biologist (I don't recall his name at the moment) has been quoted as saying that natural selection makes the highly improbable inevitable. After playing with cumulative selection, we can now see why that is.

BTW, all the creationist probability arguments that I have seen use single-step selection to determine the probability of something evolving. Well, of course the results they get are abysmally low! They're actually calculating the probability of it being created ex nihilo!

Again, here's the basic evolutionary process:
Or to express it far more abstractly, as is done in genetic algorithms:
BTW, the application of genetic algorithms not only yields optimal solutions to complex engineering problems, but, when applied to design, produces designs that are highly complex, even irreducibly complex.
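To make the difference between single-step and cumulative selection concrete, here is a minimal sketch in Python (not MONKEY itself, which is in Pascal and C; the alphabet, mutation rate, and population size are my own arbitrary choices) of the two methods run against a Dawkins-style target string with the identical fitness test:

```python
import random
import string

ALPHABET = string.ascii_uppercase + " "
TARGET = "METHINKS IT IS LIKE A WEASEL"

def score(attempt):
    # the fitness test, identical for both methods: count of matching positions
    return sum(a == t for a, t in zip(attempt, TARGET))

def single_step(max_tries, rng):
    """Assemble the whole string in one attempt; on failure, start over from scratch."""
    for i in range(1, max_tries + 1):
        attempt = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
        if attempt == TARGET:
            return i
    return None  # the virtually certain outcome for any feasible number of tries

def cumulative(pop_size=100, mutation_rate=0.05, rng=None):
    """Copy the previous best attempt with small random changes each generation.
    Note: correct letters are NOT locked in; any position may mutate back."""
    rng = rng or random.Random()
    best = "".join(rng.choice(ALPHABET) for _ in range(len(TARGET)))
    generations = 0
    while best != TARGET:
        generations += 1
        litter = ["".join(c if rng.random() > mutation_rate
                          else rng.choice(ALPHABET) for c in best)
                  for _ in range(pop_size)]
        best = max(litter, key=score)  # best of the litter seeds the next generation
    return generations

rng = random.Random(42)
print(single_step(100_000, rng))  # None: single-step never even gets close
print(cumulative(rng=rng))        # typically converges in a few hundred generations
```

With these toy settings the run mirrors the comparison above: cumulative selection finishes in moments, while single-step selection effectively never finishes.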
quote: What I don't get is that the program was designed with "monkey" in mind. Wouldn't it be more fitting to compare NS to trying to program some unknown word? It could be monkey, bird, or perhaps tyrannosaurus, but not programmed in advance.

That is indeed what would be needed to attempt to simulate natural selection. Everybody who criticizes WEASEL or MONKEY zeroes in on the teleological problem, the fact that we know in advance what the target is and we test each attempt against that known target. Of course, that is not what evolution or natural selection does, since natural selection is the immediate environment exerting its influence. My best response is to point out that we are comparing two different selection methods, and by applying the exact same fitness test in both cases, the differences are due to the selection methods themselves and not to knowing what the target string is.

As I said, I've started thinking through the specifications for such a project, but frankly it is not trivial. However, there have been simulations that do what you ask. Indeed, such projects are common in the field of artificial life. Read up on Thomas Ray's Tierra program (the Tierra home page is at http://life.ou.edu/tierra/). He created an artificial environment within a computer and an initial population of organisms whose genetic code consisted of an abstract programming "language" that handled such tasks as "eating" (consuming computer resources, abstractly of course) and reproducing (creating copies of themselves that were similar yet slightly different from their parents). One thing that happened was the evolution of parasites, organisms that had lost the ability to reproduce but that could infect another organism and use its genetic code to reproduce. Completely unplanned for; it just evolved on its own. Another thing was the invention of a form of reproduction that the designers had deemed impossible and that they later called "unrolling the loop."
Go to the Tierra home page for much more detailed documentation and source code.

Another, much simpler simulation was Bugs, which was featured decades ago in a Scientific American "Mathematical Recreations" column that included enough code for me to program it for myself -- that was back when all my work was in MS-DOS, so the graphics programming was specific to that environment. You had organisms called "bugs" that wandered through their environment feeding. Their "genes" specified their behavior, specifically in what manner they would move as they sought food; eg, zoomers that moved straight ahead, or twirlers that moved in tight loops -- all movement was wrap-around, so a bug moving off the edge would reappear on the opposite edge. If a bug got enough food, it would spawn a couple of offspring who were very similar yet slightly different, so new behaviors could emerge. If a bug couldn't get enough food, it would starve and die.

What was different was how food would grow in the environment. If it grew uniformly, then zoomers would thrive, since they were constantly moving on to greener pastures, whereas twirlers would soon starve as they overgrazed their little patch of used-to-be-green. If it grew only in one area, then twirlers that had hit upon that patch of green would thrive, whereas zoomers would zoom out into the barren wilderness and starve. Of course, none of those outcomes were programmed into the simulation; rather, they resulted from the bugs' evolved new behaviors being selected for or against by the environment. The type of bug that would evolve and become prevalent was the type best suited for that environment.

Now, back to MONKEY. It wouldn't be enough to allow for some random unknown word, since that would just return us to the teleology problem; it would simply replace a letter sequence we decided to use with one that we didn't decide upon.
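As an aside, the core dynamic of that Bugs simulation (zoomers versus twirlers under different food distributions) can be sketched in a few lines. The grid size, food-regrowth delay, and movement rules here are my own toy simplifications, not the column's actual code:

```python
SIZE = 20  # wrap-around grid, as in the Bugs world

def run_bug(strategy, patchy, steps=100, regrow=8):
    """Count how much food one bug eats. A 'zoomer' always moves straight
    ahead; a 'twirler' turns every step, circling a tight loop. Food is
    either everywhere (patchy=False) or only in a central patch, and a
    grazed cell regrows `regrow` steps after being eaten."""
    x, y, heading = SIZE // 2, SIZE // 2, 0
    moves = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    last_eaten = {}
    eaten = 0
    for t in range(steps):
        if strategy == "twirler":
            heading = (heading + 1) % 4        # turn every step: a tight loop
        dx, dy = moves[heading]
        x, y = (x + dx) % SIZE, (y + dy) % SIZE  # wrap-around movement
        in_patch = abs(x - SIZE // 2) <= 2 and abs(y - SIZE // 2) <= 2
        if (in_patch or not patchy) and t - last_eaten.get((x, y), -regrow) >= regrow:
            eaten += 1
            last_eaten[(x, y)] = t
    return eaten

# uniform food favors the zoomer; one green patch favors the twirler
print(run_bug("zoomer", patchy=False), run_bug("twirler", patchy=False))
print(run_bug("zoomer", patchy=True), run_bug("twirler", patchy=True))
```

Even in this stripped-down form, which behavior "wins" is entirely an outcome of the environment, not anything programmed in as a winner.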
Instead, what we would need is some practical problem to be solved, such that each possible sequence would be tested for how much closer it came to solving that problem.

In visualizing a possible simulation, I keep falling back on graphics from a science journal (either Science or Nature) covering one of the first conferences where punctuated equilibrium was discussed. It depicted generation-by-generation gradual evolution as a series of overlapping bell curves whose center was migrating, taking the population with it. From that, I visualize the x-axis as representing each organism's fitness (ie, how its phenotype relates to the environment -- obviously, I'd have to take an n-dimensional orthogonal space relating an n-dimensional genetic space to an n-dimensional space of environmental factors and project it down to a single dimension to get that value, but I'd much rather just wave my hands a lot at this point). And the y-axis would be how much of the population is at that x-value.

In a stable situation, we could expect most of the population to be clustered about the optimal fitness value, with smaller portions falling further from that value. During reproduction, the population size would grow and the bell curve should also spread out as the population becomes more varied. However, selection would then shrink the population back down, with most clustered about that optimal fitness value. That is an important point, since it shows that stasis is not due to evolution having "stopped", but rather that evolutionary processes are still very much in effect and are actively keeping the well-adapted population in stasis about that optimal fitness value.

When the environment changes, the optimal fitness value should also change and that optimal point on the x-axis should shift either left or right.
Now during selection the formerly optimally fit are less fit, whereas a portion of the population that used to be less fit has become more optimally fit. Thus the mean portion of the new population will increase about the new optimal value, and the outer fringe of the old population, now even further away from the new optimum, will be even less fit and should die off.

Decades ago there was a science special hosted by a pre-accident Christopher Reeve in which he stated, about the varying rate of evolutionary change, that the further away from that optimal fitness the population is, the faster it will change, and the closer it gets, the slower the rate of change. My reaction to that was "Huh? What could possibly cause that?" Now, with this bell curve model, that makes sense. Now, how to implement that in a simulation and how to report the results.
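One way to sketch that bell-curve model, with each organism's fitness collapsed to a single x-value as hand-waved above (the population size, mutation spread, and cull-half selection rule are all my own arbitrary choices):

```python
import random

def simulate(optimum, start_mean, generations, rng):
    """Toy version of the bell-curve model: each organism is one x-value.
    Reproduction spreads the curve out; selection culls it back down
    around whichever x-value the environment currently favors.
    Returns the population mean after each generation."""
    pop = [rng.gauss(start_mean, 1.0) for _ in range(200)]
    means = []
    for _ in range(generations):
        # reproduction: two slightly varied offspring per parent
        offspring = [x + rng.gauss(0.0, 0.5) for x in pop for _ in range(2)]
        # selection: keep only the half of the litter closest to the optimum
        offspring.sort(key=lambda x: abs(x - optimum))
        pop = offspring[:200]
        means.append(sum(pop) / len(pop))
    return means

rng = random.Random(0)
means = simulate(optimum=10.0, start_mean=0.0, generations=60, rng=rng)
# the mean moves quickly while the population is far from the optimum, then
# settles into stasis around it -- still under active selection the whole time
```

This reproduces the Reeve observation: the rate of change is large when the population is far from the optimal fitness value and falls off as it closes in, and the resulting stasis is maintained by selection, not by evolution having stopped.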
quote: Now, the most likely answer I would expect is that each step of NS is intrinsically "programmed" to stay put, as it provides some sort of advantage to the creature. But I don't get that either, as it is hard to believe that each step towards a "good thing" really helps that much. Ie, is a little snub of liver or kidney really so useful to an animal?

Google on Weasel and "Royal Truman" for his creationist "refutation" of WEASEL. Part of that "refutation" was the claim that once a correct letter had been found, it was locked in place and not allowed to change again. The problem is that that was not a feature in WEASEL nor in MONKEY. If you run MONKEY.EXE (I forget whether it will still work) with a small enough population size to slow it down (try a population of 10), you will see it repeatedly approach the target and then back-slide away, something that would be impossible if correct letters were not allowed to change. Also, in MPROBS I explicitly include the probability of a correct letter being selected and changed to an incorrect one. I was able to track Truman's mistaken idea down to something written by somebody else (Erick Sobel?) who had mistakenly introduced that idea of locking letters in place.

I mention that because that's what it sounds like you're attributing to natural selection. Natural selection is not intrinsically programmed in any way. It doesn't even exist as an actual something; rather, it's the name we give to what we observe happening in nature: organisms that are better adapted to survival in their environments tend to be the ones to survive and to pass those better-adapted traits on to their offspring. No intrinsic programming, nothing to force them to "stay put". If a trait helps an organism to survive, then it will tend to remain in the population's gene pool, whereas a trait that detracts from survival will tend to be removed from the gene pool.
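Going back to the MPROBS point for a moment: the back-slide probability mentioned above (a correct letter being selected for mutation and then changed to an incorrect one) is simple arithmetic. This is my own restatement with an assumed mutation rate, not the actual figures from MPROBS:

```python
# assumed per-letter mutation rate, and alphabet size (26 letters plus space)
m, A = 0.05, 27

# a correct letter back-slides if it is selected for mutation (probability m)
# and the replacement drawn from the alphabet is any of the A-1 wrong letters
p_backslide = m * (A - 1) / A
print(round(p_backslide, 4))  # 0.0481
```

So with any nonzero mutation rate, every correct letter is permanently at risk, which is exactly why an unthrottled MONKEY run can visibly back-slide away from the target.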
As I suggested, in The Blind Watchmaker Richard Dawkins deals with your question by describing the evolution of the eye (repeated very briefly by RAZD in his Message 101), which itself is a slightly different version of Charles Darwin's discussion, which starts with (paraphrased) "how the eye could have evolved is impossible to imagine, but that is due to our inability to visualize it, since if we apply reason ..." and then goes on for another two pages describing a long list of intermediate forms of the eye that do exist in various animals and which are functional as light-sensing organs of differing efficiency -- creationists frequently misquote Darwin about the eye, stopping at "is impossible to imagine".

When a creationist talks about half an eye being useless, he almost never explains his reasoning. The best that I can piece together is that somehow he's thinking about taking a razor blade to an existing human eye and cutting out distinct parts, the loss of any of which would render that eye useless as an eye. A co-worker repeated the argument as he had heard it: they calculate the improbability of an eye evolving by coming up with a probability for each separate part of the eye evolving all by itself, separately from any of the other parts, such that the final product, the eye, would then be assembled from the separately evolved parts. That makes absolutely no sense at all, but that is what I'm hearing. There is no way that each separate part could evolve as a component of an eye without being part of an eye every step of the way. Instead, the only model for the evolution of the eye is that the component parts evolve together as parts of the visual organ as it is evolving.

Similarly, what good is a heart without blood vessels? Ask one of the many invertebrates who have just such a circulatory system (eg, grasshoppers, as I recall).
It serves them rather well, since they're too large to get oxygen to their body tissues by diffusion the way an ant can. From there, a little extra tubing to deliver the outgoing oxygenated body fluid more efficiently to the rest of the body is a plus. Add more tubing, such as for getting the fluid back to the heart and lungs, and it becomes even more efficient. That added efficiency enables other changes, though with those other changes the body now needs that added circulatory efficiency, such that going back to the original system could even prove fatal.

What good is a three-chambered heart? If we lose the septum between our ventricles, we would be in a world of hurt; basically, that's blue baby syndrome. Having one ventricle means that oxygenated blood from the lungs gets mixed with de-oxygenated blood from the rest of the body, which greatly reduces the amount of oxygen getting out to the body. Bad news. Yet amphibians and reptiles have only one ventricle, and running on partially oxygenated blood is no problem for them, since their metabolisms are much lower than those of us mammals. Plus their body size is much smaller. Since that's the stock that mammals evolved from, that's what we started out with.

What does it take to convert a three-chambered heart to a four-chambered one? Grow a septum that divides the ventricle in two and keeps oxygenated blood and de-oxygenated blood from mixing (some turtles almost do that with a muscular ridge that divides the ventricle when it contracts). Alligators and crocodiles are born with a three-chambered heart which then changes into a four-chambered heart when they grow to a certain size. With more efficient delivery of oxygenated blood you can grow larger and run a higher metabolism, becoming warm-blooded. But once you have made those changes, you must now keep that more efficient delivery of oxygenated blood.

There are creationist claims citing such things as evidence against evolution.
The basis of those claims is an underlying assumption that you need to have the fully formed modern organ before it can be of any use. A possible justification for that underlying assumption seems to be that modern organisms, such as people, are severely impacted if they don't have the fully formed modern form of that organ. That seems to lead to a corollary assumption that a less fully formed version of that organ would be useless, even though we find many examples of them in the wild in organisms that don't have the same requirements that people do, as I have just discussed above.
quote: Ie, is a little snub of liver or kidney really so useful to an animal?

You seem to be making the same kind of wrong assumption as those creationist claims. How useful an earlier form of an organ would be to an animal depends on the animal and its requirements. And indeed we do find in the wild those less advanced forms of those organs, and more, in animals whose requirements allow them to benefit from them. This could be a case where we would want to know what your reasoning was. What understanding of biology and what assumptions were you using there?

Now for a weird creationist claim. Bill Morgan delivered this in a debate video and his audience loved it, even though it makes absolutely no sense whatsoever. Here's a written form of it:
quote:

So then what's evolution, chopped liver? (apologies to the two chickens)

On the face of it, it appears that he's claiming that for the chicken to have evolved, it had to have re-evolved everything, that it didn't inherit anything from its parents -- as I recall the verbal form, he made a really big deal about them having to have evolved compatible business ends. I do not understand what underlying assumptions and misunderstanding of evolution could possibly support that claim.

According to evolution, it takes many generations for one species to evolve into another, though that can be sped up by human intervention during domestication. Chickens evolved from jungle fowl. For this thought experiment, let's assume 100 generations for that jungle fowl to become a yardbird. So what were the parents of the first 100% chicken? They were 99% chicken, which is damned close. What's the difference between a 99% chicken and a 100% chicken? Hardly any at all. I doubt that anyone could look at a 99% chicken and a 100% chicken and be able to tell the difference.

So what if there's only one 100% chicken (since women keep telling us that they're more advanced than men, let's assume it's female)? There she is, the only 100% chicken in existence. With whom does she mate? With a 99% chicken male, of course. Why not? There's practically no difference between them, so they're plenty interfertile. What about their sex organs, where did they get them from, did they have to re-evolve them? Of course not! They got those from their parents, where else? And where did their parents get theirs? From their parents, who got theirs from their parents, and so forth and so on, all the way back to whatever original ancestral species it was that had first come up with the idea of sex organs. Same thing with their muscular, circulatory, respiratory, skeletal, etc. systems.
When we have a basic knowledge of how evolution works, we find such creationist claims as that to be incomprehensible.
dwise1
I understand. My own time to read and respond is squeezed between work and after-work obligations.
One of the basic points I was making is that for questions of whether something nascent could or could not work, we can look to nature to see whether it does work. So can the ability to perceive light work if the organism doesn't have a fully developed brain, and could it benefit the organism (vertebrates are wrong for that question because they all have some form of central nervous system)? Yes it can, because we find so many invertebrates which have light perception and which benefit from it. Even ones with a nervous system so primitive that they couldn't process images and then think their way through what they're seeing? Yes, those exist too. A lot of their behavior is hard-wired as instinct. And research has shown that a lot of apparently complex behavior can be produced by a few very simple rules.

Another point is that we need to avoid the mistake of projecting our own needs and experiences onto other organisms. "Seeing" is different for us than it is for invertebrates, the benefits of "seeing" are different, and the "infrastructure" supporting "seeing" and acting on what we see is different. Of course, that requires us to learn about the other animals and how they work. But then learning all that is one of the biggest benefits of discussing creation/evolution.
dwise1
First, perhaps I should explain why I write the way that I do (on Facebook, one old friend even complained that reading my emails is like reading a college assignment). It's how I get to think about things, to think them through, to clarify my thoughts. For example, until I started explaining it to you, I hadn't really thought through how science is independent of questions of whether the ultimate origin of the universe was by purely natural or by supernatural means and hence what the consequences of that would be.
I think it's an outgrowth of a software debugging technique I developed in school. If my program had a bug that I couldn't figure out, I'd sit somebody down and explain to him what my program did step-by-step. It didn't matter whether that person understood what I was telling him. What did matter was that it forced me to go through my code and think about what each line did instead of looking at that code and already "knowing" what it did (ie, I knew what I had intended it to do, not what it was actually doing). It's the same idea as a writer having somebody else proofread something he wrote, since when we look at something we wrote we see what we "know" we had written instead of the words that are actually there (our brains play "fill in the blank" tricks on us all the time). Anyway, kind of an explanation and an apology, if necessary. And also, thank you for motivating me to make a couple updates to my site. As a working software engineer, it can be difficult for me to work on my many personal projects in "my copious spare time" (engineering inside joke, since we have so little spare time).
quote: ... and thus the monkey/weasel computer model would not apply to this first step.
And from Message 115:

quote: That would be far, far more organization than the word "monkey" right there.

I get the feeling that there's still a bit of confusion about MONKEY. For one thing, it was never written to produce the word "monkey", even though you could choose that string if you wished. Rather, the task had reminded me of the frequently rehashed idea of an infinite number of monkeys banging away on typewriters producing literature (most commonly it's Hamlet, though the original formulation was all the books in the British Museum (see the quote below)). And it allowed me to include some interesting and humorous quotations, namely (from my MONKEY page):
quote:

My personal favorite was Lennon and McCartney, since it's the creationists who have so much to hide, unlike me.

Also, for your edification, RFC means "Request for Comments", which quickly turned into the de facto documentation for TCP/IP and the Internet. Over the years on April Fools' Day, somebody will post a humorous RFC, such as RFC 2324 ("Hyper Text Coffee Pot Control Protocol (HTCPCP/1.0)", 1 April 1998) or RFC 2549 ("IP over Avian Carriers with Quality of Service", 1 April 1999) or RFC 6921 ("Design Considerations for Faster-Than-Light (FTL) Communication", 1 April 2013). BYTE magazine used to have similar articles in their April editions in the 70's and 80's. See Wikipedia's April Fools' Day Request for Comments for more information and examples. That page's listing of RFC 2795 ("The Infinite Monkey Protocol Suite (IMPS)") provides a link to the infinite monkey theorem, which is what I was referring to by naming my program "MONKEY".

As for what MONKEY is and is not, I found a readme file which was an email to a creationist I corresponded with back in 1990 -- it's the file READ.ME in the ZIP file you can download from my MONKEY page (and which I have just updated and will upload tonight). Rather than post it here, you can download and read it off-line.

Basically, MONKEY is heavily abstract. You may want to have something simple and neat to study, but life is not the least bit simple or neat. Life is very complex and very messy. What Dawkins had done with his WEASEL was to take one aspect of life to study: natural selection. To do that, he had to abstract its essential properties, thus arriving at cumulative selection. To test that abstract idea, he needed to abstract the shite out of the supporting life-function concepts such as fitness (abstracted immensely to include a target string), viability, and reproduction.
He even had to abstract away the distinction between genotype and phenotype and, of course, development (ie, the translation of a genotype into a phenotype, most commonly through embryonic development). And since I was repeating Dawkins' work, I had to perform the same abstractions.

A common mistake others make is to take MONKEY too literally and try to apply it directly to examples from life. It is far too abstract for that. Rather, it proves out the difference between single-step selection and cumulative selection and demonstrates how well a system using cumulative selection can perform and converge on a solution. Life does not even begin to use "the monkey/weasel computer model"; rather, life is subject to natural selection, which is a form of cumulative selection. So the correct way of looking at it is to say that evolution uses natural selection, that natural selection is a type of cumulative selection, and that we know from abstract mathematical studies of cumulative selection that systems using it should have a very high probability of converging on solutions very quickly.

Which raises the question of whether you understand natural selection. That is, after all, part of my larger question: how do others understand that evolution is supposed to work? I think that if we can learn that, then we can begin to understand how they form their ideas of what evolution could or could not do.
dwise1
quote: Or maybe we just belabor the computer metaphor past its usefulness when we talk about the complex cellular machinery in living organisms.

I do agree that the computer metaphor continually gets taken way too far. However, in the current discussion it may help Lamden understand something. He's trying to look at a particular length of DNA and figure out how much "information" it contains. That would be like looking at how many lines of code a program contains to figure out how much it does.

At a very simple level, software contains conditional statements, AKA "if-then-else". There are entire sections of code that either will or will not be executed depending on certain conditions. Similarly, in our DNA we have regulator genes that control whether other genes are active or not. So a straight count of lines of code or length of DNA will not give us an accurate idea of how much either "code" will do.

There are also loops and similar control structures in software. For example, I was assigned the job of maintaining some Pascal code written by a programmer whose main experience was in FORTRAN. Her code performed the same operations repeatedly but with different variables, and she wrote them all out longhand, just as she was used to doing in FORTRAN. Her source code was over 40 pages long. I took those operations, put them into a procedure (AKA a "void function" in C), and replaced that code with calls to that procedure. My version was less than 8 pages long. By straight lines-of-code count, her version was more than 2.5 times larger than mine and so, by Lamden's apparent reasoning, should contain more than 2.5 times as much information as mine and do more than 2.5 times as much as mine. Yet both versions contained just as much information and did exactly the same thing. Size is not a reliable metric for determining power.

Similarly, regulatory genes can cause genetic "code" to "loop". Dawkins discussed this in The Blind Watchmaker.
Let's take a literalistic centipede with 100 legs (id est, with 50 body segments that each have a leg on either side). Does its genetic code need to have separate genes for each and every one of those body segments? No. All you need is the code to make one body segment and regulatory genes to repeat that code 50 times. And as I recall, Dawkins' metaphor for genetic code was not that of a blueprint but rather that of a recipe. But then it's been more than 25 years.
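The centipede point can be shown with a toy "developmental" sketch: one segment "gene" plus a regulatory repeat count, rather than fifty separate copies of the segment code. The representation here is purely my own illustration, not anything from Dawkins:

```python
def segment_gene():
    """The 'gene' for a single body segment: one leg on either side."""
    return {"legs": 2}

def develop(gene, repeats):
    """A 'regulatory gene' abstracted as a repeat count: express the same
    segment instructions over and over instead of storing 50 copies of them."""
    return [gene() for _ in range(repeats)]

centipede = develop(segment_gene, 50)
print(len(centipede))                         # 50 segments
print(sum(seg["legs"] for seg in centipede))  # 100 legs
```

The "genome" here is tiny (one segment definition plus the number 50), yet the developed result is 50 segments and 100 legs, which is the same size-versus-output point as the 40-page versus 8-page Pascal example.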
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Everything living is a result of the DNA coding. If a human cell contains 6 ft of microscopic DNA coding, let us take an arbitrary guess at how much DNA would be needed to program for a simple light receptor- let's say 1/10 of a mm. (pick your own guess) That would be far, far, more organization than the word "monkey" right there. And less likely to happen than it is to have any word formed by shaking up a bunch of letters and pulling them one by one. So maybe IC or ID is not the right word.... let's call it a statistical improbability. It is towards this first step that I do not see how NS could aid in organizing. Your message seems to display some confusion of the roles of genetics and of natural selection, as well as a fixation on DNA (also seen in your Message 119). And trying to over-apply MONKEY directly to questions about life, which I've already talked about. How does evolution work? To make a general statement of how evolution is understood to work, we have these basic concepts:
Now we can look at how evolution works so that we can compare it with how you appear to be thinking about it. We start with a population of organisms. The wording chosen also assumes a species that reproduces sexually and also implicitly assumes those organisms to be animals.
Does that make sense?
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Cake recipe. Yeah. Though what I seem to remember more clearly was plans for making a bicycle. And the example of a jetliner with an emphasis on the idea that with a basic plan for making a fuselage section you could then make a stretch version of the plane pretty much just by increasing the number of sections (though obviously there'd be repercussions on the entire design).
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Let us take the eye, as a tribute to RAZD that mentioned it. Firstly, and most importantly, I imagine that even the most primitive light receptor is the result of a remarkable organization, be it natural or not. Again, look to Nature for examples of animals with some kind of light receptor and observe directly how much organization that requires. While Richard Dawkins in The Blind Watchmaker did a good job of presenting the stages of the evolution of the eye as observed in living species (in Wikipedia, see Evolution of the eye for graphics), that was just an elaboration of Charles Darwin's own presentation in the "Difficulties of the Theory" chapter of Origin of Species (1859) in the section, "Organs of extreme perfection and complication.":
quote: In subsequent editions, Darwin expanded that discussion to two or three pages of known examples. Also, it should be noted that this quote is frequently misquoted by creationists, who always stop at the end of the very first sentence. Just to point out, Darwin was talking about a single nerve ending close to the skin being enough to confer sensitivity to light. No complex structure before it could detect light, just an almost-exposed nerve ending. Those who had that nerve ending and could benefit from it would have been more fit and would have passed that trait on to the next generation. IOW, natural selection at work. Then any pigmentation in the skin over that nerve ending would serve to collect more light and amplify the stimulation, and so on. Those who had that pigmentation and whom it benefited would in turn cause that trait to be selected. And so on over the generations. Natural selection would work quite well here.
Secondly, the light receptor is still 100% useless without a brain capable of deciphering the light in to "message". Others have already discussed how bacteria and single-celled animals can respond to light without any brain or any kind of nerve tissue. There is also the hydra, which is little more than a sack with tentacles, all of which (ie, the tentacles and the wall of the sack body) is only two cells thick, and which responds quite readily to tactile stimulus. It has no brain whatsoever, but rather a neural network, a network of nerves connected to each other. I do not know of it having any sensitivity to light, but it does react to stimuli without benefit of brains. More advanced invertebrates have ganglia, small clusters of nerves that begin to act like a brain, but come nowhere near what laymen would consider a brain and are certainly completely unsuited for what you would want a brain to be able to do with a visual image, since you're thinking of "webcam without a computer."
Think webcam without a computer. No, not a webcam. A webcam analogy would have to come much later. In the beginning of the evolution of the eye, it would not yet be capable of discerning an image, but rather it would be more at the early stages of detecting the presence or absence of light and maybe some basic idea of direction. Think a single photo-detector, like the safety device for your garage door opener or the old arriving-customer detector in shops that would ring a bell when you'd enter and break the light beam. A webcam would be an entire array of really tiny photodetectors, but this is just a single one.
(this point I actually heard from someone else, who likely heard it from some creation science guy or something like that. But I think it's a great point.). Sounds like the kind of thing a creationist would come up with. He doesn't really know what he's talking about and those who also don't know enough think it sounds great. Kind of like that "chicken or egg" argument Bill Morgan loves to tell and his audience thinks is really great stuff. Do you still think that webcam analogy is such a great point?
There would be no reason for NS to aid in the dominance or propagation until the brain was there No, natural selection would still operate with or without a brain. As we can see in photosensitive single-celled life. As we can see in simple animals with neural nets and no brains. As we can see in invertebrates with neural ganglia which are primarily disorganized clusters of nerves. An organism's traits that can be inherited by its progeny and that confer any degree of greater fitness, regardless of how slight, would still serve as grist for the mill of natural selection.
Thirdly, (back to my own thinking), even after deciphered in to a message, a light message requires further action from the brain. Does the light mean I should jump in to the fire, or away from the fire? A further impediment from allowing NS to help out . Huh? So now this invertebrate with nothing more than a ganglion must be capable of rational thought and problem solving? Huh??

Let's try a computer analogy again, especially since my initial technical training was as an electronic digital computer technician. Digital electronics use voltage levels that can be in two states, high or low (some outputs can have a third state, high impedance, which effectively disconnects them from the circuit; we will not refer to that again). We assign to those two voltage levels boolean values of true or false, or binary values of one or zero. There are three basic types of digital circuits based on the three basic operators of Boolean Algebra: AND gates (whose output is true only when all of their inputs are true), OR gates (whose output is true when at least one input is true), and NOT gates or inverters (whose output is the opposite of their single input).
You can combine these three fundamental gates to create more complicated ones: NAND gates (which invert the output of an AND gate), NOR gates, XOR gates (exclusive-or, which only outputs true if the two inputs are different), flip-flops (a basic memory cell; you input a one or a zero and it remembers that value), counters (a series of flip-flops that will step through a count as you pulse it), registers (a series of flip-flops that store several binary digits, AKA "bits", that taken altogether form a number or an address), shift registers (registers that allow you to shift bits to the adjacent flip-flops; used to multiply or divide by two), adders (gates that add two bits together and produce a sum and a carry), etc.

You can also connect gates to form a combinatorial network, which will generate output values in response to a given set of input values -- we will return to this idea shortly. For example, you could have photo-detectors and door switches providing inputs and have control and alarm voltages as outputs to detect a condition that you need to take action on (eg, the photo-detector for your garage door opener losing light as the door is descending, which means there's something standing there like a small child, will cause a control voltage that stops the door and makes it go back up). A more ubiquitous example would be 7-segment decoders, which cause a number to be displayed on an LCD display by using the number's bits as inputs and generating outputs to each of the seven display segments to turn each one on or off (in my digital design class, designing that combinatorial network was one of our assignments).

Add a counter to a combinatorial network and you have a sequential network. A digital clock is a good example of that. A computer combines all that and more. 
With a computer, we really up the ante with a CPU ("central processing unit", now also called a microprocessor, with the simpler ones being called microcontrollers; the latter go into washing machines and microwave ovens). The CPU reads numbers from memory (basically a huge two-dimensional array of flip-flops organized into sequential addresses, each of which accesses a register), deciphers them as instruction codes, and uses those codes to generate control signals which tell the computer's combinatorial, sequential, and other digital circuitry to retrieve the other values from memory and perform the required operations on them, including storing the results in a particular register or memory location.

The basic difference between combinatorial and sequential networks and computers is that those networks are hard-wired to only do the thing that they were designed to do, whereas the computer does whatever its program tells it to do. In order to change what a network does, you have to redesign it and completely rebuild it. In order to change what a computer does, you simply give it a different program. Indeed, computers have been called "the universal machine" because the same machine can do almost anything simply by being given a different program. As a side-note, notice that the computer is constructed entirely of combinatorial and sequential networks -- I know because in tech school we chased sparks (ie, traced signals) through the logic diagrams of a functional computer. And while on the top level the computer can do almost anything, in the lower levels its programming is still hard-wired. Under the hood, most of it is still combinatorial and sequential networks.

Now let's apply that to the brain and to ganglia and neural networks. We can only compare our brains to a computer in general terms, since our brain's method of reprogramming consists basically of rewiring itself. 
However, our brains are indeed capable of learning and of reasoning, things that in computers have to be simulated through software. But like the computer, our brains operate on different levels in kind of a hierarchy of circuits (I'm drawing here from the BYTE book, The Brains of Men and Machines). The top-most layer of the brain decides upon an action we want to take, then that decision goes down through layer after layer before that action is actually taken. This is required because the brain is rather slow and having to handle all the details at the top-most level would overwhelm it -- consider what happens when you try to learn a new dance step or other complex movement; when you have to think your way through it you cannot do it because you're too slow and clumsy, but as you move it "down into muscle memory" to where you don't have to think about it anymore, then you become faster and more adroit.

At the same time, each successively deeper layer becomes increasingly basic and hard-wired. For example, in order to regulate the actual muscle tension and positioning of a part of the body, you have pairs of nerve networks that detect the muscle tension of opposing sets of muscles and increase or relax the tension appropriately to keep the body where you had decided you wanted it. Above that you have reflexes in which the body bypasses the brain and responds to a stimulus locally on its own; no reasoning anything out there! And even behaviorally within the brain, we still have certain instinctual behaviors and drives that can take control unless overridden by the conscious brain, difficult though that can often be.

Perhaps a simpler illustration of this hierarchy of brain circuitry would be touchtyping, which I learned in junior high. The entire course of instruction consists primarily of creating muscle memory. You learn through constant drilling which letter can be reached by which finger by moving that finger up or down and keeping it over its home key. 
Then you drill on the most frequent words in English, especially the two-, three-, and four-letter words. And the most common suffixes, by practicing words that contain them. The end result is that when you see a long word you don't usually type, you spell it out in your head and your fingers "know" where to go. And if that uncommon word ends in a common ending, suddenly your fingers speed up and rip through it. And the common words you never have to think about. That is all because with those drills you had built up, in that neural hierarchy of your brain, the lower levels that knew how to type any given key and how to type each and every one of those common words, such that all your conscious brain had to do was to think of the word (even that could become unconscious, such that you could just look at some text and transcribe it without thinking). I realized that the very first time I typed a paper for German class. Suddenly, I could barely type! I had to spell out each and every German word, even the most common ones. But by the time I finished typing that paper, I had committed those common German words to muscle memory.

OK, that works for us brainy animals, which should also apply to various degrees to most all vertebrates (since they all have some form of central nervous system; that neural cord is a prerequisite for joining our club). But what about the brainless invertebrates? The ones with nothing more than a ganglion? In their cases, they primarily have just hard-wired neural networks with extremely little or no reasoning ability. Basically, a given set of sensory inputs will produce the same behavioral response. They should be running almost purely on instinct. That would render moot your concern for them to be able to reason through a situation. Though that would also raise the question of human reactions to dangerous situations. Basically, when in danger we instinctively revert to "fight or flight". 
More exactly, blood flow to the neo-cortex is restricted and redirected to the limbic complex, thus shutting down our ability for rational thought and ramping up our emotional and instinctual responses. That is what strong emotions such as panic and rage do to our brains. That is why the military drills us on what to do in emergency situations, so that when the balloon does go up we know what to do. That is why A1C Stone responded immediately and effectively against that threat on that train. In our case, we can learn different responses to danger, new behaviors for when instinct kicks in. In the case of that lowly invertebrate, it's primarily all instinct. Instinct that it was born with. Instinct that it largely inherited from its parent(s).
Thirdly, (back to my own thinking), even after deciphered in to a message, a light message requires further action from the brain. Does the light mean I should jump in to the fire, or away from the fire? A further impediment from allowing NS to help out .

OK, the invertebrate inherited its instinctual response to light. Like your garage door opener, it has a hard-wired instinctual response to light and to the lack of light. How did that instinctual behavior develop? By natural selection. Way back when, über-great Ur-grandpappy invertebrate could sense light but didn't have any instinct for doing anything about it. OK, that's not quite right. There was an ancestral population with the ability to sense light and a variety of instinctual responses to it, including ignoring it. There were certain benefits to moving towards the light (photophilic) and certain benefits to moving away from it (photophobic). Actually, this presents a situation in which a population would split into two sub-populations and end up evolving into two species. The photophobes would benefit by moving into dark safe places away from predators (think cockroaches), while the photophiles would have some other benefit that I can't quite think of at the moment (possibly getting out into the open during the daytime in order to feed on flowers' nectar and pollen -- guess I'm moving towards moths at this point). In the resultant photophobe species, offspring could develop a more photophilic behavior, but that would make them less fit, plus I'm sure it would crimp their love life (moving away from potential mates). Photophile offspring that stopped moving towards the light would likewise lose their food source, etc, and be selected against. In both cases, natural selection would have established the species and maintained their traits. All that evolved before fire. What effect would that have? We know that moths are photophiles; think "drawn like moths to a flame." 
So we have a large population of moths that are photophilic and drawn to light and whose range is an area of 100 sq. miles. A party of humans arrives to camp for a week in their range. Some moths are drawn to their campfire and die, but most of the population are not close enough to be affected and so are not affected. OK, so now let's have the humans settle the area and populate it densely such that the entire 100 sq. miles is completely covered by them and all of them have campfires every night. That would severely impact the moth population. If they all have very simple behavior of moving towards any source of light, then they're goners. But if some of them have developed through mutation or recombination the ability to distinguish between different intensities of light, then they could distinguish between sunlight and the much dimmer firelight. That ability could have developed countless generations ago, but was neutral because it made no difference. But now it would be important. Another nascent trait could be a preference towards either brighter or dimmer light. So now those with the ability to distinguish between sun and fire and preferred the brighter sunlight would not be drawn to the fires. The end result would be a new species of moth. Because of natural selection.

But to answer your basic question, jumping into the fire or avoiding it would depend on what your instinctual behavior to that stimulus is. And those whose behavior favored avoiding the fire would be favored by natural selection. Remember: natural selection happens. (like in Forrest Gump, sh*t happens) The actual results of natural selection will vary, but as long as life keeps doing what life does, selection will happen and evolution will happen. They never stop.
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
At http://cre-ev.dwise1.net/monkey.html.
quote: To reiterate, the main problem is that the newer 64-bit Windows systems (I've got two of them) refuse to run 16-bit applications, which is what the old MONKEY executable is. Therefore, I needed to provide an executable that the newer boxes could run. In order to do that, I had to recompile it with a 32-bit compiler. Since 32-bit Turbo Pascal compilers are hard to come by, I had to convert it to another language, namely C. The other problem is that the original program used the conio library, which is not universally supported. Of the three development systems I have (MinGW gcc, Pelles C, Microsoft Visual Studio 2008), only MinGW gcc supports conio. So that's the one I used to build the new executable.

MinGW gcc depends on a distributable Microsoft Visual C++ runtime library. In order to get around that, I chose the build option to link the libraries in statically. It did increase the size of the executable very noticeably, as I would have expected. My understanding is that that should remove the requirement for that distributable runtime file. Unfortunately, I could be wrong. Doubly unfortunately, all the computers I have access to have MinGW gcc installed and hence also that runtime file. That means that I have no means to test the new executable. Therefore, if you encounter problems running MONKEY, please inform me of that fact and give me enough information to resolve the problem.

Here's a false positive you may get. The other old farts ... er, experienced geeks ... will remember the Norton Utilities, a real treasure trove for geeks. Part of that was the Norton Index. The standard was the "true blue" IBM PC/XT (by "true blue", that means the actual IBM product and not a clone). The "true blue" XT ran at 4.77 MHz, while most all the clones ran at 8 MHz. Using the "true blue" XT as the standard, Norton assigned it a Norton Index of 1. Therefore, whatever Norton Index your PC got was the number of times faster than a "true blue" XT your PC was running. 
A clone XT had an index of 2, running twice as fast. I think an AT ran at 5. So what's the Norton Index of the current machines? I don't know for sure, but I think it's up around 2000. That means that MONKEY runs much faster than when I first developed it at a Norton Index of 2. When I first ran MONKEY on a newer machine, I thought it was broken. Nothing seemed to have happened and it reported that zero time had transpired. But it reported having arrived at the solution. It had, but it had done so too fast for the time counters. Sure had me going for a while. With a generation size of 100, you arrive at the solution far too quickly for it to register. You have to pare it down to something smaller, like 10, to be able to observe it approaching the target and then backsliding away, etc.

BTW, for the amount of time the single-step selection method needs to reach one chance in a million to succeed, I postulated a super-computer capable of one million attempts per second. We're still not there yet. To the single-step selection display, I added a new statistic when you stop (with Esc, not space): the number of attempts per second. Currently on my Win7 box it's just over 1800. Share and enjoy!
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
I know the feeling. You're just getting old. Deal with it.
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3
|
We have already gone through a lot of discussion about natural selection acting upon phenotypes rather than genotypes, and a lot more.
When we speak of evolution, we end up speaking about speciation events, the formation of a new species -- see Wikipedia at Speciation. Back in 1983/4, I heard a very good presentation about "creation science" in which the speaker, Fred Edwords, cited the most radically rapid rate of speciation given by the most radical scientists. 50,000 years. I do not know what his source was and I do not know what it is based on. The context of his remark is that the standard creationist argument for being able to stuff so many animals onto Noah's Ark (yes, they are very serious about that!) was to postulate "basic created kinds" such as the basic "canid kind" and the basic "felid kind" and the basic "worm kind" and the basic "beetle kind", and then after the Ark landed each of those kinds underwent rapid "micro-evolution" to produce all the species, genera, and much higher taxa that we observe today. His argument was that whereas 50,000 years for a single speciation event is radically rapid by scientific standards, creationists are arguing for vastly more rapid evolution in their attempts to "disprove" evolution.

Actually, it is much worse than that. Le Baron Georges de Cuvier, the Father of Paleontology, in Napoleon's time was a staunch anti-evolutionist. He was also a young-earther. And he examined the mummies that Napoleon's army brought back from Egypt, including many mummified animals, and he could not find any difference between the mummies and modern animals and humans. Therefore, in all those thousands of years, no evolution had occurred. That shrinks the time for creationist "basic created kind" evolution not only down to near-zero, but also deep into the negative scale.

But think about what you are trying to do. You want to somehow map out the genetic changes from a single-cell organism to a human. But the differences between a single-celled organism and a human are all measured in the phenotype. But you want to measure it through the genotype. 
That seems like a fairly major disconnect. There are a number of problems with what you are proposing. Here's one that I'm sure you didn't anticipate. To measure the genetic difference between that ancestral single-celled organism and humans, you need to get the genome of that ancestral single-celled organism and measure the differences between it and modern humans. Can you do that? No, because those ancestral single-celled organisms no longer exist. But, you say, yes they do! We can still find them. No, we can find their modern descendants, but not the original ones. A couple/few decades ago, an Australian medical doctor, Michael Denton, wrote a book, Evolution: A Theory in Crisis. This book became very popular with anti-evolutionists. But then after its publication he became aware through many conversations of the gross errors that he had made. He said that if he were to write it again it would be very different, but he has no plans to rewrite it. Here is something I had written about it:
quote: So then, your best bet is to be able to take that ancestral DNA of a single-celled organism and compare it directly to modern-day humans in order to see exactly what genetic changes had occurred. But you don't have that information, do you? Edited by dwise1, : Added an extremely strategic period at the end of the first sentence of the third paragraph. Edited by dwise1, : Fracking fracking bill shite! That was intended to be a period, not a comma!!!!!
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Yeah, that's the basic problem. I want to be able to write the attempts and the counts and how close they're getting at particular places in the terminal display. Without that ability, it would just be scrolling like crazy and the user couldn't see a thing. He'd have to stop the run to see where it is, which would be self-defeating. Plus I want it to be responsive to keystrokes instead of using standard C input functions that require you to hit Enter as well, which I'm sure is legacy behavior from the 1970's line-oriented teletypes being used as terminals.
The problem is that the ability to handle the console (terminal screen and keyboard) like that is not standard and is OS-dependent. It's done differently in MS-DOS than in Windows than on Linux than on a Mac (I'm sure). Windows has character-oriented console I/O (which is how this version of conio is implemented). Linux would require something like raw-mode terminal code or ncurses (which I understand encapsulates raw-mode terminal operations). And a Mac would just stare at you dumbfounded that you aren't trying to make this a GUI app. And if I did it as a GUI app, then it would be even less portable. Reminds me of a comedian I heard the other night. He has a BlackBerry but its syncing software only runs on a PC and he has a Mac. He calls technical support and she's no help. So buy a PC. Not an option. Find a friend who has a PC. All his friends have Macs. Use a PC in the public library. Yeah, so that all the people can stare at the idiot who is tech savvy enough to own a BlackBerry, but doesn't own a computer.
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
A decade ago I helped my brother-in-law with printer problems on his Mac (it ran OSX and he loaded the OS9 drivers that came with his printer). As I understand it, OSX was built on top of BSD UNIX and I made very good use of UNIX to network my Windows laptop to it in order to transfer the printer drivers into his Mac.
While I was there, I did a little exploring. Actually, I was wanting to test-compile some sockets programs I had written, but the Mac required a special tools CD to install the compiler and apparently that had to be special-ordered. But I was able to find the terminal app, though as I recall it was buried rather deep and was hard to find. Though the text editor was even harder to find than the terminal -- my brother-in-law is buried deeply into a Mac mentality, so when I shared my discovery with him he couldn't begin to comprehend why anyone would ever want to create a text file. I do have pdcurses and the O'Reilly ncurses book, but haven't had the time to play with it in order to learn it. I could tackle that task in my "copious spare time", so you might see something by 2025. Or, since you've got the C source code now, you or somebody else who already knows ncurses could get the job done a lot sooner. Edited by dwise1, : Mac mentality gripe
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Nor is there any claim that bug eyes evolved into mammal eyes or that octopus eyes evolved into mammal eyes. Indeed, that is a big argument against "intelligent design". An actual designer is free to introduce new elements to his design, including going back and completely reworking portions of it ("going back to the drawing board"). That includes introducing components from other unrelated designs (eg, a couple decades ago, Plymouth Voyager mini-vans, an American design, could come with either a USA engine or a Japanese engine; we owned one and it only lasted 75,000 miles, unlike my Saturn which I had to retire just past 200,000 miles). For some odd reason, the "Intelligent Designer" of Life has never done that. Instead of going back to the drawing board or grafting in components from unrelated designs, He/She/It/They/Whatever always, without any known exception, conducted Himself/Herself/Itself/Themselves/Whatever-sel(f/ves) as if {ARRRRGHH!!! Let's stop that stupid ID pretense!} She were somehow constrained to restrict Herself to working with and modifying only that which already exists, to live with and work with every single wrong decision She had made from the very beginning, completely powerless to change even the slightest mistake.
quote: In other words, why is it that "intelligent design" ended up looking exactly like evolution had done the job? Actually, I once read a criticism of the writings of the leading figures of the Institute for Creation Research (ICR), Drs Henry Morris and Duane Gish (who wrote the book!), describing the lengths they had to go to to explain away why all the world keeps looking like evolution had happened.
|
|||||||||||||||||||||||||||||||||
dwise1 Member Posts: 5949 Joined: Member Rating: 5.3 |
Oh, sure, the ID proponent would say that the Designer was so perfect that She wouldn't have ever had to go back to the drawing board. A perfect supernatural agent could do anything She wanted to do. Big Fracking Woop!
Secondly, while a supernatural designer is not bound to reuse developments, there is nothing preventing such a designer from doing so. And yet we never see that happening. That is what I'm talking about.
|
|
|
Copyright 2001-2023 by EvC Forum, All Rights Reserved
Version 4.2
Innovative software from Qwixotic © 2024