Author Topic:   Micro v. Macro Creationist Challenge
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 29 of 252 (812492)
06-16-2017 8:44 PM
Reply to: Message 28 by RAZD
06-16-2017 5:30 PM


There is another interesting aspect to all this complexity talk, something that any professional designer (eg, an engineer) could tell you: complexity is a designer's worst enemy and spells doom for a design.
Most of the life cycle of a design lies in the maintenance phase. That is where bugs get fixed and features get added. When a design becomes overly complex (eg, "irreducibly complex"), then maintenance becomes extremely difficult if not impossible. Therefore, the amount of complexity in a design can be used to measure how badly designed it is.
However, it turns out that complexity is an expected feature of a design that has evolved. Evolutionary processes and methods naturally generate complexity. Many engineers, especially software engineers, have accidentally employed evolutionary processes -- primarily in the "copy something that performs one function and modify it to perform a slightly different function" manner -- and they have learned from bitter experience that the result is near-exponential growth of complexity in the overall design. The complexity of that "evolved" design increases to the point that hardly anybody can figure out anymore just exactly how it works. And the code becomes so intertwined that a very simple change in one place can cause catastrophic changes in totally unrelated parts of the code. Hence the slogan on a t-shirt in an engineer's office:
Complexity is anathema to design.
More formally, there have been experiments using evolutionary processes to "evolve" useful designs. In some, there are extra parts that don't do anything, basically "vestigial remains" in the more classical sense (to short-circuit standard creationist quips, I mean parts that serve no purpose at all, not parts that still serve some kind of purpose, just not the primary purpose it used to serve). But in some experiments, they ended up with a highly complex, "irreducibly complex" even, design which would have been impossible for any human designer to have created.
The one I remember involved evolving the design of a particular kind of amplifier using a field-programmable gate array (FPGA). In my professional work over 25 years, most of our designs included an FPGA. Before that, the US Air Force had trained me in 1977 as an electronic computer systems repairman (AFSC 305x4; the USAF uses a different designation now), so I do have some understanding of digital electronics (eg, in one civilian job, the electrical engineer only knew analog electronics, so he regularly consulted with me, the software engineer, about digital electronics). Basically, an FPGA is an array of logic circuits which you program by loading into it a file that tells each element in the array what kind of logic circuit it is (eg, AND gate, OR gate, NOT gate (AKA "inverter"), flip-flop) and exactly how it is connected to all the other elements in the array. I have no direct experience with that, since it's the electrical engineers who work directly with FPGAs (my only involvement is that they define read and write ports into the FPGA that my software then communicates with to control it and to read its status).
I also learned a few things about electronics, both analog and digital, in Air Force tech school. The supposed dichotomy between analog and digital electronics is purely artificial: all electronics is analog. What digital electronics does is define only two narrow voltage ranges as valid. Depending on the actual logic definitions (a very salient point in the Data Systems Technician Chief Petty Officer advancement exams, one which took me a second attempt to finally figure out), you have two and only two binary values, 0 and 1, each represented by a rather narrow range of voltages. What about the voltages between them? My Air Force training called that "The Forbidden Zone", meaning voltage levels that have no digital meaning and should never happen -- AKA "ambiguous", which is death to digital.
So now back to this experiment. They were using this one FPGA and "evolving" a programming file to download into it, so that they could then evaluate its performance as an amplifier of whatever type they were shooting for -- obviously, their measure of fitness was how well it performed. The result of this experiment was an FPGA that functioned well as the kind of amplifier they were seeking.
Funny things about that design:
  • It was "irreducibly complex", meaning that if you were to change any single element of that design it would cease to work.
  • The design that had evolved made use of the analog electronic characteristics of the FPGA. This is something that no human designer could ever have included in his design.
OK, so here's a bugaboo that electrical engineers have to deal with all the time: it is never just pure, ideal electronics. Every wire contains some internal resistance. In addition, every wire has some inductive reactance (ie, pass a current through a wire and it's going to generate a magnetic field). And everywhere any two wires come close enough to each other, such as in an inductive coil, you also have some capacitance. You thought that biology was messy? Try sometime to get down to the lowest levels of electrical nitty-gritty (though electronics is still orders of magnitude less messy than biology).
Despite all the quality control we can throw at it, the production of digital circuits still includes some variances. So long as you take these circuits and use them in the prescribed manner (ie, digitally), those variances will never make any difference whatsoever. But the moment you open those minute variances up for exploitation (as in those experiments), all bets are off!
The thing about that FPGA design is that it naturally, with absolutely no intelligent intervention whatsoever (since what human could possibly have worked out all those analog operations of the digital circuits of the FPGA?), created a working design that was "irreducibly complex."
Therefore, complexity, even "irreducible complexity", is the natural product of evolution. Not of "design".

This message is a reply to:
 Message 28 by RAZD, posted 06-16-2017 5:30 PM RAZD has seen this message but not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 30 of 252 (813145)
06-23-2017 3:20 PM
Reply to: Message 19 by aristotle
06-16-2017 8:43 AM


JonF writes:
Show your calculations or references.
"Elementary statistical theory shows that the probability of 200 successive mutations being successful is then () , or one chance out of 10 . The number 10 , if written out, would be "one" followed by sixty "zeros." In other words, the chance that a 200- component organism could be formed by mutation and natural selection is less than one chance out of a trillion, trillion, trillion,
trillion, trillion!" - Henry M. Morris, Ph.D.
As is standard in such creationist treatments, Morris' probability model assumes a single sequence of steps in which each step must succeed. That produces a probability calculation of p^n, where p is the probability of one step succeeding and n is the number of steps. Since p is always less than 1, p^n becomes smaller and smaller as n grows; eg, for p=0.5 and n=1000, p^n = 9.33 x 10^(-302) -- my understanding is that so small a probability is classified as "virtually impossible".
However, such a probability model does not describe what happens in evolution, which renders such creationist probability arguments moot and completely irrelevant. Creationists' probability arguments serve no purpose other than to deceive their audience.
Evolution doesn't work on a single individual, but rather on a population of individuals. Thus, instead of a single path, evolution uses multiple parallel paths. The probability of a step (AKA "a generation") succeeding is then expressed by a calculation like P = 1 - (1-p)^s, where p is the probability of success and s is the size of the population. That is to say, the probability that at least one individual in the population succeeds is the complement (ie, q = 1-p) of the probability that every single member of the population fails. So in a population of 1000 where the probability of success is 0.5, the probability of failure is 1 - 0.5 = 0.5. The probability of every individual failing is then 0.5^1000 = 9.33 x 10^(-302). The probability of at least one individual succeeding is then 1 - 9.33 x 10^(-302), which is approximately 1. A probability of 1 is dead certainty.
Let's change the values a bit: let p = 0.01 and s = 100. Then q = 1-p = 0.99, q^s = 0.99^100 = 0.366, and P = 1 - 0.366 = 0.634.
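To make the two models concrete, here is a minimal Python sketch (an illustration added here, not anything from MPROBS) that computes the single-path probability p^n and the population probability 1 - (1-p)^s:

def single_path(p, n):
    # Creationist-style model: one lineage, n successive steps,
    # every step must succeed.
    return p ** n

def population(p, s):
    # Evolution-style model: probability that at least one of s
    # individuals succeeds = 1 minus probability that all s fail.
    return 1 - (1 - p) ** s

print(single_path(0.5, 1000))  # ~9.33e-302, "virtually impossible"
print(population(0.5, 1000))   # ~1.0, virtual certainty
print(population(0.01, 100))   # ~0.634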
Since others have described Morris' faulty argument as a lottery argument, let's look at the probabilities in a lottery. In California's SuperLotto Plus, five numbers are drawn from 47 and then one super-number is drawn from 27 balls. The probability of matching the five numbers is 1 in 1,533,939, or 6.519 x 10^(-7). Let's assume that every person in California buys one lottery ticket, which amounts to 41,416,353 tickets. We already know how unlikely it is for a given person to win, but what is the probability that somebody will win?
Well, the probability that a given person will lose is q = 1-p = 0.99999935 (virtual certainty). The probability that all 41,416,353 people will lose, q^41,416,353, is 1.8795 x 10^(-12), fairly small. So subtract from 1 to get the probability that somebody will win and you get 0.999999999998, virtual dead certainty.
If it's a slow half-week and only a million tickets are sold, then the probability that someone will win drops to 0.478. About 50/50, but still fairly likely, unlike the odds of a specific person winning.
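Those lottery numbers can be checked the same way; a quick Python sketch (using the figures as given above, and treating every ticket as an independent random pick):

from math import comb

p = 1 / comb(47, 5)   # comb(47,5) = 1,533,939 ways to pick the five numbers

def somebody_wins(p, tickets):
    # 1 minus the probability that every single ticket loses
    return 1 - (1 - p) ** tickets

print(somebody_wins(p, 41_416_353))  # ~0.999999999998, virtual certainty
print(somebody_wins(p, 1_000_000))   # ~0.48, about 50/50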
The object lesson here is that when you try to use math to prove or disprove something, you must develop a math model that accurately describes that something.
We know from experience that the probability of a creationist constructing an accurate mathematical model of evolution is virtually zero.
Edited by dwise1, : Added 1,000,000 ticket case.
Edited by dwise1, : Added "the probability of a creationist constructing an accurate mathematical model of evolution"

This message is a reply to:
 Message 19 by aristotle, posted 06-16-2017 8:43 AM aristotle has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 38 of 252 (813348)
06-26-2017 9:32 PM
Reply to: Message 35 by CRR
06-26-2017 7:17 PM


Re: Everybody's got something to hide, except for me and my MONKEY
My page about MONKEY is at cre-ev.dwise1.net/monkey.html. I have also written about it in this forum, so you could do a search.
CRR writes:
Such programs therefore don't simulate realistic biological populations.
Neither WEASEL nor MONKEY were ever intended to. I highly recommend that you read my MONKEY page before you make more false assumptions.
When Dawkins described WEASEL, I couldn't believe it, so I had to test it. Since Dawkins didn't provide any source code (I think it was a form of BASIC running on a Macintosh), the only way I could test it was to write my own. So that mine would run as much like his as possible, I used his description of his program as the specification for mine. I wrote it in Turbo Pascal, since that was what I was working in at the time (1990). Dawkins' program took a lunch hour to run, but I think that was because he used an interpreted language; mine succeeded in less than a minute. I ran it repeatedly and it succeeded every time without fail. I still couldn't believe it, so I calculated the probabilities involved: as improbable as each individual step is, the probability that every single step would fail becomes smaller and smaller until it is virtually impossible for the run to fail. I wrote those calculations up in a text file, MPROBS.DOC, which I included in a PKARC package and uploaded to CompuServe. The rest is history, including the part where I ported it over to C.
Again, it is not a simulation of evolution! Rather, it is a comparison of two different selection methods. Re-read the first half of Chapter 3, "Accumulating Small Changes", from Dawkins' book, The Blind Watchmaker, where he describes two different kinds of selection to use to get a desired string:
  1. Single-step selection in which the entire final product is generated at one time and must match the target in order to succeed. If it fails, then the next trial must start all over again from scratch.
    This is the kind of selection that creationists think that evolution uses and they base all their probability arguments on it. They are completely wrong and hence their probability arguments are rubbish.
  2. Cumulative selection, which is an iterative method. You start with a randomly generated attempt, but when it fails, instead of throwing it away you make multiple copies of it with slight random changes ("mutations"), so that those copies are very similar to, yet slightly different from, the original, analogous to what happens in nature. Then you select the copy that comes closest to the target and use it to generate the next "generation" of copies. And so on.
    This comes much closer to describing selection as used by evolution (of course, since it was modeled after actual evolutionary selection).
So what WEASEL and MONKEY do is to compare how well those two selection methods work by giving them both the same problem to solve. Everything is kept the same except for the selection method. Single-step selection fails abysmally while it is virtually impossible for cumulative selection to fail. And the probability calculations in MPROBS explain why.
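For anyone who would rather run the comparison than take my word for it, here is a minimal Python sketch of cumulative selection (this is not MONKEY's Turbo Pascal or C source; the target string, population size, and mutation rate are arbitrary choices for illustration):

import random, string

TARGET  = "METHINKS IT IS LIKE A WEASEL"
CHARS   = string.ascii_uppercase + " "
COPIES  = 100    # offspring generated per generation
MUTRATE = 0.05   # per-letter mutation probability

def score(s):
    # fitness test: count of letters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    return "".join(random.choice(CHARS) if random.random() < MUTRATE else c
                   for c in s)

# Single-step selection would generate random 28-letter strings from
# scratch until one matched exactly: 27**28 trials on average, ie,
# effectively never.

# Cumulative selection: breed the next generation from the best copy.
parent = "".join(random.choice(CHARS) for _ in TARGET)
generation = 0
while parent != TARGET:
    parent = max((mutate(parent) for _ in range(COPIES)), key=score)
    generation += 1
print("Reached the target in", generation, "generations")

Note that because every copy gets mutated, the best copy of a generation can occasionally score worse than its parent; that is the temporary regression away from the goal I mention below, and with a reasonable number of copies per generation the run still converges quickly.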
I recently added to the end of the page a brief discussion of programs which do try to model evolution. I also found that some creationists misrepresent how WEASEL works; I discuss that too. If you want to claim that Royal Truman is correct, then I invite you to show me in my source code and in my probability calculations where I am supposed to have done what he claims we do. Like I say, "Everybody's got something to hide, except for me and my monkey!"
Keep in mind that when I wrote MONKEY, we were still using XT clones. Most PCs now are hundreds if not thousands of times faster, so when you run MONKEY it will appear to succeed instantaneously. To be able to observe the progress toward and regression away from the goal, pick a smaller number of offspring generated per iteration.
Here is Ian Musgrave's email to me about it back when all this was going down. As you can see, that was nearly 20 years ago. I remember receiving a xerox copy of what ReMine wrote about me and my MONKEY and being amazed at how much he had misunderstood it.
BTW, I had written MONKEY about a decade before this and posted it in a library on CompuServe, where it was downloaded at least once a month for years thereafter. I explicitly asked for feedback and got none, except that my numbering of the Markov chain steps in MPROBS was off by one, plus spurious complaints about teleological assumptions in my fitness test, which both I and even Dawkins in his original presentation discussed and showed to be irrelevant -- ie, since the single-step selection model used the exact same fitness test, then why was it such an abysmal failure when cumulative selection was such a resounding success?
quote:
Subj: Re: MONKEY
Date: 15-Sep-99 22:01:35 Pacific Daylight Time
From: ********@******.****.***.** (Ian Musgrave & Peta O'Donohue)
To: **********.***
G'DAy David
At 21:31 13/09/99 EDT, you wrote:
>>> have come across your pascal program and was most impressed. May I have
>permission to link to your web page on this subject, and to make available
>the executable and source code to your program?<<
>
>Yes, certainly you may. Out of curiosity, may I ask how you had acquired
it?
> I think that I had uploaded onto two CompuServe forums almost ten years
ago.
> Your email and one I received last year are about the only feedback I have
>gotten about it.
I got it from Robert Williams <****@****.***>, to whom you passed it on
after you answered his query to the NSCE.
You may or may not know that Walter ReMine, in various internet forums (eg
talk.origins and sci.bio.evolution) and in his self published book "The
Biotic Message", claims that your simulation provides a clear example of
Haldanes dilemma. As Haldanes dilemma relates to the rate of fixation of
benefical gene substitutions in a population with a large genome ( > 50,000
or so genes, see http://www.gate.net/~rwms/haldane1.html for some details),
most people were fairly certain that this was not the case. However, Mr.
ReMine would never give details in the internet fora, and only say that the
program and output were detailed in the book. As this is a self published
book, it is actually quite hard to get a hold of. Fortunately, some-one did
get a hold of it, and the program he referenced was yours.
Quote:
" On p 235, n 52, ReMine states "David Wise's simulation, for the IBM
compatible PC under DOS, is circulated by the National Center for Science
Education - a major anti-creation organization." ... ReMine never names the
Wise program, though he does give a reference to the program's
documentation "(Wise, D., 1989)". There is no such listing in ReMine's
References section, but that's the sort of thing that happens to us
all."
Well, I ran your program, and as I expected, it doesn't demonstrate
Haldanes dilema. (It is a lovely version of the program, and super fast,
and I will be honoured to place it on my web site.)
As to the resolution of Mr. ReMine claim. Well, it turns out to be blinding
simple. Here is a quote from the relevant section of ReMines book.
p 235
"That method of mutation is not true to nature [used by Dawkins]. In nature
nothing counts mutations and assures exactly one in each progeny. A more
realistic type of mutation should be used in the simulation so that each
letter has a _probability_ of mutation. Suppose we use this correct method
of mutation while leaving the "average" rate unchanged (at 1 chance in 28).
This subtle correction to the simulation nearly doubles the time needed to
evolve the target phrase: to 86 generations."
p 236
"Then we reduce the reproduction rate to that of the higher vertebrates, say
to n=6. In a sexual species this would require the females to produce 12
offspring each. This is overly optimistic for many species. The simulation
then goes into error catastrophe and does not reach the target phrase. We
can eliminate the error catastrophe by lowering the mutation rate.
"Then by exploration we can find the mutation rate that produces the fastest
evolution. [footnote: in this case the optimum mutation rate is one in 56.]
With this optimal mutation rate, on average, the target phrase is reached in
1663 generations - that is 62 generations per substitution.
"Thus the simulation - with its numerous unrealistic assumptions that favor
evolution - is less than five times faster than haldane's estimate of 300
generations per substitution. Ironically, this suggests that Haldane was
too optimistic about the speed of evolution."
Can you see where ReMine has made his error? I actually wasted a couple of
hours comparing the effects of mutation rates on different programs before
I realized it, but it should have been blindingly obvious.
Here's the key line:
"Then we reduce the reproduction rate to that of the higher vertebrates,
say to n=6"
Well knock me down with a stick of mortadella and call me Jake. ReMine
doesn't know how these programs work. In your program, Dawkins original,
Wesley Elseberry's weasle.pl and my WEASLE4.BAS the "reproduction rate", ie
number of offspring, IS ALSO THE POPULATION SIZE!!!! Of course you will see
only slow substitution in any of these programs when you only have 5
offspring, as there is only a TOTAL POPULATION of 5 strings at any one time.
ReMines argument totally collapses, without even having to point out the
other, obvious problems.
>I had also written an analysis of the mathematics involved. I think I
called
>the text file MPROBS.DOC. Did you receive that as well?
I've got the whole thing in the self extracting archive you passed on to
Robert Williams. I'll make sure the whole lot is available with approriate
citation and a link back to your site.
>I'll give your web site a visit. Thank you.
I'll add your contribution to the site soon, probably this weekend, as well
as the solution to ReMines "problem". Thank you very much for your time.
PS. ALthough your program runs well on my lab 486, when I try and run it on
the office Pentium II I get
"runtime error 200 at 019A:0091" any ideas about this?
Cheers! Ian
The problem he reported in the PS was apparently an overflow error in the Turbo Pascal startup code, probably in a timer calibration loop, because PCs had gotten too fast. I was able to find a patch for it and fixed it. That's covered in the first section of my MONKEY page, Development History and Issues.
Again, that is at cre-ev.dwise1.net/monkey.html.
Edited by dwise1, : Changed subtitle

This message is a reply to:
 Message 35 by CRR, posted 06-26-2017 7:17 PM CRR has replied

Replies to this message:
 Message 39 by CRR, posted 06-26-2017 10:41 PM dwise1 has replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 40 of 252 (813371)
06-27-2017 12:56 AM
Reply to: Message 39 by CRR
06-26-2017 10:41 PM


Re: Everybody's got something to hide, except for me and my MONKEY
But will you now read the source and see what it's really about?

This message is a reply to:
 Message 39 by CRR, posted 06-26-2017 10:41 PM CRR has replied

Replies to this message:
 Message 41 by CRR, posted 06-27-2017 1:39 AM dwise1 has replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 42 of 252 (813374)
06-27-2017 1:58 AM
Reply to: Message 41 by CRR
06-27-2017 1:39 AM


Re: Everybody's got something to hide, except for me and my MONKEY
But single-step selection versus cumulative selection is very relevant. As such, it needs to be examined.
I forget. Were you one of those deluded creationists who tried to force single-step selection upon evolution with a terminally false creationist probability argument? Sorry, but it's hard to tell you all apart.

This message is a reply to:
 Message 41 by CRR, posted 06-27-2017 1:39 AM CRR has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 49 of 252 (813529)
06-28-2017 10:26 AM
Reply to: Message 46 by CRR
06-28-2017 12:11 AM


So CRR comes in at ~6200 years for the age of the earth.
Is that just because you believe that your religion requires it? Or do you think that the physical evidence supports it?
Id est, do you hold that religious belief despite the evidence? Or do you claim to hold it because of the evidence?
I have seen many young-earth creationists insist that if their young-earth claims turn out to be wrong, then they would have to throw their Bibles into the dust bin and become atheists. Kind of extreme, but that is literally what a number of them insisted, so I'm not making this up.
What would your reaction be?

This message is a reply to:
 Message 46 by CRR, posted 06-28-2017 12:11 AM CRR has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 80 of 252 (814446)
07-09-2017 11:28 PM
Reply to: Message 78 by CRR
07-09-2017 8:55 PM


CRR writes:
Or were you asking why I think dogs would be genetically closer? Because we have in common many body tissues, organs, etc. that are built of similar proteins. From a common designer we would expect the genomes to contain many similar sequences.
Except that is not how it works. The proteins being compared are functionally identical even though many of the individual amino acids can vary widely, such that their amino acid sequences can differ greatly. It is the patterns of those differences that we are interested in. Yes, a common designer (or even a decent one, for that matter) would be expected to reuse certain proteins based on their functionality, but why would we ever expect that designer to also use such a wide variation of actual sequences? And why would he do that in such a deliberate manner as to support the idea that those genomes are related to each other in exactly the way that we would expect if evolution actually happened? Is your trickster god's name Loki, by any chance?
Here's a little something along those lines that I wrote in a 1996 email to a creationist who had rehashed that tired old false probability argument about a protein 80 amino acids long having all of them fall into place at random in exactly the order needed for the protein to work. Two major problems with that claim:
  1. Nobody but a creationist would expect a protein to form in that manner. A protein would have evolved, which is an entirely different process than he described, using an entirely different kind of selection with entirely different probabilistic results -- yes, I did refer him to my MONKEY page at cre-ev.dwise1.net/monkey.html.
  2. Proteins do not require one exact sequence; rather, many positions in the sequence can be filled by any of several amino acids.
I then wrote the following to illustrate that point:
quote:

Rather than brandying about a hypothetical protein, let's look at a specific
case. In the class notes of Frank Awbrey & William Thwaites'
creation/evolution class at UCSD (the Institute for Creation Research
conducted half the lectures and Awbrey & Thwaites the other half), they give
the example of a calcium binding site with 29 amino acid positions: only 2
positions (7%) require specific amino acids, 8 positions (28%) can be filled
by any of 5 hydrophobic amino acids, 3 positions (10%) can be filled by any
one of 4 other amino acids, 2 positions (7%) can be filled with two different
amino acids, and 14 of the positions (48%) can be filled by virtually any of
the 20 amino acids.
The sequence of the 15 specified positions is:
L* L* L* L* D D* D* G* I* D* E L* L* L* L*
Where:
L* = hydrophobic - Leu, Val, Ilu, Phe, or Met
Prob = (5/20)^8
D* = (a) Asp, Glu, Ser, or Asn
Prob = (4/20)^3
OR (b) theoretically also Gln or Thr
Prob = (6/20)^3
D = Asp
Prob = (1/20)
E = Glu
Prob = (1/20)
G* = Gly or Asp
Prob = (2/20)
I* = Ilu or Val
Prob = (2/20)
Remaining positions = any of 20
Prob = (20/20)^14 = 1^14 = 1
Total Prob = Prob(L*) * Prob(D*) * Prob(D) * Prob(E) * Prob(G*) * Prob(I*)
= (a) 3.05 x 10^(-12)
OR (b) 10.2 x 10^(-12)
Your own calculation of the probability of a functional order coming up (ie,
the standard creation science method) would be: (1/20)^29 = 1.86 x 10^(-38).
Comparing the lower probability to yours shows it to be 1.64 x 10^26 times
greater.

That just describes an active site. I believe we will find that most of a protein's sequence consists of a few active sites connected by a lot of structural sequence. Thwaites and Awbrey didn't come out and say it here, but I would expect that most of the positions in those structural sequences would accept almost any amino acid.
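For anyone who wants to check that arithmetic, a few lines of Python reproduce the quoted numbers (case (a), with the position tolerances exactly as given above):

# functional-site probability with the quoted position tolerances
loose = (5/20)**8 * (4/20)**3 * (1/20) * (1/20) * (2/20) * (2/20) * (20/20)**14

# the standard creation-science calculation: all 29 positions exact
naive = (1/20)**29

print(loose)          # ~3.05e-12
print(naive)          # ~1.86e-38
print(loose / naive)  # ~1.64e+26 times more probable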

This message is a reply to:
 Message 78 by CRR, posted 07-09-2017 8:55 PM CRR has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 135 of 252 (814622)
07-11-2017 8:52 AM
Reply to: Message 89 by JonF
07-10-2017 8:37 AM


CRR writes:
Professor Dr Bernard Brandstater, Prof. Stuart Burgess, Professor Dr Ben Carson, Dr Raymond Damadian, Dr John Hartnett, Dr Raymond Jones, Dr Felix Konotey-Ahulu, Dr John Sanford, Dr Wally (Siang Hwa) Tow
JonF writes:
I don't see any evidence that those people are creationists.
You would need to research the biographies and writings of each and every person in that list in order to make that determination. Since CRR presented that list and claimed them all to be creationists, it is incumbent on CRR to provide that information.
I do know that plant geneticist Dr. John Sanford is an IDist and a YEC. The only reason I know that is that a local creationist cited him in a debate as a scientific source, making sure to point repeatedly to his PhD, without making even a single mention that he is also a YEC. He did the same thing with another PhD with absolutely no mention that she's a professional creationist and spokesperson for AiG. IOW, that creationist was deliberately lying to his audience.
You should have recognized Dr. Ben Carson, the one who believes that the pyramids were built to store grain and now heads HUD, a post for which he is ill suited.
CRR writes:
Not to mention all past scientists such as Faraday and Maxwell.
JonF writes:
Yeah, not to mention. Scientists who were unaware of the ToE or did not attempt to use Biblically based creationism in their scientific work were not creationists in the modern sense.
Excellent point. One that CRR completely ignored in his reply, Message 131.
So what's his point?
ABE:
Though the story of Dr. John Sanford raises a pertinent question. A person can be both religious, even deeply religious, and a scientist with no problem, just so long as he actually does science when serving as a scientist. That may require some compartmentalization. Certainly there is no problem with pursuing science strongly motivated by one's religious beliefs, but there is a problem with a religious motivation to subvert science.
So the question is whether being a creationist in the modern sense would interfere with one's ability to do science, to make significant scientific contributions.
In the Wikipedia article on Sanford he is described:
quote:
Formerly an atheist[18] from the mid-1980s, Sanford has looked into theistic evolution (1985—late 1990s), Old Earth creationism (late 1990s), and Young Earth creationism (2000—present).
Reviewing his significant work and accomplishments, they all date from the 1980s and 1990s, before he became a YEC.
That must mean something.
Edited by dwise1, : ABE

This message is a reply to:
 Message 89 by JonF, posted 07-10-2017 8:37 AM JonF has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 137 of 252 (814624)
07-11-2017 9:00 AM
Reply to: Message 131 by CRR
07-11-2017 5:40 AM


But was he a creationist in the sense of our discussion? Was he a YEC who denied real-world evidence in order to promote a narrow and false theology, to proselytize, and to try to have laws passed requiring that narrow and false theology be taught in school as science?
There is no inherent conflict between science and religion unless you misuse either or both.
There is no inherent conflict between creation and evolution unless you misrepresent and misuse either or both.
You seem to think otherwise. You seem to equate evolution with atheism, as evidenced by your calling talkorigins an atheist site. You are a YEC, and equating the "evolution model" with atheism is an article of faith for YECs.
So if you truly think that Faraday was a modern-day YEC-type creationist, then please present the evidence. Or admit that Faraday has nothing to do with the discussion.

This message is a reply to:
 Message 131 by CRR, posted 07-11-2017 5:40 AM CRR has not replied

  
dwise1
Member
Posts: 5925
Joined: 05-02-2006
Member Rating: 5.2


Message 145 of 252 (814717)
07-12-2017 10:35 AM
Reply to: Message 144 by JonF
07-12-2017 5:49 AM


Re: Michael Faraday and James Clerk Maxwell
Yeah, those creationist "scientists who were creationists" lists are so stupid that they're just tiring.
Those lists are kind of along the lines of the arguments that strain all logic to arrive at a conclusion that there must be a god, at which point suddenly that indistinct "a god" becomes their specific god accompanied by their entire intricately detailed theology. How blazingly stupid!
Just because a scientist was Christian doesn't mean he followed creationists' particular narrow theology. Did they ever bother to ask whether he would even want to be associated with them and their false theology?
Edited by dwise1, : Replaced impersonal "you" with "they" for clarity sake

This message is a reply to:
 Message 144 by JonF, posted 07-12-2017 5:49 AM JonF has not replied

  