Author Topic:   Ratio of Deleterious Mutations to Beneficial Ones
JonF
Member (Idle past 190 days)
Posts: 6174
Joined: 06-23-2003


Message 12 of 35 (719327)
02-13-2014 9:26 AM


This has been discussed extensively in various places, and a little here.
Mendel's Accountant (MA) is rigged to produce the result that its author desired. It's based on Sanford's oft-debunked "genetic entropy".
------------------------------------------
The simplest way to demonstrate that it's bogus is to consider the fact that there are lots of organisms with generation times much shorter than humans'. If the MA prediction were correct, mice (with a genome about the same size as ours and a generation time roughly 1/170th of ours) and Lord knows what else would have gone extinct long ago. Sanford replied to this criticism, as reported by Jorge Fernandez in the now-lost TWeb thread:
quote:
"All other things being equal, the population that breeds faster will accumulate mutations faster."
jcs - No, it is just the opposite, short generation times means more frequent and better selective filtering.
Occam's Aftershave destroyed this lunacy:
quote:
Which makes zero sense and is trivially easy to refute with their own program:
Run Mendel with two populations that are identical in every way (i.e., genome size, mutation rate, selection pressure, etc.) except make one generation time 2x the other, say two per year vs. one per year.
If you run them both for 1000 generations, both will end up with the same (lower) fitness level, but the two per year will only take 500 years to get there.
If you run them both for 1000 years, the once per year will end up at the exact same fitness as in the first trial, but the two per year will have 2000 generations and end up with an even lower fitness level, if it doesn't just go extinct first.
These guys are busted, and they know they're busted. Now it's just a question of how far they can push this shit and how much money they can make before the errors become well known.
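To make the arithmetic of that thought experiment concrete, here's a minimal Python sketch (my own toy, not Mendel's Accountant itself; the per-generation decline rate is an arbitrary illustrative number) of two populations that lose the same fixed amount of fitness per generation but breed at different rates:

# Sketch of the thought experiment above: fitness declines by a fixed,
# hypothetical amount per GENERATION, so what matters is how many
# generations fit into a given number of years.
def fitness_after(years, generations_per_year, decline_per_generation=0.0003):
    """Mean fitness after `years`, assuming a constant per-generation decline."""
    generations = years * generations_per_year
    return max(0.0, 1.0 - decline_per_generation * generations)

# Same number of generations -> same final fitness, reached in half the time.
print(fitness_after(1000, 1))   # 1000 generations in 1000 years -> 0.7
print(fitness_after(500, 2))    # 1000 generations in  500 years -> 0.7

# Same number of years -> the fast breeder racks up twice the generations
# and ends up lower, which is exactly the point being made above.
print(fitness_after(1000, 2))   # 2000 generations in 1000 years -> 0.4

Same number of generations gives the same endpoint; same number of years leaves the fast breeder worse off.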
------------------------------------------
The reported runs use very small population sizes. 1,000 individuals is a population on the cusp of going extinct. Again as reported by Fernandez, Sanford claims the effect is seen in larger populations:
quote:
"The really important parameter, and the one that Sanford et al deliberately fudged on, is population size. Populations with small numbers run a much greater risk of accumulating dangerous genetic defects through recombination due to the genetic bottleneck problem. This is a well know issue in population genetics. As it turns out, a population size of 1000 is right around the 'knee of the curve' for extinction. Limiting the population to 1000 will guarantee to drive a population to extinction fairly quickly, p>1000 will still go extinct but much more slowly."
jcs- we have done larger populations - but it gets computationally expensive. What we see is that above 1000, larger populations only improve selection marginally - this does not solve the fundamental problem.
But others report differently, unfortunately mostly in the now-lost TWeb discussion. There are some graphs here (note the link at the end to the raw outputs) for a population of 3,000, a more realistic beneficial mutation rate, and a maximum beneficial effect of 0.01, ten times larger than Sanford's default. I'll post one:
Note the increasing fitness, the opposite of what Sanford reported.
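For what it's worth, the reason N = 1000 is such a fragile choice is standard population genetics, not anything peculiar to MA: selection only beats drift when the selection coefficient is comfortably above roughly 1/(2N). A toy Wright-Fisher sketch (my own, with made-up parameters, nothing taken from MA) shows slightly deleterious alleles fixing far more often in small populations:

# Toy Wright-Fisher model: an allele with fitness 1-s starting at frequency p0
# in a haploid population of size N. Selection only "sees" the allele when
# s is well above ~1/(2N); below that it drifts much like a neutral allele.
import numpy as np

rng = np.random.default_rng(0)

def fixation_fraction(N, s, p0=0.05, trials=300):
    """Fraction of runs in which the deleterious allele drifts to fixation."""
    fixed = 0
    for _ in range(trials):
        p = p0
        while 0.0 < p < 1.0:
            p_sel = p * (1 - s) / (p * (1 - s) + (1 - p))  # selection
            p = rng.binomial(N, p_sel) / N                 # drift
        fixed += (p == 1.0)
    return fixed / trials

for N in (100, 1000, 3000):
    print(f"N={N:5d}  1/(2N)={1 / (2 * N):.5f}  fixation rate={fixation_fraction(N, 0.001):.3f}")

In runs like this an allele with s = 0.001 fixes almost as often as a neutral one at N = 100 and almost never at N = 3,000.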
------------------------------------------
The program assumes that the effect of Very Slightly Deleterious Mutations (VSDMs), which individually are not harmful enough to be selected against, is additive; i.e., 100 VSDMs are 100 times as harmful as one VSDM. There's no reason to believe that's true and plenty of reasons to believe it's false.
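To see how much rides on that modeling choice, here's a toy comparison (the numbers are made up for illustration; only the additive rule corresponds to MA's assumption) of three equally arbitrary ways of combining many VSDMs:

# Three ways the cost of n VSDMs, each of individual effect s, could combine.
def additive(n, s):          # MA-style: costs simply sum
    return 1.0 - n * s

def multiplicative(n, s):    # independent effects: costs multiply
    return (1.0 - s) ** n

def diminishing(n, s):       # epistasis where extra VSDMs matter less and less
    return 1.0 - s * n ** 0.5

s = 0.0001                   # one VSDM: far too small for selection to "see"
for n in (100, 1_000, 10_000):
    print(n, round(additive(n, s), 4), round(multiplicative(n, s), 4),
          round(diminishing(n, s), 4))

With 10,000 VSDMs of effect 0.0001 each, the additive rule drives fitness to zero, the multiplicative rule leaves it around 0.37, and the diminishing-returns rule barely dents it. The "genetic entropy" conclusion is baked into the first choice.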
The effect of beneficial mutations is capped at a very low number, 0.1%. This is unrealistic; although beneficial mutations are rare, the effect of one can be large. And there's no accounting for Very Slightly Beneficial Mutations (VSBMs); if VSDMs add up, so should VSBMs. From the manual:
quote:
A realistic upper limit must be placed upon beneficial mutations. This is because a single nucleotide change can expand total biological functionality of an organism only to a limited degree.
"Total biological functionality", whatever that is, is not what determines reproductive success and selection. One single beneficial mutation can, for example, allow one to drink milk in adulthood which can have a strong impact on reproductive success.
Again as reported by Fernandez:
quote:
"The default value for the maximum beneficial value of mutations is much too low. Real-world estimates of positive selection coefficients for humans are in the range of 0.1, not 0.001."
jcs - That is easily re-set, but one has to consider if it is reasonable to realistically build up a genome by increments of 10% (I am speaking of internal complexity - not adaptation to an external environmental factor). I think that is like going up Mt. Improbable using a helicopter.
If anyone can figure out WTF Sanford means there, please let me know. Looks to me as if he didn't understand the issue. As Zachriel commented:
quote:
Which goes to show that he doesn't understand his own simulation. Mendel's Accountant doesn't model "internal complexity". It purports to abstract selective differences.

  
JonF
Member (Idle past 190 days)
Posts: 6174
Joined: 06-23-2003


Message 13 of 35 (719329)
02-13-2014 9:51 AM
Reply to: Message 11 by herebedragons
02-13-2014 9:25 AM


quote:
I too downloaded it sometime ago, but it doesn't seem to work now
There were major and undocumented bug fixes between 1.2.1 and 1.4.1. Wesley Elsberry reported:
quote:
As demonstrated in the two runs I did comparing the output of v1.2.1 and v1.4.1 on the very same configuration, v1.2.1 has a major error in its handling of beneficial mutations. This has nothing at all to do with memory limits; I also ran both with the default case, and the experimental case used in both merely changed the two parameters as specified by Zachriel above. The memory usage was under 130MB for all cases I ran; the memory I had was sufficient and the simulations ran to completion. Sanford either was given a garbled account of the issue or is deploying a meaningless digression as a response.

This message is a reply to:
 Message 11 by herebedragons, posted 02-13-2014 9:25 AM herebedragons has not replied

  
JonF
Member (Idle past 190 days)
Posts: 6174
Joined: 06-23-2003


Message 22 of 35 (719386)
02-13-2014 8:16 PM
Reply to: Message 21 by Faith
02-13-2014 8:01 PM


Re: Neutral -- maybe not
You haven't seen any such admission, because it would be a lie.
UABF.

This message is a reply to:
 Message 21 by Faith, posted 02-13-2014 8:01 PM Faith has not replied

  