I'm new to this particular forum, but I've been on some others before. My question is for any geologists out there. I am a chemistry graduate student, so I should be able to handle any amount of detail you're willing to throw at me; technical information is welcome.
On what assumptions or conditions does K-Ar dating rest, other than:
* that the rock being dated has not undergone a significant reheating episode since it first cooled (which would let accumulated Ar escape)
* that the rock is impermeable to Ar, and so traps whatever Ar-40 results from radioactive decay and does not absorb Ar-40 from the atmosphere
* that the rock is sufficiently old for some measurable buildup of Ar-40 (100K years or so; my understanding of the underlying age equation is sketched just after this list)
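To be explicit about what I think I already understand, here is the age equation as I've pieced it together, written out as a quick Python sketch. The decay constants are the conventional Steiger & Jäger (1977) values; the function name and the closed-system/zero-initial-Ar framing are my own, so please correct me if I've botched the branching:

```python
import math

# Conventional 40K decay constants (Steiger & Jaeger, 1977):
LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K, per year
LAMBDA_EC    = 0.581e-10   # electron-capture branch to 40Ar, per year
# The remainder (beta decay) goes to 40Ca, so only ~10.5% of
# decayed 40K ends up as radiogenic 40Ar.

def k_ar_age(ar40_star, k40):
    """Age in years from measured radiogenic 40Ar (40Ar*) and 40K,
    given in the same units (moles, atoms, whatever), assuming a
    closed system and zero initial 40Ar."""
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * (ar40_star / k40)
    )

# Sanity check: a 40Ar*/40K atom ratio of ~5.81e-5 should come out
# to roughly one million years.
print(f"{k_ar_age(5.81e-5, 1.0):.3e} years")
```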
I'm a bit confused because Geochron Labs lists its minimum datable age as being on the order of 0.5M years, while some web pages I'm finding at UCSB suggest K-Ar dating may be useful for rocks as young as 20K years.
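To get a feel for why the lower limit might vary between labs, I tried a back-of-envelope estimate of how much radiogenic argon a young, potassium-rich sample actually holds. The 5 wt% K and 20K-year figures below are just numbers I picked for illustration:

```python
import math

AVOGADRO = 6.022e23
LAMBDA_TOTAL = 5.543e-10     # total 40K decay constant, per year
LAMBDA_EC    = 0.581e-10     # branch to 40Ar, per year
K40_ABUNDANCE = 1.17e-4      # 40K fraction of natural K (atom fraction)

def ar40_atoms_per_gram(wt_frac_k, age_yr):
    """Radiogenic 40Ar atoms per gram of rock, for a given weight
    fraction of potassium and a given age, assuming a closed system."""
    k_atoms = wt_frac_k * AVOGADRO / 39.098   # K atoms per gram of rock
    k40_atoms = k_atoms * K40_ABUNDANCE
    return k40_atoms * (LAMBDA_EC / LAMBDA_TOTAL) * (
        math.exp(LAMBDA_TOTAL * age_yr) - 1.0
    )

# Hypothetical young, K-rich sample: 5 wt% K, 20,000 years old.
print(f"{ar40_atoms_per_gram(0.05, 2.0e4):.2e} atoms 40Ar*/g")
```

If I've done that right, such a sample holds on the order of 10^11 atoms of radiogenic Ar-40 per gram, i.e. sub-picomole quantities, so I can see how the practical floor would depend heavily on a lab's blank levels and mass-spec sensitivity.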
I've skimmed some reports by creation "scientists" such as Steve Austin, in which they make a mockery of this technique by using it to derive dates of up to a million years for historic lava flows. I use the words "up to a million" advisedly, because all of the supposedly erroneous dates fall nicely at the bottom end of the range K-Ar is meant to test.
I also saw a defense of those techniques by some guy at AIG, but I didn't like his assertion that a measurement error of 60K years against a mean of ~300K years strongly indicated that the problem lay with the test itself, not with the experiment. Is there some natural distribution in the amount of argon present in rocks, such that we can measure the amount actually there to a finer precision than the natural variation? If so, how does this distribution arise? If not, why do we not measure zero Ar-40 in rocks that were just coughed out of a volcano?
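To make that last question concrete, here is the inversion I was attempting: how large an "excess" Ar-40/K-40 ratio would a freshly erupted rock need to carry in order to read as roughly 300K years old? (Again, this is my own illustrative sketch, not anyone's published calculation.)

```python
import math

LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, per year
LAMBDA_EC    = 0.581e-10   # branch to 40Ar, per year

def excess_ar_ratio_for_apparent_age(age_yr):
    """40Ar/40K atom ratio a zero-age rock would need to carry as
    trapped 'excess' argon to yield the given apparent K-Ar age."""
    return (LAMBDA_EC / LAMBDA_TOTAL) * (math.exp(LAMBDA_TOTAL * age_yr) - 1.0)

# How much excess argon makes a historic flow read as ~300K years?
print(f"{excess_ar_ratio_for_apparent_age(3.0e5):.2e} (40Ar/40K, atom ratio)")
```

If that figure of roughly 2e-5 is right, then even a trace of inherited or atmospheric argon could swamp the radiogenic signal in a historic flow, which I suspect is the crux of the dispute; I'd welcome a geologist's correction.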
If possible, I'd like to design a quiz question on K-Ar dating for my freshman chemistry students, so I'm trying to learn as much of the background as possible.
Thanks very much for your comments,
Biophysicist