Understanding through Discussion


Thread Details

Author Topic:   William Dembski [i]is[/i] Jello
Inactive Member

Message 1 of 1 (72696)
12-13-2003 2:12 PM

from: http://www.talkreason.org/articles/jello.cfm

Topics addressed in the field of philosophy fall into two categories. In the first category are topics that have not (yet) been subjected to a broad yet rigorous mathematical formalization. Accordingly, they are "just word arguments", and have not benefited from the clarity and power that mathematical precision affords. Examples of topics in this first category are philosophies of art, music, and literature, as well as much of ethics, and other parts of the humanities.

By contrast, topics in the second category have been formalized, in a form generally perceived as capturing much of their essence. Such topics include much of what several centuries ago was called "natural philosophy" and is now collectively known as "science". This category also includes those issues in epistemology that were addressed by Gödel's incompleteness theorem and related uncomputability results.

In the past several years the issue of whether "inductive inference can justify inductive inference", puzzled over since at least the time of Hume, has migrated from the first category to the second. First in the context of supervised learning (what in statistics is called "regression" or "classification"), and later in the context of search algorithms, a body of results has been developed that quantify exactly how and when such induction-justifies-induction can(not) hold. Moreover, this formalization has generated results extending far beyond the original philosophical topic that formed its seed (just as happens with any other formalization of a philosophical topic). These results can be viewed as an extension of traditional Bayesian analysis, into a fully model-independent "geometry of induction". Once factors like the precise inductive algorithm to be used, and the prior probabilities and associated likelihood functions of the problem at hand are specified, the theorems of this geometry tell us what the associated performance of that algorithm is, and how it relates to performance levels that accompany different settings of those factors.

In this book, Dembski attempts to turn this category-change trick for the quasi-philosophical topic of whether "intelligent design" is a legitimate alternative to neo-Darwinism. Central to his approach is an attempt to leverage the recent formalization of the induction-justifies-induction topic. In particular, he relies on some of the "No-Free-Lunch (NFL) theorems" of the geometry of induction. These theorems, loosely speaking, say that the performance-weighted measure of domains in which some search algorithm A beats some contender algorithm B exactly equals the measure of domains for which the reverse is true. So, for example, in attempting to find a high point on a surface, a hill-ascending algorithm will perform no better than random search, and in fact no better than a hill-descending algorithm, over the space of all surfaces one might search. In short, according to these theorems there is no free lunch; without tailoring one's algorithm to the domain at hand, one has no assurances that that algorithm will perform well on that domain.

I say Dembski "attempts to" turn this trick because despite his invoking the NFL theorems, his arguments are fatally informal and imprecise. Like monographs on any philosophical topic in the first category, Dembski's is written in jello. There simply is not enough that is firm in his text, not sufficient precision of formulation, to allow one to declare unambiguously 'right' or 'wrong' when reading through the argument. All one can do is squint, furrow one's brows, and then shrug.

Nonetheless, there are several points intimately related to Dembski's work that bear emphasizing. First, biologists in particular and scientists in general are horribly confused defenders of their field. When responding to attacks from non-scientists, rather than attempt the rigor that the geometry of induction and similar bodies of statistics provide, they fall back on Popperian incantations, trying to browbeat their opponents into acceding to the homily that if one follows certain magic rituals---the vaunted "scientific method"---then one is rewarded with The Truth. No mathematically precise derivation of these rituals from first principles is provided. The "scientific method" is treated as a first-category topic, opening it up to all kinds of attack. In particular, in defending neo-Darwinism, no admission is allowed that different scientific disciplines simply cannot reach the same level of certainty in their conclusions due to intrinsic differences in the accessibility of the domains they study.

This intrinsic lower certainty of neo-Darwinism than (for example) that of quantum electrodynamics means that there is legitimate room for disputation concerning the history of biology on Earth. So if Dembski had managed to use the geometry of induction properly to quantify that some search algorithm occurring in the biological world had, somehow, worked better than all but the fraction 10^{-50} (say) of alternative algorithms, then there would be a major mystery concerning the modern biological mantra. This would be true regardless of whether neo-Darwinists had performed the proper rituals in settling on that mantra.

However, Dembski does not do this. The values of the factors arising in the NFL theorems are never properly specified in his analysis. More generally, no consideration is given to whether some of the free lunches in the geometry of induction might be more relevant than the NFL theorems (e.g., those free lunches concerning "head-to-head minimax" distinctions that concern pairs of algorithms considered together rather than single algorithms considered in isolation).

Indeed, throughout there is a marked elision of the formal details of the biological processes under consideration. Perhaps the most glaring example of this is that neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process. Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.

It may well be that there is a major mystery underlying the performance of some search processes that one might impute to the historical transformations of ecosystems. But Dembski has not established this, not by a long shot.

"Faith may be defined briefly as an illogical belief in the occurrence of the improbable. . . . A man full of faith is simply one who has lost (or never had) the capacity for clear and realistic thought. He is not a mere ass: he is actually ill." ---H. L. Mencken

[This message has been edited by JIM, 04-10-2004]


Copyright 2001-2018 by EvC Forum, All Rights Reserved
