Hi aiguy! Welcome to EvCForum.
I have very much enjoyed your succinct and well-reasoned argument. In fact, I have enjoyed it so much that I intend to ruthlessly and unashamedly plagiarize it the next time I am confronted with a Dembskiite argument in re specified complexity and the rest of that rot.
One side comment that seems to me to pertain to your OP, and that has not been discussed yet on this thread, is a potential fallacy of equivocation that appears inherent in any argument concerning computer intelligence. To wit: we have but one single example of known "intelligence" (however we operationally define that term) to go on - i.e., "human intelligence". It strikes me that we may be imposing artificial constraints on what constitutes "intelligence" in the first place. We are, in effect, saying that a computer can only be considered intelligent if it "acts like a human". Since even human behavior can only be predicted stochastically (in the aggregate, and with a large enough sample), why assume that an "artificial intelligence" would be human-like? Indeed, why should we assume that we'd even be able to recognize a non-human "intelligence" if confronted by it? Even Alan Turing couldn't get beyond the problem of defining intelligence without reference to "human" intelligence - his "test" was based on the inability of a human judge to distinguish between a human and a computer in a conversation.
I think this point has a profound implication for ID. They too appear to be defining intelligence only in relation to human-scale "intelligence". Admittedly, they provide their putative Designer with "awesome cosmic powers" (albeit stuck in Behe's "itty bitty living space" [/end Aladdin mode]). However, I submit they are merely magnifying existing human capabilities - not postulating something beyond human ken (or beyond human intelligence). There is no inherent reason why a computer - which after all in many areas already exceeds the capabilities of an un-augmented human (especially when we're dealing with distributed processing systems, genetic algorithms, etc.) - couldn't develop something that we could neither recognize nor understand, IMO.
I'd be interested in reading your take.