Author Topic:   Current status/developments in Intelligent Design Theory
jasonlang
Member (Idle past 3430 days)
Posts: 51
From: Australia
Joined: 07-14-2005


Message 94 of 112 (224550)
07-19-2005 12:45 AM


Neural Networks, Genetic Algorithms and Intelligent Design
Warning - this post is a bit long. Please email me if you're interested in discussing, refuting, or fleshing out any of these ideas.
Just a couple of points about neural networks and genetic algorithms, and, later, a discussion of applying Dembski's "No Free Lunch" logic to creationism itself.
The basic network design that trains an input set => output set mapping (just one of the possible network configurations) is the 'back-propagation network'. It can learn an arbitrary input => output mapping.
The network evolves its own internal solution regardless of the competence of the user, who defines just the inputs and corresponding outputs. Often the solution runs counter to the intentions/beliefs of the 'designer': the system finds the underlying logic of the problem, unknown to the 'designer'. The solution may not even be understandable to the human 'designer' (e.g. too complex).
This internal representation (generated from a random starting state) could be said to display 'specified complexity', though there is no formal specification in its design. It could also be said to display 'irreducible complexity', in that if any one connection between two cells is removed, the network as a whole won't function as 'designed'.
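To make this concrete, here is a minimal back-propagation sketch in Python/numpy. Everything here is illustrative and made up by me for this post - the layer sizes, the learning rate, and the toy "majority vote" dataset - it's a sketch of the technique, not anyone's production code:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy training set: 3-bit inputs, output 1 if a majority of bits are set.
    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
    y = (X.sum(axis=1) >= 2).astype(float).reshape(-1, 1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Random starting state: weights into a hidden layer of 4 cells, then 1 output.
    W1 = rng.normal(size=(3, 4))
    W2 = rng.normal(size=(4, 1))

    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: propagate the output error back toward the inputs.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

    print(np.round(out.ravel()))   # should match y once trained
    print(W1)                      # the 'internal representation': opaque to us

The point is the last line: the trained weights solve the problem, but nothing about their particular values was specified by us, and reading a 'design' out of them is hopeless for anything non-trivial.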
There is also work being done on using genetic algorithms to train neural networks. The weights between neural cells become the 'genes' of the algorithm. With this technique, neural networks which might have taken tens of thousands of iterations to 'converge' on a solution can take only hundreds of genetic-algorithm generations to find an acceptable solution (showing the power of evolutionary ideas). This combination of NN/GA programming has two other benefits (a rough sketch follows the two points below):
1. The network can jump to solutions which could never have been reached using only the small increments of the backpropagation system.
2. The output from the neural network can be used as input to some kind of fitness evaluation or environment simulation, allowing us to evolve the network with defined inputs but no need for defined outputs! Part of this 'undefined output' could even be fed back into the inputs, evolving a loop/memory system without any 'designer' specifying how the memory works - i.e. undefined outputs and only partially defined inputs. Such a system would evolve to maximise survival without being explicitly specified, or even understood. Also, each time we ran the same system, a different solution would evolve, showing that there is more information being generated than the fitness function alone can explain.
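Here is a rough sketch of the weights-as-genes idea, again in Python/numpy on the same toy majority task. The population size, mutation scale, and selection scheme are all made-up illustrative choices (crossover is omitted for brevity):

    import numpy as np

    rng = np.random.default_rng(1)

    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
    y = (X.sum(axis=1) >= 2).astype(float)

    def forward(genes, X):
        # The 'genes' are just the flattened connection weights.
        W1 = genes[:12].reshape(3, 4)
        W2 = genes[12:].reshape(4, 1)
        h = np.tanh(X @ W1)
        return (np.tanh(h @ W2).ravel() > 0).astype(float)

    def fitness(genes):
        # No gradients at all: only a score for how well the whole network behaves.
        return (forward(genes, X) == y).mean()

    pop = rng.normal(size=(50, 16))
    for gen in range(200):
        scores = np.array([fitness(g) for g in pop])
        if scores.max() == 1.0:
            break
        # Keep the best half, refill with mutated copies.
        elite = pop[np.argsort(scores)[-25:]]
        children = elite + rng.normal(scale=0.3, size=elite.shape)
        pop = np.vstack([elite, children])

    print(gen, scores.max())   # often converges well under the generation cap

Note that the fitness function only ever sees the network's behaviour, never its internals, which is exactly what lets us drop the requirement for defined outputs.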
So, as far as 'Intelligent Design' of these artificial systems is concerned (systems which have been created by intelligent agents, i.e. us), we can have absolutely no idea how they operate, even though we have 'designed' them.
Dembski and No Free Lunch
-------------------------
Dembski uses the 'No Free Lunch' theorems to 'disprove' that evolution could have occurred. Basically, his argument is that NFL is just as applicable to biological evolution as it is to genetic algorithms on a computer. I think it is possible to show that NFL is just as relevant, or more so, to ID as it is to the ToE.
NFL states that any genetic algorithm is as good as any other (including blind chance) when averaged over all possible fitness functions. The fitness function is comparable to the combination of the laws of physics and the environment.
So, for biological evolution, NFL could be restated as: no possible system of evolution is better than any other, when considered over the entire spectrum of possible laws of physics and possible environments.
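This is easy to check at toy scale. The sketch below (purely illustrative - the six-point search space and the arbitrary 'adaptive' rule are mine) averages two non-revisiting search strategies over every possible fitness function on that space:

    import numpy as np
    from itertools import product

    POINTS = 6  # search space {0, ..., 5}

    def run(strategy, f, steps=3):
        # Return the best fitness seen after `steps` evaluations.
        visited, values = [], []
        for _ in range(steps):
            x = strategy(visited, values)
            visited.append(x)
            values.append(f[x])
        return max(values)

    def blind_scan(visited, values):
        # Just walk left to right.
        return len(visited)

    def adaptive(visited, values):
        # An arbitrary 'clever' rule: let past fitness values steer which
        # point gets sampled next (skipping already-visited points).
        x = (sum(values) * 3 + len(visited) * 2) % POINTS
        while x in visited:
            x = (x + 1) % POINTS
        return x

    for strategy in (blind_scan, adaptive):
        # Average over every fitness function f: {0..5} -> {0,1,2}.
        total = sum(run(strategy, f) for f in product((0, 1, 2), repeat=POINTS))
        print(strategy.__name__, total / 3**POINTS)

Both strategies print exactly the same average, which is the whole content of NFL: cleverness only pays off relative to a restricted class of fitness functions, never over all of them.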
Notably, research has shown that NFL doesn't hold true for 'co-evolving' systems: i.e. systems in which the solutions evolve in tandem with either the fitness function or other features of the algorithm itself.
It would seem that biological systems are co-evolving systems: the fitness function changes in response to changes in the organism, the landscape, the form of the chromosome, etc. Systems with better evolutionary potential (e.g. two-sexed organisms) would have out-competed others over many generations, especially as the environment changed.
With biological systems the 'solutions' (e.g. gene sequences) are inseparable from the 'algorithms' (the ways those genes can mutate/replicate/crossover) and the 'fitness functions' (the laws of physics, the environment, and how genes interact), so it is clear that biological systems are co-evolving. The complex creatures we see today are those whose systems for evolving (e.g. sexual reproduction) led to more effective fitness functions, etc.
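A toy caricature of such a co-evolving system (my own illustrative parameters; I'm using frequency-dependent fitness as the simplest stand-in for a landscape that shifts with the population):

    import numpy as np

    rng = np.random.default_rng(2)
    pop = rng.integers(0, 2, size=(30, 8))  # 30 genomes of 8 bits

    for gen in range(100):
        # Frequency-dependent fitness: a gene variant is worth more when it
        # is rare, so the 'landscape' moves whenever the population moves.
        freq = pop.mean(axis=0)
        fitness = (pop * (1 - freq) + (1 - pop) * freq).sum(axis=1)
        # Select the fitter half and refill with mutated offspring.
        parents = pop[np.argsort(fitness)[-15:]]
        children = parents ^ (rng.random(parents.shape) < 0.05)
        pop = np.vstack([parents, children])

    print(pop.mean(axis=0))  # gene frequencies tend to hover near 0.5:
                             # there is no fixed optimum to converge to

Because the fitness function here is not fixed in advance, this setup falls outside the assumptions of the basic NFL theorems, which is exactly the loophole the research mentioned above identifies.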
From the main Creationist viewpoint, God created all species in finalized form, and in this form they have remained ever since. It is clear that this can be expressed in the terminology of a genetic algorithm.
Implicit in genetic algorithms is a concept of a starting state, and a system by which these states give rise to later states. In a standard computer GA implementation, the starting state is defined as a random sequence. In creationism, the starting state can be defined as 'start in the form which God had in mind for you', and the 'God Algorithm' can be defined as 'procreate, but don't change - you are already perfect'. I'll leave the definition of the starting state of biological evolution to those discussing abiogenesis, though I believe it would be a state neither completely random nor completely ordered.
It is clear that, given any particular Creation of God, averaged over all possible laws of physics and all possible environments, the particular 'designed' being would die almost instantly, or do very poorly, in all but an infinitesimal environmental subset. I.e. the 'God Algorithm' would do no better than any other possible algorithm, including blind chance. This shows that NFL is just as applicable to Intelligent Design as it is to natural evolution - maybe more so, in that biological evolution is a co-evolving system, whereas Creationism is not.
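Again this is easy to caricature in code (purely illustrative): score one fixed 'designed' genome against thousands of randomly drawn environments, and compare it with a genome drawn by blind chance each time:

    import numpy as np

    rng = np.random.default_rng(3)
    designed = rng.integers(0, 2, size=16)  # stand-in for a 'perfect' creation

    scores_designed, scores_chance = [], []
    for _ in range(20000):
        # A fresh 'physics + environment': a random payoff per gene.
        env = rng.normal(size=16)
        scores_designed.append(env @ designed)
        scores_chance.append(env @ rng.integers(0, 2, size=16))

    print(np.mean(scores_designed), np.mean(scores_chance))  # both approach 0

Averaged over environments that weren't chosen to suit it, the fixed 'design' earns the same expected score as pure chance.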
The Creationist might argue that, had God chosen a different set of laws of physics, and the resultant environments, he would have designed the creatures differently, to match. This does not, however, change the above point: the newly 'designed' creature would still not survive under any of the other fitness functions (physics/environment combinations), so NFL still applies. This creationist argument would also invoke the Anthropic principle (things are as they are because they are the way they are), as well as the God-of-the-Gaps argument, neither of which is an explanation of anything.
So, the only Creationist way out of the NFL trap is 'co-creation' (a creationist version of the 'co-evolving' escape available to GAs/biological evolution). This shows pretty conclusively that NFL is no more evidence for creation than it is for evolution.
If you read all this, good for you - it was a lot longer than I had intended.
P.S. I expect to be flamed unmercifully for any inconsistencies, factual errors, or just because ...

Replies to this message:
 Message 95 by Ben!, posted 07-25-2005 6:03 AM jasonlang has not replied
 Message 96 by Brad McFall, posted 07-25-2005 3:55 PM jasonlang has not replied
 Message 97 by jasonlang, posted 07-27-2005 2:58 AM jasonlang has not replied

  
jasonlang
Member (Idle past 3430 days)
Posts: 51
From: Australia
Joined: 07-14-2005


Message 97 of 112 (226644)
07-27-2005 2:58 AM
Reply to: Message 94 by jasonlang
07-19-2005 12:45 AM


Re: Neural Networks, Genetic Algorithms and Intelligent Design
In reply to Message 95 by Ben!
> Anyway, I wanted to strongly disagree with your assesment of
> neural networks. Artificial neural networks are designed in at
> least three crucial ways:
Nowhere in my original post do I claim that artificial NNs are not designed; I actually use the word 'design' several times myself.
What I meant to question was how far 'intelligent design' can be taken, given that the intelligent agent (us) in no way needs to understand what is happening internally in the network - a 'black box' situation.
I also stated that they could be shown to exhibit both specified complexity and irreducible complexity, but that, in this context, this cannot be taken as evidence of an 'intelligent' designer, due to the automated nature of the training process, which ultimately defines the network.
> 1. Choice of input (and for that matter, output)
> Without doing this (or if doing it poorly), your network learns poorly - if at all.
Choice of input/output is related to a real-world problem the human is trying to solve. If we specify a poor set of inputs/outputs then effectively we have asked the network to solve the wrong problem.
In this case we say the network has 'failed' to learn, when in fact it has learned exactly what it was given (assuming the network is sufficiently large). It may be that this solution doesn't generalize well, due to the training set being unrepresentative of the real-world data.
What's important is that the inputs in the training set need to be distinguishable for each respective output (whether this is mathematically identical to 'linear separability' is something I've not looked into, but I would suspect so), and that the training set needs to be broad enough to cover all input eventualities.
The specific numerical values of the inputs and outputs are to a certain extent irrelevant. If, for example, one or more of the input values is scaled by some factor (consistent over the entire training set), then the resulting trained networks will compensate accordingly. So, there is an ability to compensate for the form of the input, as long as the relevant information can be extracted.
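A quick demonstration of this scaling compensation, with a single linear unit standing in for the first layer (numpy's exact least-squares fit stands in for training; the data and the factor of 10 are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 3))
    y = X @ [1.0, -2.0, 0.5] + 0.1 * rng.normal(size=200)

    w_plain = np.linalg.lstsq(X, y, rcond=None)[0]
    X_scaled = X * [10.0, 1.0, 1.0]                 # first input scaled up 10x
    w_scaled = np.linalg.lstsq(X_scaled, y, rcond=None)[0]

    print(w_plain[0] / w_scaled[0])                       # ~10: weight compensates
    print(np.allclose(X @ w_plain, X_scaled @ w_scaled))  # True: same behaviour

The fitted weight on the rescaled input shrinks by exactly the same factor, so the overall input => output behaviour is unchanged.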
This extraction of relevant information (wheat) from the irrelevant (chaff) is automated in the BP learning process. The human designer need not know what is relevant/irrelevant when selecting input/output pairs for the training set. Trial and error could conceivably get it right, or we could make the input literally every piece of information available and let the NN sort out which factors actually matter, though this of course would be very slow to train.
> 2. Learning mechanism
> There are some autonomous learning mechanisms (such as Hebbian learning), but the most "popular" (backpropagation) is COMPLETELY design-oriented. There's an external teacher, for goodness sake.
The true designer/teacher in the system could be said to be the input/output data, not the human, because the system will learn based on the i/o pairs regardless of what the human thinks is going on. I'd hardly call a list of numbers an 'intelligent' designer, though. And anyway, what are the odds that any one human knows precisely what data is in the complete training set, for non-trivial examples?
> 3. Network architecture
> Different networks excel in solving different types of problems. If you choose the wrong architecture, you may not even get a workable solution. One excruciatingly simple example of this is that of the two-layer perceptron; if the input/output sequences are not "linearly separable", the network will fail to learn. Solving XOR is a classic example of this.
I think linear separability is an issue for bigger networks too, but they involve higher-dimensional spaces to be partitioned (related to the number of independent inputs) and a larger number of lines with which to partition the space (related to the number and arrangement of cells).
The 'extra lines' of the 3-neuron XOR-solving network allow it to further partition the 2D input space, thus solving the XOR problem.
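For reference, here is that 3-neuron network (2 hidden cells + 1 output cell) with hand-picked weights; I've used simple threshold units for clarity, and the 0.5/1.5 cutoffs are the two 'lines':

    import numpy as np

    step = lambda z: (z > 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    h1 = step(X @ [1, 1] - 0.5)   # line 1: fires when x + y > 0.5  (OR-ish)
    h2 = step(X @ [1, 1] - 1.5)   # line 2: fires when x + y > 1.5  (AND-ish)
    out = step(h1 - h2 - 0.5)     # OR and not AND = XOR

    print(out)  # [0 1 1 0]

Each hidden cell draws one line across the plane; the output cell simply combines the two regions, which is exactly the extra partitioning a single perceptron can't do.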
This article cites some research projects related to evolving network architectures (see section 2, pages 2-4):
http://citeseer.ist.psu.edu/cachedpage/634490/1
Evolved network architectures can apparently outperform hand-designed ones. It wouldn't take a very complex genetic algorithm to evolve from one neuron to the 3 neurons (not counting inputs) required for the XOR function, and the user of such an NN-GA wouldn't even have to know what the final architecture was.
This message has been edited by jasonlang, 07-28-2005 10:53 AM

This message is a reply to:
 Message 94 by jasonlang, posted 07-19-2005 12:45 AM jasonlang has not replied

Replies to this message:
 Message 98 by mark24, posted 07-27-2005 3:48 AM jasonlang has replied

  
jasonlang
Member (Idle past 3430 days)
Posts: 51
From: Australia
Joined: 07-14-2005


Message 99 of 112 (226886)
07-27-2005 7:56 PM
Reply to: Message 98 by mark24
07-27-2005 3:48 AM


Re: Neural Networks, Genetic Algorithms and Intelligent Design
Whoops, sorry - I was meaning to reply to msg 95, not 94. Sorted now, thanks.
This message has been edited by jasonlang, 07-28-2005 10:59 AM

This message is a reply to:
 Message 98 by mark24, posted 07-27-2005 3:48 AM mark24 has not replied

  