Hey,
Just stumbled across your message... the best way to get a reply from someone is to reply to their message directly. That way they get a notification of a direct reply. Especially when you're posting in a rather old thread, to somebody who isn't visiting the board every day.
As for your post... you address a lot of issues, but I prefer to focus on single issues rather than address tons of issues in one post; the replies (and replies to replies) just get unmanageably long, in my experience.
Anyway, I wanted to strongly disagree with your assessment of neural networks. Artificial neural networks are designed in at least three crucial ways:
1. Choice of input (and, for that matter, output). Get this wrong (or do it poorly) and your network learns poorly, if at all.
2. Learning mechanism. There are some autonomous learning mechanisms (such as Hebbian learning), but the most "popular" one, backpropagation, is COMPLETELY design-oriented. There's an external teacher, for goodness' sake.
3. Network architecture. Different architectures excel at different types of problems, and if you choose the wrong one you may not get a workable solution at all. One excruciatingly simple example is the two-layer perceptron: if the input/output pairs are not linearly separable, the network will fail to learn. XOR is the classic case.
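To make point 2 concrete, here's a toy sketch of my own (not from any library) contrasting Hebb's rule, which needs no teacher, with the supervised delta rule, where an external teacher supplies the target:

```python
# Toy contrast between unsupervised Hebbian learning and a teacher-driven rule.

def hebbian_update(w, x, lr=0.1):
    # Unsupervised: the unit's own output reinforces whichever inputs are active.
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * xi for wi, xi in zip(w, x)]

def delta_update(w, x, target, lr=0.1):
    # Supervised: the weight change is driven by the error (target - y),
    # so it cannot run without an externally supplied target.
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = target - y
    return [wi + lr * err * xi for wi, xi in zip(w, x)]
```

Backpropagation is just the delta-rule idea pushed through multiple layers, which is why I call it design-oriented: the error signal comes from outside the network.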
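And the linear-separability point in 3 is easy to demonstrate yourself. Here's a quick toy script of my own: the classic perceptron learning rule converges on AND (linearly separable) but can never settle on XOR, because no single line separates XOR's outputs:

```python
# Single-layer perceptron with a threshold activation: converges on AND,
# never on XOR (the inputs are not linearly separable).

def train_perceptron(samples, epochs=100, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            if err != 0:
                errors += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if errors == 0:  # converged: every sample classified correctly
            return w, b, True
    return w, b, False

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

_, _, and_ok = train_perceptron(AND)
_, _, xor_ok = train_perceptron(XOR)
print("AND learned:", and_ok)  # True
print("XOR learned:", xor_ok)  # False
```

No amount of extra training fixes the XOR case; you have to change the architecture (add a hidden layer), which is exactly my point about design.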
As a final note, I would love it if you could post some references on neural networks trained with genetic algorithms. I read a paper by collaborators of Dr. Jeff Elman (someone whose work I personally hold in very high esteem), and I thought it was a really interesting start. That paper is at least a few years old now, though, so I'd be interested in any more recent articles you might recommend.
Thanks!
Ben