DevilsAdvocate writes: Some humans also said we would never fly, go into outer space, etc. But we have exceeded even our wildest expectations.
We still cannot fly. We had to change the meaning of "fly" to something that we can do before we were able to fly. In the old sense, flying the way birds do it, we cannot do that.
Can we change the meaning of "sentience" to something that computers can do? Presumably we can, if we find that useful.
Let me respond in the form of a few questions:
1: Could we, in principle, build an artificially sentient system?
My answer - yes, sure. I don't see anything magical going on.
2: Is that principle computation?
My answer - no. I might be part of a small minority there, though I sometimes suspect that the majority of mathematicians and computer scientists are actually very skeptical of AI but choose not to engage in the public debates.
3: Is that principle intentionality (the issue that John Searle attempted to raise in his "Chinese Room" argument)?
My answer - no, though that might at least vaguely point in the right direction. With the last two answers, I am probably a minority of one.
4: Is it even worth doing?
My answer - no. If it were easy to do, it would be worth doing for what we would learn in the attempt. However, it is going to turn out to be very hard, perhaps prohibitively hard. So there is no real payoff for building an artificially sentient being. Besides, the old-fashioned way is more fun.
I have at least given a bit more detail of my thinking above.
AI, as currently done, is mostly an attempt to automate epistemology (the "theory of knowledge" from philosophy).
Epistemology is mostly nonsense.