So, to take this a bit further: what is missing, and how might we add it in order to set us on the path to genuinely successful AI?
Consider something as “simple” as a mosquito. It barely has a brain at all, yet it navigates a potentially hostile world pretty successfully on sensory input and an instinct for self-preservation. It navigates the world in ways that I don’t think the most sophisticated current AI can match.
If we take sensory input, self-preservation, and goals like survival and procreation, and then layer complexity and processing power over those, do we start to build up to more human-like intelligence?
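To make that layering idea a little more concrete, here is a minimal sketch, assuming a Brooks-style subsumption architecture (my choice of framing, not anything rigorous): a reflexive self-preservation layer that overrides a goal-seeking layer, which in turn overrides a default explorer. All the names, percepts, and thresholds here are invented purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional
import random

@dataclass
class Percept:
    """A hypothetical sensory snapshot for a mosquito-like agent."""
    threat_nearby: bool  # e.g. a fast-approaching shadow
    food_scent: float    # 0.0 (no scent) to 1.0 (strong scent)

def flee(p: Percept) -> Optional[str]:
    """Bottom layer: reflexive self-preservation; fires on any threat."""
    return "evade" if p.threat_nearby else None

def seek_food(p: Percept) -> Optional[str]:
    """Middle layer: pursue a survival goal when the scent is strong enough."""
    return "fly_toward_scent" if p.food_scent > 0.5 else None

def wander(p: Percept) -> Optional[str]:
    """Default layer: explore when nothing more urgent applies."""
    return "wander"

# Arbitration: earlier (more primitive) layers suppress later ones, in the
# spirit of Brooks' subsumption architecture. The speculation here is about
# what might emerge as more layers are stacked on top.
LAYERS: List[Callable[[Percept], Optional[str]]] = [flee, seek_food, wander]

def act(p: Percept) -> str:
    for layer in LAYERS:
        action = layer(p)
        if action is not None:
            return action
    return "idle"  # unreachable given wander(), but keeps act() total

if __name__ == "__main__":
    random.seed(0)
    for _ in range(5):
        p = Percept(threat_nearby=random.random() < 0.3,
                    food_scent=random.random())
        print(p, "->", act(p))
```

Nothing in this toy is intelligent, of course; the open question is whether piling richer layers onto the same basic drives is a road toward intelligence or a dead end.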
Is a sense of self (and ultimately self-awareness) a logical extrapolation from the goal of self-preservation?
In more complex social animals (dogs, apes, etc.), is not just a sense of self but also a sense of other “minds”, each with its own competing or cooperating agency, essential for navigating the social environment?
Can we look at the different stages of brain complexity and processing power across animals, identify what drives the kinds of intelligence we hope to achieve with AI (self-awareness, possession of a theory of mind), and learn from those the direction in which to take AI?
Can we look at what evolution has done over millennia and shortcut it to create intelligent machines?