EvC Forum: Understanding through Discussion
Author Topic:   The Future of Artificial Intelligence: Can machines become sentient (self-aware)
Phage0070
Inactive Member


Message 3 of 51 (555711)
04-15-2010 4:53 AM
Reply to: Message 1 by DevilsAdvocate
04-14-2010 10:22 PM


I would say yes; or rather, they will be able to accurately simulate sentience, which is arguably indistinguishable from whatever metaphysical state some might consider "real" sentience to be.
The human brain has about 100 billion neurons. The computer I am posting from has 3.28 billion transistors in its CPU, and it cycles those transistors 2,830,000,000 times per second, while the human brain runs on comparatively slow analog chemical reactions. But neurons fire massively in parallel, each connected to thousands of others, so in aggregate the human brain's processing power still beats the pants off my CPU.
On the other hand, a computer could simulate a human brain at a rate significantly slower than real time. The reactions that take place in the brain over a period of seconds might be modeled over weeks; since we could also slow the sensory input fed to the neural simulation to match, the simulated brain would have no way of telling the difference.
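To put rough numbers on that comparison (the neuron firing rate below is my own illustrative assumption; the transistor count and clock speed are the figures from my post):

```python
# Back-of-envelope comparison of brain vs. CPU "events per second".
# Figures are illustrative, not measurements.

NEURONS = 100e9          # approximate neurons in a human brain
TRANSISTORS = 3.28e9     # transistors in the CPU mentioned above
CLOCK_HZ = 2.83e9        # CPU clock: cycles per second
NEURON_FIRE_HZ = 200     # assumed peak firing rate per neuron (rough guess)

brain_events = NEURONS * NEURON_FIRE_HZ   # aggregate spike events per second
cpu_events = TRANSISTORS * CLOCK_HZ       # upper bound on transistor switches/s

# A simulation running slower than real time: modeling, say, 5 seconds of
# brain activity over 2 weeks implies this slowdown factor.
slowdown = (14 * 24 * 3600) / 5
print(f"slowdown factor: {slowdown:,.0f}x")  # slowdown factor: 241,920x
```

The point isn't the exact numbers, which nobody knows precisely; it's that even an enormous slowdown factor is invisible to the simulated brain if its inputs are slowed by the same factor.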
Personally, though, I think our most likely and beneficial path of progress in this area would be to build AI from the ground up. For most applications where an AI would be desirable, a simulated human brain is not at all what we want. Humans forget things, they think inefficiently and sloppily, and they fairly often don't do what they are told. Instead we want an AI that can respond appropriately and creatively to unexpected situations, but that will absolutely follow guidelines without question.
The military aspects of AI are clear; once the decision is made to attack a target there are a plethora of decisions to attain that goal which can be automated based on fairly simple criteria. For instance, steering a bomb onto a laser dot is something already handled by computers. An extension of this would be driving a vehicle through an environment to a destination without striking obstacles. I don't see any clear dividing line between automation and AI as long as the AI never ignores the programmed rule set.
I would argue that the ability of computers today to multi-task is roughly analogous to being self-aware. So for the answer of "when", I would say it has already happened.

This message is a reply to:
 Message 1 by DevilsAdvocate, posted 04-14-2010 10:22 PM DevilsAdvocate has replied

Replies to this message:
 Message 4 by DevilsAdvocate, posted 04-15-2010 6:08 AM Phage0070 has replied

  
Phage0070
Inactive Member


Message 5 of 51 (555724)
04-15-2010 6:29 AM
Reply to: Message 4 by DevilsAdvocate
04-15-2010 6:08 AM


DevilsAdvocate writes:
I guess that is the key. What distinguishes sentience from non-sentience and is there a difference between real sentience and simulating sentience?
That's like asking what the difference is between a duck and a rock that is in all respects indistinguishable from a duck. I think the philosophical answer is that there isn't a difference at all.
DevilsAdvocate writes:
If the rules and processes are not there, as they are in our brains, to give us the capability to be self-aware (aka sentient), then no amount of processing power will automatically make them self-aware.
Right, but the question isn't about what magical rules and processes are required for sentience; it is about what rules and processes can *pass* for sentience. In that respect the determining factor (besides the tools themselves) is the ability of the designer to fool their audience.
DevilsAdvocate writes:
Sentience is being self-aware of one's own existence, thoughts, feelings, etc.
A multitasking computer is aware of the multiple programs it is running and their output. It can dynamically adjust the time, if any, devoted to each task (its "thoughts") based both on current conditions and stored criteria. Those criteria can in turn be adjusted based on previous experiences, or on communication with other entities (even other computers).
I'm not sure that self-awareness in that sense can be compared directly to humans or animals. In one sense, computers have a precision of awareness of themselves far in excess of even humans. On the other hand, humans would tend to describe them as "mindless," in that they cannot alter their programs except in the manner specified at their original creation.
I suppose this makes my definition of sentience hinge on the capacity to disobey.
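The kind of self-monitoring multitasking I described above can be sketched as a toy scheduler. To be clear, the function names, criteria, and numbers here are invented for the example; this is not how any real OS scheduler is written:

```python
# Toy sketch of a scheduler that "knows about" its own tasks and
# re-weights them from stored criteria plus observed conditions.

def allocate_time(tasks, total_ms):
    """Split a time budget across tasks in proportion to priority."""
    total_priority = sum(t["priority"] for t in tasks)
    return {t["name"]: total_ms * t["priority"] / total_priority
            for t in tasks}

def adjust_from_experience(tasks, overran):
    """Double the priority of tasks that overran their budget last round."""
    for t in tasks:
        if t["name"] in overran:
            t["priority"] *= 2

tasks = [{"name": "ui", "priority": 1}, {"name": "compute", "priority": 3}]
print(allocate_time(tasks, 100))   # {'ui': 25.0, 'compute': 75.0}
adjust_from_experience(tasks, {"compute"})
print(allocate_time(tasks, 100))   # compute's share grows after "experience"
```

The machine tracks its own tasks and revises its own criteria, yet every revision follows a rule laid down in advance, which is exactly why the "capacity to disobey" seems like the interesting dividing line.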

This message is a reply to:
 Message 4 by DevilsAdvocate, posted 04-15-2010 6:08 AM DevilsAdvocate has replied

Replies to this message:
 Message 7 by DevilsAdvocate, posted 04-15-2010 8:21 AM Phage0070 has not replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
