Understanding through Discussion




Thread  Details

  
Author Topic:   The Future of Artificial Intelligence: Can machines become sentient (self-aware)?
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 1 of 51 (555685)
04-14-2010 10:22 PM


As a computer science major and science fiction fan, I find the field of artificial intelligence intriguing.

My question is: with the rapid miniaturization of computer chips, increases in processing power and memory size, and the introduction of novel innovations such as neural networks, nanotechnology, and quantum computing, do you think that machines will become self-aware (sentient), and if so, how and when?

I am particularly looking for intelligent input from subject matter experts in these fields. I am also interested in the military aspects of artificial intelligence and their possible ramifications. Please stay away from doomsday Terminator references. I am looking for constructive debate and discussion.

I look forward to an interesting discussion.

Edited by DevilsAdvocate, : No reason given.



“One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It is simply too painful to acknowledge -- even to ourselves -- that we've been so credulous.” - Carl Sagan, The Fine Art of Baloney Detection

"You can't convince a believer of anything; for their belief is not based on evidence, it's based on a deep seated need to believe." - Carl Sagan

"It is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan, The Demon-Haunted World


Replies to this message:
 Message 3 by Phage0070, posted 04-15-2010 4:53 AM DevilsAdvocate has responded
 Message 6 by nwr, posted 04-15-2010 8:07 AM DevilsAdvocate has responded
 Message 12 by CosmicChimp, posted 04-15-2010 10:37 AM DevilsAdvocate has not yet responded

  
AdminSlev
Member (Idle past 2951 days)
Posts: 113
Joined: 03-28-2010


Message 2 of 51 (555692)
04-15-2010 1:21 AM


Thread Copied from Proposed New Topics Forum

  
Phage0070
Inactive Member


Message 3 of 51 (555711)
04-15-2010 4:53 AM
Reply to: Message 1 by DevilsAdvocate
04-14-2010 10:22 PM


I would say yes; or rather they will be able to accurately simulate sentience, which is arguably indistinguishable from the metaphysical state some might consider it to be.

The human brain has about 100 billion neurons. The computer I am posting from has 3.28 billion transistors in its CPU. In addition it cycles those transistors 2,830,000,000 times per second, while the human brain has analog chemical reactions taking place. In other words, the human brain's processing power still beats the pants off my CPU.

On the other hand, a computer could simulate a brain at a rate significantly slower than real time. The reactions that take place in the human brain over a period of seconds might take weeks to model; since we can control the speed of sensory input to the neural simulation, it wouldn't have any way of telling otherwise.
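
The slowed-down-simulation point can be sketched in a few lines: a loop that advances model time in fixed steps and feeds the model input keyed to model time, so however slowly each step computes in wall-clock terms, the simulated system sees a consistent timeline. This is only an illustrative skeleton; the function names and the trivial stand-in model are invented for the example, not taken from any real neural simulator.

```python
def simulate(step_fn, sensor_fn, model_dt=0.001, steps=10):
    """Advance a model through `steps` fixed increments of *model* time.

    Sensory input is indexed by model time, not wall-clock time, so the
    simulated system has no way to tell how slowly each step was computed.
    """
    state = {}
    for k in range(steps):
        t_model = k * model_dt            # model time, decoupled from wall time
        state = step_fn(state, sensor_fn(t_model), model_dt)
    return state

# Trivial stand-ins for a real model and sensor:
state = simulate(step_fn=lambda s, x, dt: {"last": x},
                 sensor_fn=lambda t: 2.0 * t)
print(state["last"])  # input sampled at the final model time (~0.018)
```

However long each `step_fn` call takes, the model only ever sees inputs stamped with its own internal clock.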

Personally though, I think our most likely and beneficial method of progress in this area would be to build AI from the ground up. For most applications where an AI would be desirable a simulated human brain is not at all what we want. Humans forget things, they think inefficiently and sloppily, and they fairly often don't do what they are told. Instead we want an AI that can respond appropriately and creatively to unexpected situations, but that will absolutely follow guidelines without question.

The military aspects of AI are clear; once the decision is made to attack a target there are a plethora of decisions to attain that goal which can be automated based on fairly simple criteria. For instance, steering a bomb onto a laser dot is something already handled by computers. An extension of this would be driving a vehicle through an environment to a destination without striking obstacles. I don't see any clear dividing line between automation and AI as long as the AI never ignores the programmed rule set.
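
The guidance examples above (steering onto a laser dot, driving toward a destination) reduce to feedback control loops. A minimal proportional controller, purely illustrative and not modeled on any real weapons or vehicle system:

```python
def steer(heading, bearing_to_target, gain=0.5, max_turn=0.2):
    """Turn toward the target bearing, clamped to a maximum turn rate."""
    error = bearing_to_target - heading
    turn = max(-max_turn, min(max_turn, gain * error))
    return heading + turn

heading = 0.0
for _ in range(50):                      # repeated small corrections converge
    heading = steer(heading, bearing_to_target=1.0)
print(round(heading, 3))                 # settles on the target bearing, 1.0
```

There is no obvious line in code like this where clamped feedback rules end and "AI" begins; both just follow a programmed rule set.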

I would argue that the ability of computers today to multi-task is roughly analogous to being self-aware. So for the answer of "when", I would say it has already happened.


This message is a reply to:
 Message 1 by DevilsAdvocate, posted 04-14-2010 10:22 PM DevilsAdvocate has responded

Replies to this message:
 Message 4 by DevilsAdvocate, posted 04-15-2010 6:08 AM Phage0070 has responded

  
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 4 of 51 (555721)
04-15-2010 6:08 AM
Reply to: Message 3 by Phage0070
04-15-2010 4:53 AM


Phage writes:

I would say yes; or rather they will be able to accurately simulate sentience, which is arguably indistinguishable from the metaphysical state some might consider it to be.

I guess that is the key. What distinguishes sentience from non-sentience and is there a difference between real sentience and simulating sentience?

Here is an example of simulating sentience. Talking with a chatbot here: AI Research yesterday, at first it seems pretty intelligent, but then you can see that this online program is just parsing your words and regurgitating well-constructed responses based on your own input. There is no actual sentience or rational thinking here. This is what is termed in the AI world "weak AI", as opposed to an actual thinking and self-aware machine, "strong AI" (which has yet to be achieved).
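
A few lines of pattern matching are enough to reproduce the "parsing and regurgitating" behavior described above. This ELIZA-style sketch (the rules and phrasings are invented for illustration) reflects the user's own words back without any model of meaning:

```python
import re

# Surface-pattern rules: match a template, echo the user's words back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i think (.*)", re.I), "What makes you think {0}?"),
]

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*m.groups())
    return "Tell me more."          # stock fallback when nothing matches

print(respond("I think machines can be sentient"))
# -> What makes you think machines can be sentient?
```

The responses look superficially attentive, but the program never represents what a machine or sentience is, which is exactly the weak-AI point.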

The human brain has about 100 billion neurons. The computer I am posting from has 3.28 billion transistors in its CPU. In addition it cycles those transistors 2,830,000,000 times per second, while the human brain has analog chemical reactions taking place. In other words, the human brain's processing power still beats the pants off my CPU.

Yes, but raw processing power and memory do not equate to sentience. If the rules and processes are not there, as they are in our brains, to give us the capability to be self-aware aka sentient, then no amount of processing power will automatically make them self-aware.

Personally though, I think our most likely and beneficial method of progress in this area would be to build AI from the ground up. For most applications where an AI would be desirable a simulated human brain is not at all what we want. Humans forget things, they think inefficiently and sloppily, and they fairly often don't do what they are told. Instead we want an AI that can respond appropriately and creatively to unexpected situations, but that will absolutely follow guidelines without question.

Agreed. Maybe we are approaching this in the wrong way? Maybe artificial life/intelligence needs to evolve in much the same way as real life/intelligence, through a machine's self-evolution of intelligence rather than simulating human intelligence based on a set of rules. The real question becomes: how do we get a machine to think on its own?

The only problem with this is, once machines start thinking on their own, can we control them, and what restrictions should we place on them? What role will moral and ethical behavior have for these machines, if any at all?

An extension of this would be driving a vehicle through an environment to a destination without striking obstacles.

DARPA is already doing this: DARPA Grand Challenge

The military is already incorporating this technology to remove the fragile human element from many tedious and dangerous military tasks. The Navy is working on pilotless drones that take off from and land on aircraft carriers. One, called Firescout, is already in service and can land on destroyers, cruisers, aircraft carriers and other air-capable ships.

The Army and Marines are working on AI land attack machines, etc.

I would argue that the ability of computers today to multi-task is roughly analogous to being self-aware

Sentience is being self-aware of one's own existence, thoughts, feelings, etc.

I think there is a vast difference between where machines are today and the level of sentience that humans are capable of. I think machines are sentient on the level of some less intelligent lifeforms, e.g. fish and insects, but not even on par with most mammals.

So how does this work on the animal level? Many animals, especially the more intelligent ones such as apes, dolphins and elephants, are aware not only of their own actions, behaviors and feelings but also of themselves as individuals. I don't think machines are anywhere close to this level yet.

Excellent dialog Phage. Thanks for your input.




This message is a reply to:
 Message 3 by Phage0070, posted 04-15-2010 4:53 AM Phage0070 has responded

Replies to this message:
 Message 5 by Phage0070, posted 04-15-2010 6:29 AM DevilsAdvocate has responded

  
Phage0070
Inactive Member


Message 5 of 51 (555724)
04-15-2010 6:29 AM
Reply to: Message 4 by DevilsAdvocate
04-15-2010 6:08 AM


DevilsAdvocate writes:

I guess that is the key. What distinguishes sentience from non-sentience and is there a difference between real sentience and simulating sentience?

That's like asking what the difference is between a duck, and a rock that is in all respects indistinguishable from a duck. I think the philosophical answer is that there isn't a difference at all.

DevilsAdvocate writes:

If the rules and processes are not there, as they are in our brains, to give us the capability to be self-aware aka sentient, then no amount of processing power will automatically make them self-aware.

Right, but the question isn't about what magical rules and processes are required for sentience, it is about what rules and processes can *pass* for sentience. In that respect the determining factor (besides the tools themselves) is the ability of the designer to fool their audience.

DevilsAdvocate writes:

Sentience is being self-aware of one's own existence, thoughts, feelings, etc.

A multitasking computer is aware of the multiple programs it is running and their output. It can dynamically adjust the time, if any, devoted to each task (thoughts) based both on current conditions and stored criteria. Those criteria can be adjusted based on previous experience, or on communication with other entities (even other computers).
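
The time-slicing behavior described here can be sketched as a toy fair-share scheduler: each task has a stored weight (the "criteria"), and each time slice goes to whichever task is furthest below its weighted share. The function and weights are invented for the example; real OS schedulers are far more involved.

```python
def schedule(tasks, ticks):
    """tasks: name -> priority weight. Grant `ticks` time slices, each to
    the task with the best weight-to-usage ratio (a toy fair-share rule)."""
    granted = {name: 0 for name in tasks}
    for _ in range(ticks):
        name = max(tasks, key=lambda n: tasks[n] / (granted[n] + 1))
        granted[name] += 1
    return granted

granted = schedule({"io": 3, "compute": 1}, ticks=8)
print(granted)   # time divided roughly in proportion to the stored weights
```

Adjusting the weights between runs, based on how the previous run went, is the kind of self-adjustment the paragraph describes.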

I'm not sure that self-awareness in such a sense can be compared to humans or animals directly. In one sense computers have precision in awareness of themselves far in excess of even humans. On the other hand humans would tend to describe them as "mindless" in that they cannot alter their programs except in the manner specified by their original creation.

I suppose this makes my definition of sentience hinge on the capacity to disobey.


This message is a reply to:
 Message 4 by DevilsAdvocate, posted 04-15-2010 6:08 AM DevilsAdvocate has responded

Replies to this message:
 Message 7 by DevilsAdvocate, posted 04-15-2010 8:21 AM Phage0070 has not yet responded

  
nwr
Member
Posts: 5586
From: Geneva, Illinois
Joined: 08-08-2005


Message 6 of 51 (555739)
04-15-2010 8:07 AM
Reply to: Message 1 by DevilsAdvocate
04-14-2010 10:22 PM


DevilsAdvocate writes:
..., do you think that machines will become self-aware (sentient) and if so how and when?

It won't happen in my lifetime, or in yours.

I think it would be a fair assessment of the current state of the art to say that the experts haven't a clue how to do this.

So we now have a computer as the top chess player. Yet, as almost everybody admits, the way it plays chess is nothing like the way that people play chess.

Back in 1950, when Turing wrote his famous article "Computing Machinery and Intelligence", it was thought that a real thinking computer was just around the corner. All we needed were computers a little faster and with a little more memory than were available.

Today people are saying that all we need are computers that are a little faster and with a little more memory.

In 100 years time, people will be saying that all we need are computers that are a little faster and with a little more memory. Or maybe it will eventually dawn on people, that the problem is not a lack of computing power.

Sorry to be a naysayer. I am just giving my honest assessment of the evidence of progress. And, honestly, we have not made any progress. The things that have been puzzles throughout 2,000 years of philosophy of mind are still puzzles today. All the computer has done is provide us with a new metaphor in which to express those ancient puzzles.


This message is a reply to:
 Message 1 by DevilsAdvocate, posted 04-14-2010 10:22 PM DevilsAdvocate has responded

Replies to this message:
 Message 8 by DevilsAdvocate, posted 04-15-2010 8:42 AM nwr has responded

  
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 7 of 51 (555742)
04-15-2010 8:21 AM
Reply to: Message 5 by Phage0070
04-15-2010 6:29 AM


Phage writes:

Me writes:

I guess that is the key. What distinguishes sentience from non-sentience and is there a difference between real sentience and simulating sentience?

That's like asking what the difference is between a duck, and a rock that is in all respects indistinguishable from a duck. I think the philosophical answer is that there isn't a difference at all.

That makes sense. If you think about it, the only difference between us and machines is the medium, i.e. organic vs. inorganic material. The laws of physics and chemistry are the same. If the end result is the same, i.e. self-awareness, then you are correct. Even humans themselves express different levels of awareness; e.g. a baby or a mentally challenged person is less self-aware than, say, a Buddhist Zen master.

Right, but the question isn't about what magical rules and processes are required for sentience, it is about what rules and processes can *pass* for sentience.

True. I would venture to say that, as self-aware as humans think they are, we are not self-aware 100% of the time. Sometimes we run on autopilot, aka instinct, during fight-or-flight survival events, e.g. fighting in wars or running out of a burning building. However, on average humans are more self-aware than any other living or non-living thing in the universe that we know of. I think that is the level we are talking about when we use the term "sentient".

I get where you are going with "pass for sentience", as that is the key criterion of the Turing Test, which is our current baseline for assessing machine sentience (can you tell, in a blind test, whether you are talking to a machine or a human?). However, this can be deceptive, because there are chatbots and other programs now that can fool humans at least some of the time but truly are not self-aware.

Another component to this is that humans can self-learn (teach themselves). Machines are at the cusp of this but have not really achieved the true capacity to teach themselves without any human intervention. I think this too plays into true sentience.

A multitasking computer is aware of the multiple programs it is running and their output. It can dynamically adjust the time, if any, devoted to each task (thoughts) based both on current conditions and stored criteria. Those criteria can be adjusted based on previous experience, or on communication with other entities (even other computers).

Yes, but humans have a virtually limitless capacity for self-correction, learning and self-teaching, whereas machines have not achieved this.

I'm not sure that self-awareness in such a sense can be compared to humans or animals directly. In one sense computers have precision in awareness of themselves far in excess of even humans. On the other hand humans would tend to describe them as "mindless" in that they cannot alter their programs except in the manner specified by their original creation.

Machines do not yet have the capacity to identify themselves as individual thinking machines, to know that they are actually thinking, to teach themselves without limitation, and to understand their place and role in the world around them. Humans have the capacity to do all of the above. I think that is the threshold we are trying to achieve.

Of course this raises the question: why do we want machines to achieve this, and what are the implications, both moral and scientific, of this accomplishment?

I suppose this makes my definition of sentience hinge on the capacity to disobey.

Agreed. Without knowing that you are an individual capable of rational thought, you do not have the true capacity to disobey or, in more precise terms, to strive against the biological/physical rules that constrain us.

Edited by DevilsAdvocate, : No reason given.





This message is a reply to:
 Message 5 by Phage0070, posted 04-15-2010 6:29 AM Phage0070 has not yet responded

  
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 8 of 51 (555745)
04-15-2010 8:42 AM
Reply to: Message 6 by nwr
04-15-2010 8:07 AM


It won't happen in my lifetime, or in yours.

Some humans also said we would never fly, never go into outer space, etc. But we have exceeded even our wildest expectations.

I believe anything that we can imagine is most likely possible; the real question is how probable it is, given our current status as the human race, and when. Of course, a lot of this has to do with the time, money and effort we put into these ventures. If we wanted to, as humans, we could already be living on Mars. The problem is that we squabble and fight too much as groups of humans instead of joining together to achieve loftier goals.

So I don't think the problem is physical impossibility so much as: is this something we want to do as a human species, and how much of our resources are we willing to devote to it?

In 100 years time, people will be saying that all we need are computers that are a little faster and with a little more memory. Or maybe it will eventually dawn on people, that the problem is not a lack of computing power.

I agree it is not just about computing power or memory as I expressed in my last post. I think we have to tackle the fundamentals of biological thinking. By emulating the human brain, I think we can emulate self-awareness, i.e. sentience. IBM is actually working on a project called "Blue Brain", in which they are creating a synthetic "brain" by reverse-engineering a biological brain down to the molecular level and recreating a digital equivalent using supercomputers running biologically realistic models of neurons: Blue Brain Project.

Sorry to be a naysayer. I am just giving my honest assessment of the evidence of progress. And, honestly, we have not made any progress. The things that have been puzzles throughout 2,000 years of philosophy of mind are still puzzles today. All the computer has done is provide us with a new metaphor in which to express those ancient puzzles.

Never say never.




This message is a reply to:
 Message 6 by nwr, posted 04-15-2010 8:07 AM nwr has responded

Replies to this message:
 Message 10 by Taq, posted 04-15-2010 9:24 AM DevilsAdvocate has responded
 Message 16 by nwr, posted 04-15-2010 12:06 PM DevilsAdvocate has responded

  
caffeine
Member
Posts: 1709
From: Prague, Czech Republic
Joined: 10-22-2008
Member Rating: 2.7


Message 9 of 51 (555755)
04-15-2010 9:23 AM


What causes sentience?
Everybody seems to be getting a bit ahead of themselves here. Before we can even begin to think about answering the question of what it would take for machines to be sentient, we'd have to have some idea of what it takes for us to be sentient. You can reply that it's the connections between neurons in the brain, but this doesn't at all address how this creates sentience.

Bluejay mentioned the ability to disobey as a criterion of sentience, but how do we even know that we have the ability to disobey? I don't mean in the simplistic sense of disobeying an instruction from somebody - computers can do that ('Illegal operation'; 'file not found' etc.). The implication seemed to be about disobeying your own programming, and I don't see any reason to assume that people can do that. How could we distinguish between someone disobeying the deterministic processes which determine how their brain works and somebody obeying them?

DevilsAdvocate talked about 'striving against biological and physical rules' but, again, how could we know if this is ever done? The human brain is an incredibly complex piece of work, and somebody acting contrary to their reproductive success (which is, I assume, the sort of thing you mean) doesn't mean they're acting against any biological or physical rules. It could just mean that, although the brain is like it is because brains like this have tended to favour reproductive success in the past, there's no guarantee that they always will in all circumstances.

Nobody has any clear idea how sentience is created, so we can't know what it would take to create it, is all I suppose I'm trying to say.


Replies to this message:
 Message 14 by DevilsAdvocate, posted 04-15-2010 10:53 AM caffeine has responded
 Message 15 by Jazzns, posted 04-15-2010 11:33 AM caffeine has responded

  
Taq
Member
Posts: 8159
Joined: 03-06-2009
Member Rating: 4.2


Message 10 of 51 (555756)
04-15-2010 9:24 AM
Reply to: Message 8 by DevilsAdvocate
04-15-2010 8:42 AM


I agree it is not just about computing power or memory as I expressed in my last post. I think we have to tackle the fundamentals of biological thinking.

That's my gut instinct as well. It's not the number of transistors that is the problem. It is the transistors themselves. Making an AI will require a whole new technology and a whole new way of looking at computing, IMHO.

The first computers were designed to do fast calculations and computer speed has been judged by that standard ever since. We have just made faster and faster calculators. The human brain is not a calculator. The human brain is a master of association, of matching patterns to patterns, voices to voices, etc. The human brain is relatively poor at doing calculations, but it can do pattern recognition better and faster than any computer.
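
The association-versus-calculation contrast can be made concrete with the classic toy model of content-addressable memory, a Hopfield network: given a corrupted cue, it settles back to the stored pattern. A minimal pure-Python sketch (the pattern and sizes are arbitrary examples):

```python
def train(patterns):
    """Hebbian weights: reinforce connections between co-active units."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, cue, steps=5):
    """Repeatedly update every unit toward the sign of its weighted input."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, -1, 1, -1, 1, -1, 1, -1]
W = train([stored])
noisy = list(stored)
noisy[0] = -noisy[0]                     # corrupt one bit of the cue
print(recall(W, noisy) == stored)        # True: the memory is recovered
```

Retrieval here is a settling process over many simple units, not an address lookup, which is the sense in which brains "match patterns" rather than calculate.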


This message is a reply to:
 Message 8 by DevilsAdvocate, posted 04-15-2010 8:42 AM DevilsAdvocate has responded

Replies to this message:
 Message 11 by Dr Jack, posted 04-15-2010 10:19 AM Taq has not yet responded
 Message 13 by DevilsAdvocate, posted 04-15-2010 10:45 AM Taq has responded

  
Dr Jack
Member (Idle past 416 days)
Posts: 3507
From: Leicester, England
Joined: 07-14-2003


Message 11 of 51 (555766)
04-15-2010 10:19 AM
Reply to: Message 10 by Taq
04-15-2010 9:24 AM


The human brain is not a calculator. The human brain is a master of association, of matching patterns to patterns, voices to voices, etc. The human brain is relatively poor at doing calculations, but it can do pattern recognition better and faster than any computer

Those things are calculations; we just don't access them at that level. By analogy, try calculating logarithms by playing Quake.


This message is a reply to:
 Message 10 by Taq, posted 04-15-2010 9:24 AM Taq has not yet responded

  
CosmicChimp
Member
Posts: 306
From: Muenchen Bayern Deutschland
Joined: 06-15-2007


Message 12 of 51 (555769)
04-15-2010 10:37 AM
Reply to: Message 1 by DevilsAdvocate
04-14-2010 10:22 PM


Greetings DevilsAdvocate et al.,

My question is with the rapid miniaturization of computer chips, increase in processing power and memory size, and introduction of novel innovations such as neural networks, nanotechnology and quantum computing, do you think that machines will become self-aware (sentient) and if so how and when?
Yes, I do see it happening, even as inevitable. I think artificial sentience will be achieved within fifty to one hundred years. I think the breakthrough will come through recursive simulation of a very highly resolved complex system, one that takes into account all of the possible combinations of the subunits and then sorts against criteria that improve the simulation. Similar to mammalian brain evolution, but producing every single possible brain each generation.
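
A generate-and-select loop of the kind described, though on a vastly smaller scale and sampling variants rather than enumerating every possibility, is easy to sketch as a toy evolutionary search. Everything here (the bit-string "genomes", the target, the rates) is invented purely for illustration:

```python
import random

TARGET = [1] * 12                         # stand-in selection criterion

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.2):
    return [1 - g if random.random() < rate else g for g in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # keep the best five, refill the rest with mutated copies of them
    parents = population[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(population, key=fitness)
print(fitness(best))                      # climbs toward the maximum of 12
```

Because the best individuals are always kept, the top fitness never decreases; sorting against a criterion that improves the population is the whole mechanism.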

This message is a reply to:
 Message 1 by DevilsAdvocate, posted 04-14-2010 10:22 PM DevilsAdvocate has not yet responded

Replies to this message:
 Message 19 by New Cat's Eye, posted 04-15-2010 12:27 PM CosmicChimp has responded

  
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 13 of 51 (555771)
04-15-2010 10:45 AM
Reply to: Message 10 by Taq
04-15-2010 9:24 AM


That's my gut instinct as well. It's not the number of transistors that is the problem. It is the transistors themselves. Making an AI will require a whole new technology and a whole new way of looking at computing, IMHO.

The first computers were designed to do fast calculations and computer speed has been judged by that standard ever since. We have just made faster and faster calculators. The human brain is not a calculator. The human brain is a master of association, of matching patterns to patterns, voices to voices, etc. The human brain is relatively poor at doing calculations, but it can do pattern recognition better and faster than any computer.

Agreed. I think, to break it down in simplistic terms, what is unique about biological computers, aka brains, is their ability to parallel-process on a much more massive scale than is possible with today’s computers. We are talking about tens of billions of neurons within the human brain, each with several thousand synapses (biochemical connections, roughly equivalent to electronic switches or transistors), firing near-simultaneously (yes, I know they do not all fire at exactly the same time). That is a massive amount of computational power, which allows very complex operations such as pattern recognition, symbolic language development and human self-awareness to occur. There is nothing we have created that compares to this type of setup.
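
The scale being described is easy to put in numbers. Taking "tens of billions" as 20 billion neurons and "several thousand" as 5,000 synapses each (both stand-in values chosen for the arithmetic, not measurements):

```python
neurons = 20e9          # "tens of billions" -- 20 billion as a stand-in
synapses_per = 5_000    # "several thousand" synapses per neuron
total_synapses = neurons * synapses_per
print(f"{total_synapses:.0e} connections")   # 1e+14 -- a hundred trillion
```

A hundred trillion connections, all active at once, is the parallel scale the paragraph is pointing at.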

Until we reach this threshold, human-like sentience (self-awareness) is not achievable. However, as I discussed previously, animal-like sentience (insect level) may be possible. That is, machines have the capability to interact independently with their surroundings but not to truly understand, on a human level, what they are interacting with.

Edited by DevilsAdvocate, : No reason given.





This message is a reply to:
 Message 10 by Taq, posted 04-15-2010 9:24 AM Taq has responded

Replies to this message:
 Message 17 by Taq, posted 04-15-2010 12:15 PM DevilsAdvocate has not yet responded

  
DevilsAdvocate
Member (Idle past 1412 days)
Posts: 1548
Joined: 06-05-2008


Message 14 of 51 (555776)
04-15-2010 10:53 AM
Reply to: Message 9 by caffeine
04-15-2010 9:23 AM


Re: What causes sentience?
Nobody has any clear idea how sentience is created, so we can't know what it would take to create it, is all I suppose I'm trying to say.

By sentience, I think we can all agree we are talking about human-like sentience: the ability to contemplate oneself and to increase one's knowledge base, both individually and collectively. Culture (the accumulation of moral and social norms) and science (the expanded accumulation of knowledge of ourselves and the universe around us) are only achievable at this level of sentience.




This message is a reply to:
 Message 9 by caffeine, posted 04-15-2010 9:23 AM caffeine has responded

Replies to this message:
 Message 18 by caffeine, posted 04-15-2010 12:23 PM DevilsAdvocate has responded
 Message 33 by Rahvin, posted 04-15-2010 6:02 PM DevilsAdvocate has responded

  
Jazzns
Member (Idle past 2222 days)
Posts: 2657
From: A Better America
Joined: 07-23-2004


Message 15 of 51 (555785)
04-15-2010 11:33 AM
Reply to: Message 9 by caffeine
04-15-2010 9:23 AM


Re: What causes sentience?
Bluejay mentioned the ability to disobey as a criterion of sentience, but how do we even know that we have the ability to disobey? I don't mean in the simplistic sense of disobeying an instruction from somebody - computers can do that ('Illegal operation'; 'file not found' etc.). The implication seemed to be about disobeying your own programming, and I don't see any reason to assume that people can do that. How could we distinguish between someone disobeying the deterministic processes which determine how their brain works and somebody obeying them?

I think we need better definitions of obey and disobey to go down this path. Your examples are flawed in the sense that they define 'disobey' as being materially incapable of performing the command. When a computer tells you 'Illegal Operation' or something equivalent, it is simply the deterministic result of being physically incapable of doing what you told it: there is very literally no possible way the electrons can flow down the wires of the integrated circuit in the exact pattern you specified.

An analogous equivalent for a human would be the refusal to detect the smell of a song. The physical reality of our universe does not permit that.

A better definition might be the choice or refusal to do something based on self-interest. This sort of kicks the can down the road a bit, because now you have to define self-interest, but it is a reduction in the scope of the problem.


If a nation expects to be ignorant and free, in a state of civilization, it expects what never was and never will be. --Thomas Jefferson

This message is a reply to:
 Message 9 by caffeine, posted 04-15-2010 9:23 AM caffeine has responded

Replies to this message:
 Message 20 by caffeine, posted 04-15-2010 12:30 PM Jazzns has not yet responded

  


Copyright 2001-2018 by EvC Forum, All Rights Reserved
