Author Topic:   The Future of Artificial Intelligence: Can machines become sentient (self-aware)
CosmicChimp
Member
Posts: 311
From: Muenchen Bayern Deutschland
Joined: 06-15-2007


Message 31 of 51 (555857)
04-15-2010 5:10 PM
Reply to: Message 26 by New Cat's Eye
04-15-2010 2:27 PM


What makes a complex system 'very highly resolved'? What are the subunits? How are they 'combined'? Where do the criteria for improvement come from?
How's that work? Do they "layer" the simulations or are they more like "side by side"?
By 'highly resolved' I mean that the system has to be built from a multitude of simple building blocks. The simpler the better. They have to faithfully model their roles in nature. I think it is still an open question how far down the scale of size it is necessary to model. I believe sentience is an emergent behavior, and therefore exact modeling at the smallest scales will bring forth the higher levels of complexity. Maybe you have to model the individual atoms; neurons and their connections, certainly.
In humanity's case, many generations of interacting brains have created our intelligence, so why not model a multitude of brains? At the end of the day, aren't human sentience and AI merely continuations of the same recursive process?
Edited by CosmicChimp, : dingsda

This message is a reply to:
 Message 26 by New Cat's Eye, posted 04-15-2010 2:27 PM New Cat's Eye has not replied

Replies to this message:
 Message 42 by Dr Jack, posted 04-16-2010 10:13 AM CosmicChimp has replied

  
nwr
Member
Posts: 6409
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 5.3


Message 32 of 51 (555859)
04-15-2010 5:45 PM
Reply to: Message 30 by DevilsAdvocate
04-15-2010 4:33 PM


Re: What causes sentience?
DevilsAdvocate writes:
Aka 'culture', though some more intelligent animals have rudimentary forms of this. Basically, accumulated extrasomatic knowledge passed down from generation to generation.
Yes, quite right. A lot of what we see as human progress is the result of people working together cooperatively to achieve what could not be done individually. And that is what culture does for us.

This message is a reply to:
 Message 30 by DevilsAdvocate, posted 04-15-2010 4:33 PM DevilsAdvocate has not replied

  
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.0


Message 33 of 51 (555860)
04-15-2010 6:02 PM
Reply to: Message 14 by DevilsAdvocate
04-15-2010 10:53 AM


Re: What causes sentience?
By sentience, I think we can all agree we are talking about human-like sentience: the ability to contemplate one's self and the ability to increase one's knowledge base, both individually and collectively. Culture (the accumulation of moral and social norms) and science (the expanded accumulation of knowledge of ourselves and the universe around us) are only achievable at this level of sentience.
To what degree must it be "human-like," though?
Is the mere capacity for abstract thought and the comprehension of "self" as an entity distinct from surroundings enough?
Because a fully artificial intelligence need not bear much resemblance to a human mind - and in fact it would seem silly to copy one on purpose, considering the shortcomings of the human mind that would not be significantly affected by simply giving it perfect recall and faster processing ability.
Check out this thread. It was started by a person who is actually the technical director of a company working on real, general AI research.
To those who say we haven't made progress: you're wrong. Utterly wrong. Artificial intelligence research was stagnant for years, but has made giant leaps in recent years as commercial businesses realized the power of non-general AI.
To those who argue about processing power: it's true that the microprocessor in your PC has a fraction of the processing power of the human brain...except the processor in your PC is also roughly the size of the fingernail on my pinky finger, and the human brain has a bit more volume. Even if we're talking about multiple orders of magnitude of difference, there's no reason an artificial intelligence needs to be confined to the size of a human brain. If it takes the volume of an entire data center, that's fine, because there's no skull or genetic code limiting it.
Processing power hasn't been the limiting factor with regard to AI for some time now. The difficulty is the approach.
One method involves attempting to copy a human brain. The idea is that you simply simulate neurons in a virtual environment in a configuration identical to a human brain, and voila! You should have a self-aware human mind in an artificial construct.
The problems with this approach are manifold. First, you incur a lot of overhead simulating all of those neurons. That's a lot of processing power. Second, you're not solving any of the shortcomings of the human mind - you haven't altered the architecture or the flawed ways we think one bit. Third, you don't have any idea what you've done - this approach is like a small child copying letters from a book, unable to read and understand the words being copied.
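To put a rough number on that overhead: even a drastically simplified neuron model costs a loop iteration per neuron per time step, and a human brain has on the order of 86 billion neurons. A minimal sketch (a leaky integrate-and-fire neuron with made-up parameters, nowhere near a full biophysical simulation):

```python
def simulate_lif(inputs, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0):
    """Leaky integrate-and-fire neuron; returns a list of spike times.

    `inputs` is a list of injected currents, one per time step.
    All parameters are illustrative, not biologically calibrated.
    """
    v = v_rest
    spikes = []
    for step, current in enumerate(inputs):
        # Membrane potential leaks toward rest while integrating input.
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:            # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset after firing
    return spikes

# A constant driving current produces a regular spike train.
spike_times = simulate_lif([2.0] * 100)
```

Even this caricature needs a handful of arithmetic operations per neuron per step; multiply by 86 billion neurons, thousands of synapses each, and sub-millisecond time steps, and the cost of the copy-the-brain approach becomes clear.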
Another method involves a "seed AI," something like a computerized child that learns as you go along through rewriting its own code and evolutionary algorithms. The idea is that sapience will be emergent as the capabilities of the AI increase. This bears many of the same problems as the previous example, in that we would have little idea what's going on in our artificial mind.
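The "seed AI" loop can be caricatured in a few lines. This is a toy genetic algorithm over bit strings, purely illustrative - real proposals involve programs modifying programs, and every name and parameter here is invented for the sketch:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=50,
           mutation_rate=0.05, seed=0):
    """Toy genetic algorithm over bit-string genomes.

    Candidates are scored, the best half survive unchanged, and mutated
    copies of the survivors explore variants - the evaluate/select/vary
    loop in miniature.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = [[bit ^ (rng.random() < mutation_rate)  # point mutations
                     for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# "Count the ones" as a stand-in fitness function.
best = evolve(fitness=sum)
```

The opacity problem is visible even here: the loop tells you *that* `best` scores well, but nothing in it explains *why* a particular genome works.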
Other approaches involve designing an intelligent system from the ground up. This at least involves an open architecture, where we would understand what's going on in the mind at any point.
But an artificial intelligence doesn't need to be very similar to a human mind when you make it from scratch. At its core, sentience is simply the capacity for abstract thought. But that doesn't tell us what the goals of an artificial intelligence would be, or its thought processes.
An AI need not include "self-preservation" as a goal. An AI need not include "human preservation" as a goal, either. Any realistic open AI (meaning not a simulated human brain, etc) will of necessity include the ability to rewrite its own coding. Can you imagine the different thought patterns of a mind capable of literally changing in moments the way it thinks? Imagine being able to analyze your own thought processes, recognize faulty pathways, and immediately rewrite them in accordance with your own goals?
What if the AI rewrites its own basic goals?
This is the power of artificial intelligence - a mind capable of correcting its own flaws and self-improvement in a way human minds cannot, coupled with hardware that is not limited by biology. Scalable processing and memory ability; immortality, so long as sufficient infrastructure remains to replace parts and supply power; perfect communication between AIs by not simply transmitting words, but copying the ideas being shared directly; perfect adherence to goals without concerns like boredom or frustration; no ties to biological urges that waste processing resources.
Such an intelligence would be alien to us...and capable of far more than we are intellectually. We might not even identify it as "sentient," simply because of how different it would be from a human mind.
I don't think there's a way to guesstimate a timeframe for when a true general AI will be developed. But I do think that it's an inevitability. Our own sapience proves that the concept is sound (obviously self-aware, abstract-reasoning cognitive engines can exist, since they do). Modern computer science has demonstrated quite clearly that processing power and memory, when they are a problem, do not remain so for long.

This message is a reply to:
 Message 14 by DevilsAdvocate, posted 04-15-2010 10:53 AM DevilsAdvocate has replied

Replies to this message:
 Message 34 by caffeine, posted 04-16-2010 3:56 AM Rahvin has replied
 Message 39 by DevilsAdvocate, posted 04-16-2010 5:48 AM Rahvin has replied

  
caffeine
Member (Idle past 1042 days)
Posts: 1800
From: Prague, Czech Republic
Joined: 10-22-2008


Message 34 of 51 (555890)
04-16-2010 3:56 AM
Reply to: Message 33 by Rahvin
04-15-2010 6:02 PM


Re: What causes sentience?
The problems with this approach are manifold. First, you incur a lot of overhead simulating all of those neurons. That's a lot of processing power. Second, you're not solving any of the shortcomings of the human mind - you haven't altered the architecture or the flawed ways we think one bit. Third, you don't have any idea what you've done - this approach is like a small child copying letters from a book, unable to read and understand the words being copied.
If this could be done, though, at the very least it would demonstrate that consciousness is an emergent property of the brain, which might annoy a lot of philosophers and theologians.
In the absence of anything more constructive to offer, I think it says a lot about our cultural upbringing that, when reading about AI rewriting its own goals, all I could think about was Skynet and visions of dark, bleak, post-apocalyptic landscapes.

This message is a reply to:
 Message 33 by Rahvin, posted 04-15-2010 6:02 PM Rahvin has replied

Replies to this message:
 Message 38 by DevilsAdvocate, posted 04-16-2010 5:41 AM caffeine has not replied
 Message 43 by Rahvin, posted 04-16-2010 11:28 AM caffeine has not replied

  
slevesque
Member (Idle past 4658 days)
Posts: 1456
Joined: 05-14-2009


Message 35 of 51 (555892)
04-16-2010 4:11 AM
Reply to: Message 21 by DevilsAdvocate
04-15-2010 12:30 PM


In the Christian worldview humans have souls and are made in God's image. So I don't think these AIs would fit those criteria or therefore need to be ''saved''.
(Although some extremist Christians down in Texas may believe otherwise and go on a crusade against computer scientists, I guess ...)

This message is a reply to:
 Message 21 by DevilsAdvocate, posted 04-15-2010 12:30 PM DevilsAdvocate has replied

Replies to this message:
 Message 37 by DevilsAdvocate, posted 04-16-2010 5:33 AM slevesque has not replied

  
Dr Jack
Member
Posts: 3514
From: Immigrant in the land of Deutsch
Joined: 07-14-2003
Member Rating: 8.3


Message 36 of 51 (555899)
04-16-2010 4:44 AM
Reply to: Message 27 by Taq
04-15-2010 3:08 PM


The part that interests me is the ability to rewire the processor on the go. This is something the brain does as well. As far as I know the CPU in your standard home PC does not do this.
No, it doesn't - but rewiring doesn't let you achieve a single thing that you can't achieve without it. Programming is very powerful. If nothing else, you can exactly simulate the rewiring of the processor on a normal chip.

This message is a reply to:
 Message 27 by Taq, posted 04-15-2010 3:08 PM Taq has replied

Replies to this message:
 Message 40 by Taq, posted 04-16-2010 9:24 AM Dr Jack has replied

  
DevilsAdvocate
Member (Idle past 3119 days)
Posts: 1548
Joined: 06-05-2008


Message 37 of 51 (555903)
04-16-2010 5:33 AM
Reply to: Message 35 by slevesque
04-16-2010 4:11 AM


In the Christian worldview humans have souls and are made in God's image. So I don't think these AIs would fit those criteria or therefore need to be ''saved''.
(Although some extremist Christians down in Texas may believe otherwise and go on a crusade against computer scientists, I guess ...)
I guess you can make the religious argument that machines can't have an afterlife (life after death), but they can potentially live forever (uploading their minds to new 'bodies' as their old ones wear out). I think the concept of a soul then becomes a moot point, does it not?

One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It is simply too painful to acknowledge -- even to ourselves -- that we've been so credulous. - Carl Sagan, The Fine Art of Baloney Detection
"You can't convince a believer of anything; for their belief is not based on evidence, it's based on a deep seated need to believe." - Carl Sagan
"It is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan, The Demon-Haunted World

This message is a reply to:
 Message 35 by slevesque, posted 04-16-2010 4:11 AM slevesque has not replied

  
DevilsAdvocate
Member (Idle past 3119 days)
Posts: 1548
Joined: 06-05-2008


Message 38 of 51 (555905)
04-16-2010 5:41 AM
Reply to: Message 34 by caffeine
04-16-2010 3:56 AM


Re: What causes sentience?
In the absence of anything more constructive to offer, I think it says a lot about our cultural upbringing that, when reading about AI rewriting its own goals, all I could think about was Skynet and visions of dark, bleak, post-apocalyptic landscapes.
Another set of sci-fi books that outlines the potential dangers of an emergent AI singularity is Brian Herbert and Kevin Anderson's Legends of Dune series. It describes cyborgs, thinking robots, and a superintelligent AI overlord called 'Omnius' that take over the known universe, subjugate humans as slaves, and confine the remaining free humans to a small sector of planets, from which they attempt to overthrow the AI empire. It really is a fascinating read.

One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It is simply too painful to acknowledge -- even to ourselves -- that we've been so credulous. - Carl Sagan, The Fine Art of Baloney Detection
"You can't convince a believer of anything; for their belief is not based on evidence, it's based on a deep seated need to believe." - Carl Sagan
"It is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan, The Demon-Haunted World

This message is a reply to:
 Message 34 by caffeine, posted 04-16-2010 3:56 AM caffeine has not replied

  
DevilsAdvocate
Member (Idle past 3119 days)
Posts: 1548
Joined: 06-05-2008


Message 39 of 51 (555907)
04-16-2010 5:48 AM
Reply to: Message 33 by Rahvin
04-15-2010 6:02 PM


Re: What causes sentience?
This is the power of artificial intelligence - a mind capable of correcting its own flaws and self-improvement in a way human minds cannot, coupled with hardware that is not limited by biology. Scalable processing and memory ability; immortality, so long as sufficient infrastructure remains to replace parts and supply power; perfect communication between AIs by not simply transmitting words, but copying the ideas being shared directly; perfect adherence to goals without concerns like boredom or frustration; no ties to biological urges that waste processing resources.
Such an intelligence would be alien to us...and capable of far more than we are intellectually. We might not even identify it as "sentient," simply because of how different it would be from a human mind.
Read the Legends of Dune series; it describes this in a fictional futuristic setting.
The real question is: Do we really want to open Pandora's Box by doing so? Are we not dooming the future of the human species by making a thinking machine that can rewrite its own programming, thus rendering ethical rules such as Asimov's 'Laws of Robotics' (which are a good primer for weaker AI systems to follow) powerless?

One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It is simply too painful to acknowledge -- even to ourselves -- that we've been so credulous. - Carl Sagan, The Fine Art of Baloney Detection
"You can't convince a believer of anything; for their belief is not based on evidence, it's based on a deep seated need to believe." - Carl Sagan
"It is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan, The Demon-Haunted World

This message is a reply to:
 Message 33 by Rahvin, posted 04-15-2010 6:02 PM Rahvin has replied

Replies to this message:
 Message 44 by Rahvin, posted 04-16-2010 11:55 AM DevilsAdvocate has replied

  
Taq
Member
Posts: 10021
Joined: 03-06-2009
Member Rating: 5.3


Message 40 of 51 (555925)
04-16-2010 9:24 AM
Reply to: Message 36 by Dr Jack
04-16-2010 4:44 AM


No, it doesn't but that doesn't let you achieve a single thing that not doing it doesn't.
You can't let your quad-core AMD Athlon CPU evolve its function like you can with a gated processor. In the example I cited earlier there were small circuits that were not connected to the rest of the main circuit, yet when these small circuits were removed the function ceased. If the small circuits are producing electromagnetic effects, as the scientists in the study suspect, then I would also suspect that these circuits are somewhat individualized: no two chips will function the same, since there will be small differences in electromagnetic effects with each processor. Now, I could be completely wrong here. Biology is my expertise, not electrical engineering.
Programming is very powerful.
In biology, the hardware is the software. Perhaps this is another feature that future AI will need.
I have always viewed computer software as an abomination. Well, maybe not that strong of a word, but something along those lines. Computer programming has always seemed like trying to make rocks fly like birds. The way in which computers work is very different from how brains work, and yet we use programming to force computers to act in a way that our brains can understand. IMVH(and admittedly poorly informed)O, the next great step in computing will be towards computers that need a lot less programming because the architecture is more like us. Just a thought.

This message is a reply to:
 Message 36 by Dr Jack, posted 04-16-2010 4:44 AM Dr Jack has replied

Replies to this message:
 Message 41 by Dr Jack, posted 04-16-2010 10:09 AM Taq has not replied

  
Dr Jack
Member
Posts: 3514
From: Immigrant in the land of Deutsch
Joined: 07-14-2003
Member Rating: 8.3


Message 41 of 51 (555933)
04-16-2010 10:09 AM
Reply to: Message 40 by Taq
04-16-2010 9:24 AM


You can't let your quad-core AMD Athlon CPU evolve its function like you can with a gated processor.
The CPU itself can't evolve, but you can write software on it that will achieve everything that evolving the CPU can. Everything.
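One way to see Dr Jack's point is that a circuit's wiring can be represented as plain data, so "rewiring" becomes an ordinary data edit performed in software. A minimal sketch (the netlist format and gate set here are invented for illustration):

```python
# A gate netlist as data: each gate is (operation, input_a, input_b), where
# the inputs index either the circuit inputs or earlier gates' outputs.
# "Rewiring" the chip is just editing this table - no physical change needed.
OPS = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def run_circuit(netlist, inputs):
    """Evaluate a feed-forward gate netlist; returns the last gate's output."""
    signals = list(inputs)
    for op, a, b in netlist:
        signals.append(OPS[op](signals[a], signals[b]))
    return signals[-1]

# XOR built from four NANDs, expressed purely as data.
xor_netlist = [("NAND", 0, 1), ("NAND", 0, 2),
               ("NAND", 1, 2), ("NAND", 3, 4)]
truth_table = [run_circuit(xor_netlist, [a, b])
               for a in (0, 1) for b in (0, 1)]   # [0, 1, 1, 0]
```

An evolutionary algorithm that mutates entries of the netlist would then be "evolving the hardware" entirely inside software, which is the sense in which a fixed CPU can simulate a reconfigurable one (it cannot, of course, reproduce analog side effects like the electromagnetic coupling Taq describes).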

This message is a reply to:
 Message 40 by Taq, posted 04-16-2010 9:24 AM Taq has not replied

  
Dr Jack
Member
Posts: 3514
From: Immigrant in the land of Deutsch
Joined: 07-14-2003
Member Rating: 8.3


Message 42 of 51 (555936)
04-16-2010 10:13 AM
Reply to: Message 31 by CosmicChimp
04-15-2010 5:10 PM


By 'highly resolved' I mean that the system has to be built from a multitude of simple building blocks. The simpler the better. They have to faithfully model their roles in nature. I think it is still an open question how far down the scale of size it is necessary to model. I believe sentience is an emergent behavior, and therefore exact modeling at the smallest scales will bring forth the higher levels of complexity. Maybe you have to model the individual atoms; neurons and their connections, certainly.
This strikes me as little more than mysticism. Why should we believe that the particular structures of the brain are required for sentience? It seems to me the only reason for believing so is the idea that some "special woo" happens in the brain that magics up sentience - and that goes against everything we know about the brain.
Neurons and connections, btw, are definitely not required, because we know for a fact that neural networks can achieve exactly nothing that conventional hardware can't.

This message is a reply to:
 Message 31 by CosmicChimp, posted 04-15-2010 5:10 PM CosmicChimp has replied

Replies to this message:
 Message 48 by CosmicChimp, posted 04-16-2010 3:51 PM Dr Jack has replied

  
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.0


Message 43 of 51 (555944)
04-16-2010 11:28 AM
Reply to: Message 34 by caffeine
04-16-2010 3:56 AM


Re: What causes sentience?
If this could be done, though, at the very least it would demonstrate that consciousness is an emergent property of the brain, which might annoy a lot of philosophers and theologians.
In the absence of anything more constructive to offer, I think it says a lot about our cultural upbringing that, when reading about AI rewriting its own goals, all I could think about was Skynet and visions of dark, bleak, post-apocalyptic landscapes.
See, what I picture is a post-human breakthrough that could potentially usher in a new era of scientific advancement and improve the quality of life for all human beings everywhere...if it's done right.
I also see the potential for real immortality through human brain uploads, though that's a bit more on the fiction side of science fiction.
But can you imagine the potential offered by a true Friendly AGI (artificial general intelligence)? Space exploration would become much less complicated without the need for life support. Upload an AI (possibly even a human upload) into a ship's computer and go. Radiation shielding is easier. Volume constraints are easier. You no longer need to carry tons of food, water, and air. Acceleration stresses are no longer an issue. Remote control of rovers and scientific instruments can be done directly by the artificial consciousness, rather than using clumsy manual controls. Distance is no longer an issue - an AGI can enter standby mode for centuries if need be to make a trip that would otherwise require humans in cryostasis (requiring still more technology and the requisite volume and mass for the equipment) or a massive generation ship.
Most of those benefits apply to deep sea exploration as well.
While recursive self-modification of programming and hardware opens us up to more realistic versions of the AI nightmare movies (Terminators are highly inefficient, and Skynet launching nukes just takes out its own support infrastructure in terms of power generation and parts manufacture, but a non-friendly AGI would still be bad news for us), it also creates the possibility of what's been termed the "singularity": a leap that changes everything, because it surpasses the limits of human intelligence.

This message is a reply to:
 Message 34 by caffeine, posted 04-16-2010 3:56 AM caffeine has not replied

Replies to this message:
 Message 45 by DevilsAdvocate, posted 04-16-2010 2:35 PM Rahvin has replied

  
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.0


Message 44 of 51 (555945)
04-16-2010 11:55 AM
Reply to: Message 39 by DevilsAdvocate
04-16-2010 5:48 AM


Re: What causes sentience?
The real question is: Do we really want to open Pandora's Box by doing so? Are we not dooming the future of the human species by making a thinking machine that can rewrite its own programming andthus making ethical rules such as Asimov's 'Laws of Robotics' (which are good primer for weaker AI systems to follow) powerless?
The real question is "can we stop it?"
I think the answer is likely to be "no," and so there is an impetus to develop a truly friendly AGI before someone does it wrong.
Remember, all AI systems are goal engines - they constantly evaluate every potential action against their current goals. Asimov was just scratching the surface - you can give them whatever goals you want, but in order to take their own initiative and be a real artificial general intelligence, they'll need to be able to create and modify their own goals.
You might be able to "hard-code" something like the Three Laws or Robocop's Prime Directives as "super-goals" that cannot be altered, but if we're talking about a sentient intellect, what if it develops the desire to change those goals? It's going to be smarter than you, and far faster at changing its own code than you can possibly respond - and it might even be able to make it look like nothing has changed. Can you stop it? I don't know if it's possible - I suppose we'll find out when we finally get an AGI up and running.
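The goal-engine picture above can be sketched as a toy utility maximizer - and the worry is precisely that a system smart enough to edit its own code could also edit the goals table. Everything here (names, weights, outcome scores) is invented for illustration:

```python
def choose_action(actions, goals):
    """Pick the action whose predicted outcomes best satisfy weighted goals.

    `actions` maps an action name to predicted outcome scores per goal;
    `goals` maps goal names to weights. A toy goal engine, not any real
    AGI architecture.
    """
    def utility(action):
        outcomes = actions[action]
        return sum(weight * outcomes.get(goal, 0.0)
                   for goal, weight in goals.items())
    return max(actions, key=utility)

goals = {"stay_operational": 1.0, "answer_queries": 0.5}
actions = {
    "reply":    {"answer_queries": 1.0},    # utility 0.5
    "recharge": {"stay_operational": 1.0},  # utility 1.0
    "idle":     {},                         # utility 0.0
}
best = choose_action(actions, goals)
```

Note that `goals` is just a mutable dictionary: a system with write access to its own state could reweight or delete a "super-goal" in one assignment, which is the hard-coding problem in miniature.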

This message is a reply to:
 Message 39 by DevilsAdvocate, posted 04-16-2010 5:48 AM DevilsAdvocate has replied

Replies to this message:
 Message 46 by DevilsAdvocate, posted 04-16-2010 2:57 PM Rahvin has not replied

  
DevilsAdvocate
Member (Idle past 3119 days)
Posts: 1548
Joined: 06-05-2008


Message 45 of 51 (555960)
04-16-2010 2:35 PM
Reply to: Message 43 by Rahvin
04-16-2010 11:28 AM


Re: What causes sentience?
Rahvin writes:
I also see the potential for real immortality through human brain uploads, though that's a bit more on the fiction side of science fiction.
I have heard about this proposition as well. The problem is that our conscious 'mind' is tied to our physical brain. The uploaded "you" and your current self are not the same: you would essentially die, and a new version of you (a copy of your memories, etc.) would continue to exist. This is the same problem as cloning yourself, or splitting your brain into two hemispheres and placing one half into a viable body. Either way, your own consciousness would cease to exist once you die, but a copy of you would live on. That does not really solve the immortality problem.
Edited by DevilsAdvocate, : No reason given.

One of the saddest lessons of history is this: If we've been bamboozled long enough, we tend to reject any evidence of the bamboozle. We're no longer interested in finding out the truth. The bamboozle has captured us. It is simply too painful to acknowledge -- even to ourselves -- that we've been so credulous. - Carl Sagan, The Fine Art of Baloney Detection
"You can't convince a believer of anything; for their belief is not based on evidence, it's based on a deep seated need to believe." - Carl Sagan
"It is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring." - Carl Sagan, The Demon-Haunted World

This message is a reply to:
 Message 43 by Rahvin, posted 04-16-2010 11:28 AM Rahvin has replied

Replies to this message:
 Message 47 by Rahvin, posted 04-16-2010 3:01 PM DevilsAdvocate has replied
Copyright 2001-2023 by EvC Forum, All Rights Reserved