Author Topic:   The Social Implications Of "The Singularity Moment"
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.2


Message 58 of 169 (604715)
02-14-2011 1:44 PM
Reply to: Message 1 by Phat
02-12-2011 10:39 AM


Phat writes:
Recently, I read an article in Time Magazine titled "2045: The Year Man Becomes Immortal." In it, they discuss the rapid advance in artificial intelligence, and they have popularized the phrase "the Singularity" as the moment when computers become capable themselves of designing more intelligent computers, ad infinitum.
Time Magazine did not coin the term "Singularity" as it pertains to AI. The usage goes back at least to Vernor Vinge's 1993 essay "The Coming Technological Singularity," and the underlying idea of runaway machine intelligence to I. J. Good's 1965 "intelligence explosion."
So-called "post-humanism" has driven a large amount of fiction as authors speculate about the social impact of the introduction of an Artificial General Intelligence (the term "general" is important; artificially intelligent programs already exist, but they are very specific in their application and not adaptable to multiple or unforeseen situations). We're all familiar with the Terminator-style apocalyptic visions of the effects of AGI, for example.
As to whether a Singularity event will happen at all...it seems inevitable that AGI will eventually be developed. After all, we know that a general intelligence is possible - every human brain is an example of an adaptable intellect. It seems logically foolish to claim that while a natural general intelligence is possible, it is impossible for humans to artificially create one.
It's far, far easier to modify a computer program or computer hardware than it is to modify the living brain of a human being.
It's far, far easier to keep track of what specifically is going on inside of a computer program, what task is currently being processed, and how specifically it works, than it is to do the same with a living human brain.
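To make that traceability point concrete, here is a minimal sketch using Python's standard-library sys.settrace hook to log every line a function executes - the kind of step-by-step introspection that has no analogue for a living brain. The function being traced is a made-up example.

    import sys

    def tracer(frame, event, arg):
        # Called by the interpreter for events in traced frames;
        # printing on "line" events logs every statement as it runs.
        if event == "line":
            print(f"executing {frame.f_code.co_name}, line {frame.f_lineno}")
        return tracer

    def add_and_double(x):  # hypothetical function to observe
        y = x + 1
        return y * 2

    sys.settrace(tracer)    # switch tracing on
    result = add_and_double(5)
    sys.settrace(None)      # and off again
    print("result:", result)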
Computer processing happens far faster than human thought, and it is not bound to a fixed physical footprint. A human brain needs to fit within the confines of a human skull and has to work with the finite energy and nutritional resources of a human body; electronic computers are far more easily scaled up by adding processors, memory, storage, and so on.
Can you imagine, then, an artificial intelligence that is capable of altering itself? Of analyzing its own thought processes, assessing its own performance pursuant to its goals, and modifying its core programming to be more efficient? Human beings are flawed intelligences, with cognitive faults like confirmation bias; imagine being able to actually, literally, reprogram those faults out of your thought process. An AGI would likely be able to do that, and depending on the hardware allocated and processing requirements for running the program, might well be able to do it faster than we can track.
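Nobody knows what real self-modification would look like, but as a toy illustration of the loop just described - measure your own performance, propose a revision to yourself, keep it only if it scores better - here is a minimal hill-climbing sketch. The performance function and parameter list are stand-ins invented for the example, not a claim about how an AGI would actually work.

    import random

    def performance(params):
        """Stand-in fitness test: how close the parameters are to some goal."""
        goal = [3.0, -1.0, 2.5]
        return -sum((p - g) ** 2 for p, g in zip(params, goal))

    def self_improve(params, generations=1000, step=0.1):
        best_score = performance(params)
        for _ in range(generations):
            # Propose a small revision to one of our own "settings".
            candidate = list(params)
            i = random.randrange(len(candidate))
            candidate[i] += random.uniform(-step, step)
            # Keep the revision only if it measurably improves performance.
            score = performance(candidate)
            if score > best_score:
                params, best_score = candidate, score
        return params, best_score

    mind, score = self_improve([0.0, 0.0, 0.0])
    print(f"final parameters: {mind}, score: {score:.4f}")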
Some of the implications of AGI are difficult to predict; others are easy.
Some potential pathways to AGI involve trying to duplicate a human brain in a computer environment; one proposed technique is to simulate every neuron of a human brain in software. This might produce a more human-like AGI, but it has significant downsides: simulation adds a heavy layer of processing overhead, and simulating human neurons is rather like copying Chinese characters when you can't read the language - you reproduce the form while still having very little idea what's going on inside. (Conversely, one of the benefits is that such a simulation could be paused and the activity of every neuron traced, which might actually help us gain a better understanding of our own brains.) The religious reaction to such a human-like AGI seems easy enough to predict - there would be debates over whether such constructs have "souls," whether they are an example of "playing God," and so on.
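For a feel of what "simulating neurons" means at the smallest scale, here is a sketch of a single leaky integrate-and-fire neuron, a textbook simplified neuron model. A whole-brain emulation would scale something like this - in far more biological detail - to tens of billions of interconnected neurons. The parameter values are typical textbook numbers, not figures from any actual proposal.

    def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0, resistance=10.0):
        """Return spike times (ms) for a current trace sampled every dt ms."""
        v = v_rest
        spikes = []
        for step, i_in in enumerate(input_current):
            # Membrane potential leaks toward rest and is driven by input.
            v += ((v_rest - v) + resistance * i_in) * (dt / tau)
            if v >= v_thresh:          # threshold crossed: fire a spike
                spikes.append(step * dt)
                v = v_reset            # then reset the membrane potential
        return spikes

    # A constant 2.0 nA drive for 100 ms yields a regular spike train.
    print("spike times (ms):", simulate_lif([2.0] * 1000))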
Imagine that neuron-simulation AGI paves the way for human brain uploads - being able to transfer a human consciousness from a biological body to a computer. Would "uploaded" humans still retain human rights? If, once uploaded, such intelligences can modify themselves, would they tend to remain particularly "human" in their thought processes? Would they simply simulate for themselves a Utopian dreamworld like the Matrix and leave the rest of us behind? If survival simply means an available power source and an industrial base capable of replacing defective components, and if backups are possible, would this then not be a route to human immortality?
Other potential pathways involve creating an AGI from the ground up, without trying to copy the human brain at all. Some of these methods carry the benefit of letting us understand what's happening in the AGI and how it works from day one, but they may carry the downside of producing something wholly alien to the way we think. There's no requirement that an AGI be sentient, or have self-preservation as a goal, and so on. Interacting with a machine that is quite literally smarter than you but is not self-aware in the way that you and I are, or that has drastically different goals from those humans generally follow (self-preservation, procreation, entertainment, etc.), would be an interesting experience, to say the least. An "alien" AGI would likely face a steeper battle for any sort of rights; an AGI that isn't sentient would likely not trigger much ethical or religious debate at all.
If an AGI is simply a complex computer program, it would be trivially simple to copy an AGI. Creating just one means that it should be possible to create as many as you like provided you can obtain the hardware to run them.
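That "trivially simple" claim is easy to demonstrate at toy scale: in software, duplicating an agent is just duplicating its state. A hypothetical ToyMind class, purely for illustration:

    import copy

    class ToyMind:
        """Stand-in for an AGI: some identity plus accumulated state."""
        def __init__(self, name, memories=None):
            self.name = name
            self.memories = memories or []

        def experience(self, event):
            self.memories.append(event)

    original = ToyMind("alpha")
    original.experience("first boot")

    # A deep copy yields a fully independent duplicate of the entire state;
    # from this point on, the two instances diverge on their own.
    duplicate = copy.deepcopy(original)
    duplicate.name = "alpha-2"
    duplicate.experience("woke up as a copy")

    print(original.memories)   # ['first boot']
    print(duplicate.memories)  # ['first boot', 'woke up as a copy']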
AGIs, not being bound to the requirements of flesh-and-blood bodies, could open up new horizons for space travel. A 1000-year trip sounds less troublesome for an AGI whose lifespan is measured in terms of available power and the occasional replacement part. Food, water, and air would be unnecessary, as would exercise, space to move around, some of the radiation shielding, and so on. An AGI could interface directly with all manner of sensors rather than being limited to the senses of a human being; it could "see" x-rays directly rather than through false-color intermediary photographs, for example. A single AGI on a ship equipped with telepresence utility bodies (repair bots, rovers, probes, etc.) could hypothetically carry out an entire interplanetary mission on its own. All it needs is enough fuel for transportation and power.
The Singularity, at its heart, represents the question, "what happens when we build something that's smarter than we are?" Will we be left behind as our intellectual children outpace us? Will we "upgrade" ourselves with cybernetic implants or upload our brains to keep up? Will the machines be friendly towards us, or will they try to kill us off as potential threats?
It's easy to predict things like easier space exploration given AGI. It's easy to predict some cultural challenges, like rights for AGIs, their place in society, religious reactions, etc.
It's difficult to predict exactly what happens to us. Many post-humanists base their predictions on the assumption that AGI would be able to solve many of our basic problems, like the energy crisis, through an exponential increase in the rate of technological advancement. While that assumption isn't too unlikely in general, the specifics are where the predictions start to fail. The development of an AGI tomorrow doesn't necessarily put us on a three-year plan to working fusion power reactors. If human "uploads" become possible, that says nothing about the cost of the procedure or of the hardware to run it.
I don't think we can accurately make predictions about the social consequences, simply because there are too many massively important variables. What is the AGI like? What are its core goals, and how do we fit into them? Can it change its goals? Would it? Can we reasonably pull the plug? What's the economic cost of running one? Will we be limited to only a few in the world because they need building-sized data centers and lots of power, or will they be as common as personal computers? Can human beings relate to them in our interactions, or are they so much different in their thought processes that they feel alien to us? Are they sentient?
That's why opinions of the Singularity are so varied. It's very likely that an AGI will be developed; it's very likely that an AGI could be our intellectual superior in every way, thinking faster, recalling data more accurately, being able to modify itself to overcome flaws without lengthy 12-step programs, and able to integrate new information quickly and easily. Everything beyond that, from HAL-9000 to the Culture to Terminator to the Matrix to a post-humanist Utopia is nearly pure speculation.

This message is a reply to:
 Message 1 by Phat, posted 02-12-2011 10:39 AM Phat has not replied

  
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.2


Message 95 of 169 (604835)
02-15-2011 11:34 AM
Reply to: Message 91 by CosmicChimp
02-15-2011 10:08 AM


CosmicChimp writes:
If that kind of complexity were so close at hand for us here, then I would expect to see signs of the same originating from elsewhere - other nearby planets. Assuming an affirmative in our case and elsewhere (where there should even be multiple instances), then why are we not seeing any signs of such AGIs? Is it because we are special, or the first ones? No, we should expect to be right about in the middle of it all.
Ah, the Fermi Paradox. I've never seen it used as an argument against AI, though, and there's good reason.
AGI is irrelevant to the Fermi Paradox. You don't need AGI to be bleeding radio and other transmissions all over the Universe. We've been doing it since the 30s, long before we even conceived of such things as Turing tests. Hell, that was before transistors.
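A quick back-of-envelope on that point: broadcasts from the mid-1930s have now had roughly 75 years to spread, so any listener within about 75 light-years could already have heard us. Assuming a commonly cited local stellar density of roughly 0.004 stars per cubic light-year (an assumption plugged in for illustration, not a measured input), the arithmetic sketches out like this:

    import math

    YEARS_BROADCASTING = 2011 - 1936    # strong broadcasts since ~mid-1930s
    STARS_PER_CUBIC_LY = 0.004          # assumed local stellar density

    radius_ly = YEARS_BROADCASTING      # radio travels one light-year per year
    volume_ly3 = (4 / 3) * math.pi * radius_ly ** 3
    stars_reached = volume_ly3 * STARS_PER_CUBIC_LY

    print(f"radio bubble radius: ~{radius_ly} light-years")
    print(f"stars already inside it: ~{stars_reached:,.0f}")  # on the order of 7,000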
Why should we expect to see "signs" of extraterrestrial AGIs? Why would not seeing such signs be evidence against the possibility of AGI, as opposed to simply evidence against intelligent life existing? What does the existence or nonexistence of extraterrestrial AGI have to do at all with how close we are or are not to developing our own? I'm pretty sure that extraterrestrial solutions to the speed of light limit would have absolutely nothing to do with predicting how close we would be to our own solution. What signs do you think we'd even see?
It's interesting that you've conflated the Fermi Paradox - a form of evidence against the existence of extraterrestrial intelligent life (at least life developed to the point of radio transmission, assuming that some extraterrestrial intelligences reached that technological point early enough for their speed-of-light transmissions to be reaching Earth by now) - with a potential argument against the possibility of AGI. I'm curious: even if we are totally alone in the Universe, and there's no extraterrestrial intelligent life anywhere, what exactly does that have to do with whether AGI is possible or not?
Further, whether AGIs are spamming interstellar space with transmissions or are completely nonexistent, what does that have to do with whether we are close to or far from developing AGI ourselves?
You're looking for things that are irrelevant to the Singularity.
If you want to look for signs that we, here on Earth, may be approaching the dawn of AGI, then you need to look here, at Earth. You should expect to see scientists working on developing AGI. Those are definitely here. You should expect to see non-generalist applications of AI being developed and used in modern society. We have those, from Deep Blue to Google to video games.
But as to whether AGI is possible at all, there's simply no effective argument against it. The human brain is a general intelligence, therefore general intelligences are possible. It's absurd to think that it's impossible for humans to ever duplicate artificially what already exists in nature - therefore AGI must be possible, even if we haven't figured it out yet. To refute that basic chain of logic, you'd need either to prove that human minds are not general intelligences, or to show a mechanism that prevents humanity from ever duplicating a natural phenomenon. Good luck with that.

This message is a reply to:
 Message 91 by CosmicChimp, posted 02-15-2011 10:08 AM CosmicChimp has replied

Replies to this message:
 Message 96 by Straggler, posted 02-15-2011 11:41 AM Rahvin has replied
 Message 103 by xongsmith, posted 02-15-2011 8:41 PM Rahvin has not replied
 Message 105 by CosmicChimp, posted 02-16-2011 8:08 AM Rahvin has not replied

  
Rahvin
Member
Posts: 4039
Joined: 07-01-2005
Member Rating: 8.2


Message 97 of 169 (604837)
02-15-2011 11:53 AM
Reply to: Message 96 by Straggler
02-15-2011 11:41 AM


Re: Singularity Moment Vs The Great Temptation
Straggler writes:
You can argue that Fermi's paradox and the question posed here are related, as is done in The XBox Challenge (Message 90) - at least in the sense of 'The Great Temptation' offering an interesting counterpoint/alternative to the inevitability of the 'Singularity Moment' under discussion.
Which one of the two do I subscribe to? Probably neither. Maybe a bit of both. I dunno.
You cannot argue that because no extraterrestrial transmissions have been picked up, AGI is impossible. That's a blatant non sequitur - like saying that since we don't hear alien transmissions, it's impossible to develop computers at all.
You cannot at all argue that the presence or absence of extraterrestrial AGI means that we here on Earth are closer or farther away from developing our own AGI - unless aliens are intending to help us speed up the process, the two are completely separate.
Even if the so-called "Great Temptation" has affected every other species in the universe, it still has nothing whatsoever to do with whether or not AGI is possible, nor whether we on Earth are close or far from developing it.

This message is a reply to:
 Message 96 by Straggler, posted 02-15-2011 11:41 AM Straggler has replied

Replies to this message:
 Message 98 by Straggler, posted 02-15-2011 12:03 PM Rahvin has not replied

  