Topic: AI - Do humans think algorithmically?
AZPaul3 Member Posts: 8685 From: Phoenix Joined: Member Rating: 6.1
Why do futurists keep thinking that computers with human level intelligence and beyond are always just around the corner? Most of it, I think, is to sell books.
Why do we get excited when a computer finally beats the best human chess player and treat this as some sort of milestone? Why do we create robots that walk around on two legs and get excited when they can climb stairs, or robots that mimic human facial expressions? Because we humans, spurred by Hollywood SciFi, from Bender to Data, love us some good robot servants and play toys and with each new development we can see that coming to be.
Why does the Turing test rely on being unable to distinguish between human and AI, and why do numerous research programmes seek to replicate the workings of the human brain by mimicking neural networks or seeking the sort of ‘parallel processing’ found in organic brains? Because the human mind is the A-number-1 example of intellect in our universe and, maybe, just maybe, we can learn better computer architectures by studying the best. Neural nets, modeled on our brain studies, have become an important tool in the advanced computer landscape. The new phase-change memory, also modeled on brain architecture, will give neural nets a turbo charge.
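For anyone wondering what a neural net actually is under the hood, it boils down to weighted sums pushed through nonlinearities, with every "neuron" computing its output independently — that's the parallel-processing analogy. A toy sketch in Python (the layer sizes, weights, and inputs here are arbitrary, purely for illustration):

```python
import math
import random

def forward(inputs, weights, biases):
    """One neural-net layer: each neuron takes a weighted sum of the
    inputs plus a bias, then squashes it through a sigmoid. Each
    neuron's output is independent of the others."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
    return outputs

# Toy network: 3 inputs feeding 2 neurons with random weights
random.seed(0)
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
biases = [0.0, 0.0]
print(forward([0.5, -0.2, 0.9], weights, biases))  # two values in (0, 1)
```

Real networks stack many such layers and *learn* the weights from data rather than drawing them at random, but the per-neuron arithmetic is exactly this simple.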
Research into AI theory of mind, multiple other examples, and the (often hyperbolic) claims of those in the field about the imminence of replicating human-like thought all suggest that we are very interested in machines capable of human-like cognition. Modeling the human brain in all its wondrous capabilities is the Holy Grail for the techno-nerds and fiction writers, but the goals of business and government may be something less. Narrowly focused machines for super-specific cognitive tasks would propagate like rabbits across the military, industry and, eventually, the rest of society. Nwr is spot on. We’re not looking for overlords. We're looking for intelligent but limited slaves.

Eschew obfuscation. Habituate elucidation.
Tanypteryx Member Posts: 4597 From: Oregon, USA Joined:
Nwr is spot on. We’re not looking for overlords. We're looking for intelligent but limited slaves.

Yep, I agree with that. But I think a lot of what is driving the fantasy of a human construct in science fiction and Nerdia is the desire for a companion species that has full communication capacity and self-awareness like we experience. We are alone on this planet. Our communications with the most intelligent dozen or so other species are rudimentary. The UFO aliens have really let us down. These hypothetical AI robots would be able to communicate right from the moment they are switched on.

The question soon arises: if they are self-aware, will they automatically also be independent of human control? That's something humans should discuss a lot. In every SciFi world where the AI is one or a few bigass computers that control a bunch of machines or robots or weapon systems, it only ends well for humans if the laws of physics are broken a dozen times in the course of the story. We should not build this kind of AI.

What if Eleanor Roosevelt had wings? -- Monty Python
One important characteristic of a theory is that it has survived repeated attempts to falsify it. Contrary to your understanding, all available evidence confirms it. --Subbie
If evolution is shown to be false, it will be at the hands of things that are true, not made up. --percy
The reason that we have the scientific method is because common sense isn't reliable. -- Taq
AZPaul3 Member Posts: 8685 From: Phoenix Joined: Member Rating: 6.1
The question soon arises, if they are self-aware will they automatically also be independent of human control?

Eventually, maybe so. Unintended consequences and all that. But I see humanity, should we survive long enough, limiting the reach, indeed the intellect, of most AI units tuned to specific tasks. Super-intelligent aircraft carriers, interplanetary space probes, brain implants. I could use an internal knowledge assistant. Remember HAL-9000 from 2001: A Space Odyssey and his sister SAL-9000 from the later mission? Reaching that level of AI is the level at which we will say we have it and still maintain overall control.

The bad parts are going to be the super-intelligent market traders and the constant blare of the super-intelligent sales and propaganda outlets. Fake news on steroids.

Eschew obfuscation. Habituate elucidation.
Tanypteryx Member Posts: 4597 From: Oregon, USA Joined:
The bad parts are going to be the super-intelligent market traders and the constant blare of the super-intelligent sales and propaganda outlets. Fake news on steroids.

Yeah, you can think of it running amok in almost every field of commerce, exploiting the mountain of data that defines the digital you and the digital me. We pretty much deserve the din!

Our decision makers in the U.S. have made clear that they would rather risk million-dollar drones than American pilots and jets. The compulsion to replace human combatants with machines has got to be occurring to asshole politicians around the world. Weapon systems are already some of the biggest business deals on the planet.

What if Eleanor Roosevelt had wings? -- Monty Python
One important characteristic of a theory is that it has survived repeated attempts to falsify it. Contrary to your understanding, all available evidence confirms it. --Subbie
If evolution is shown to be false, it will be at the hands of things that are true, not made up. --percy
The reason that we have the scientific method is because common sense isn't reliable. -- Taq
riVeRraT Member (Idle past 712 days) Posts: 5788 From: NY USA Joined:

Do we think algorithmically? Aren't AI and algorithms two different things? Would AI ever get depressed and commit suicide?
AZPaul3 Member Posts: 8685 From: Phoenix Joined: Member Rating: 6.1
Aren't AI and algorithms two different things?

We don't know yet. What will achieve a sentient AI? Will we need a new technology (biofilms, qubits), or will an AI arise from the synergy of billions upon billions of algorithms etched onto silicon chips, all processed on massive beds of neural nets?
Would AI ever get depressed and commit suicide?

The depression, yes. Marvin the Paranoid Android shows us that. And robot suicide is well within their family makeup from very early in their evolution.

Marvin the Paranoid Android - Wikipedia
Robot is gifted with intelligence, immediately commits suicide

Eschew obfuscation. Habituate elucidation.
riVeRraT Member (Idle past 712 days) Posts: 5788 From: NY USA Joined:
Marvin and Fry are not real.
True sentience in a robot cannot be reached unless we give it free will, like God gave us.
nwr Member Posts: 6487 From: Geneva, Illinois Joined: |
True sentience in a robot cannot be reached unless we give it free will

And that's not likely to happen.

Fundamentalism - the anti-American, anti-Christian branch of American Christianity
jar Member (Idle past 135 days) Posts: 34140 From: Texas!! Joined: |
For humans or robots.
nwr Member Posts: 6487 From: Geneva, Illinois Joined: |
For humans or robots.

LOL. My comment was, of course, intended to be about robots. Whether humans have free will is much debated, but let's try to avoid opening that can of worms.

Fundamentalism - the anti-American, anti-Christian branch of American Christianity
AnswersInGenitals Member (Idle past 447 days) Posts: 673 Joined:
Would AI ever get depressed and commit suicide?

We already build systems, both smart and dumb, that are designed to commit suicide, or as we prefer to say, self-destruct. This is often done in weapons systems to keep their capabilities from falling into enemy hands. But also, many software algorithms have a module designed so that if there is an improper attempt to activate them they don’t just say “access denied” but initiate a routine to erase themselves. I worked on several satellite systems that, as they neared end of life, would de-orbit into deep-ocean reentry, i.e., would drown themselves, to assure the technology could not be recovered by unfriendlies (i.e., Russia).

Almost all biological cells, from bacteria to the cells in our bodies, have a built-in genetic routine to self-destruct through the process of apoptosis (Greek for “falling off”). So, when one of our cells is infected by a bacterium or virus or becomes precancerous, it has the ability to detect that its reproductive process is screwed up, hijacked by the invader, and it doesn’t just destroy itself, but also sends out chemical signals, cytokines, to attract leucocytes to come over and clean up the debris. This is happening to thousands of cells in each of us at all times.

Imagine if our society worked that way: each of us would be testing ourselves several times a day for Covid-19 infection, and as soon as we found we were infected we would down a dose of strychnine, but not before calling the suicide hot line to come pick up the body for incineration. This would certainly quell the spread of Covid-19, but since we are constantly being invaded by hundreds of pathogens, it would also quell the existence of humanity. Our cells, of which we have about 30 trillion and which can be replaced within hours, are more expendable.

So, yes, an advanced AI/android society, optimized for survival of the society itself, would certainly have latent suicidal tendencies. And as with our technological systems, biological cells, and those software routines, there would most probably be frequent unintended incidents of self-destruction. There might even be Jim Jones and Heaven’s Gate style mass suicides.
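The apoptosis pattern described above — detect your own corruption, erase yourself, and signal neighbors to clean up — can be sketched in a few lines of Python. Everything here (the `Cell` class, the checksum-based integrity test, the "cleanup-request" signal) is a made-up illustration of the pattern, not any real system:

```python
class Cell:
    """Toy model of software apoptosis: a component that detects its own
    corruption, destroys its payload rather than run compromised, and
    emits a signal for others to clean up (the 'cytokine' analogy)."""

    def __init__(self, payload, checksum):
        self.payload = payload
        self.checksum = checksum   # expected integrity value for the payload
        self.alive = True
        self.signals = []          # stand-in for outgoing cytokine messages

    def integrity_ok(self):
        # Crude integrity check: does the payload still sum to the checksum?
        return sum(self.payload) == self.checksum

    def tick(self):
        if self.alive and not self.integrity_ok():
            self.payload = []                        # erase itself
            self.alive = False
            self.signals.append("cleanup-request")   # attract the 'leucocytes'

cell = Cell(payload=[1, 2, 3], checksum=6)
cell.tick()                 # healthy cell: nothing happens
cell.payload.append(99)     # an 'infection' corrupts the payload
cell.tick()
print(cell.alive, cell.signals)  # prints: False ['cleanup-request']
```

The key design point, as with real apoptosis, is that the destruction is initiated locally by the compromised component itself, not ordered from outside — which is also why such routines can misfire and self-destruct on a false alarm.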
AZPaul3 Member Posts: 8685 From: Phoenix Joined: Member Rating: 6.1

For humans or robots. LOL. My comment was, of course, intended to be about robots.

{reaching for can opener}

Since he made it a statement instead of a question, that allows me to take it as meaning both. We can't hand out sentience to silicon any more than some god could with carbon.

Eschew obfuscation. Habituate elucidation.
riVeRraT Member (Idle past 712 days) Posts: 5788 From: NY USA Joined: |
Self destruct sequences are not "suicide" and are not initiated by depression.
AZPaul3 Member Posts: 8685 From: Phoenix Joined: Member Rating: 6.1 |
So says the human all pink and squishy on the inside. Do you do robot therapy on the side? How do you know what motivates the silicon mind?
Eschew obfuscation. Habituate elucidation.
Copyright 2001-2023 by EvC Forum, All Rights Reserved