

Understanding through Discussion


EvC Forum active members: 45 (9208 total)


Thread Details

  
Author Topic:   AI - Do humans think algorithmically?
Straggler
Member (Idle past 326 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 1 of 53 (887206)
07-24-2021 7:06 AM


What is the current state of AI and thinking on how humans think? Do we think algorithmically? Is the difference between a human brain and an artificial neural network simply one of degree and complexity, with artificial “brains” destined to catch up and even overtake us eventually as technology develops? Or is there more to human intelligence than computational power? If so, what?
Whilst AI is all around us now, from Siri to driverless cars, grandmaster-beating chess computers to algorithmically written novels, the notion of a truly thinking machine seems as far away as ever. Why is that? What is missing and when can we expect Skynet to achieve self awareness?

Replies to this message:
 Message 2 by nwr, posted 07-24-2021 9:26 AM Straggler has replied
 Message 3 by dwise1, posted 07-24-2021 12:17 PM Straggler has not replied
 Message 20 by riVeRraT, posted 08-10-2021 2:32 PM Straggler has not replied

  
nwr
Member
Posts: 6484
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 9.2


(2)
Message 2 of 53 (887207)
07-24-2021 9:26 AM
Reply to: Message 1 by Straggler
07-24-2021 7:06 AM


What is the current state of AI ...
They have the "artificial" part working.
... and thinking on how humans think?
Muddled.
Do we think algorithmically?
Some people do some of the time. But, most of the time -- no.
Is the difference between a human brain and an artificial neural network simply one of degree and complexity, with artificial “brains” destined to catch up and even overtake us eventually as technology develops? Or is there more to human intelligence than computational power? If so, what?
The neural system is tuned for connectedness to reality. Computation is tuned for abstractness. They are poles apart.
(Just my opinions.)

Fundamentalism - the anti-American, anti-Christian branch of American Christianity

This message is a reply to:
 Message 1 by Straggler, posted 07-24-2021 7:06 AM Straggler has replied

Replies to this message:
 Message 6 by Straggler, posted 07-26-2021 5:33 PM nwr has replied

  
dwise1
Member
Posts: 6077
Joined: 05-02-2006
Member Rating: 7.3


(1)
Message 3 of 53 (887208)
07-24-2021 12:17 PM
Reply to: Message 1 by Straggler
07-24-2021 7:06 AM


Two standard jokes/comments about AI:
  1. The problem with artificial intelligence is that there's nothing in nature to pattern it after.
  2. Cartoon of aliens leaving Earth in disgust, reporting back, "No evidence of intelligent life found."
It's admittedly been decades since I've thought about this problem, so what I might have to say is old-school.
I read the BYTE book, "The Brains of Men and Machines", back when it was new (1981). The author's model for how the human brain works was a hierarchical structure. At the lowest level, in the muscles, was logical circuitry for opposing muscles, such that when one muscle was contracting, the opposing muscle would be switched off (this can be overridden; also, when you are working out it is advised that you exercise both groups (eg, biceps and triceps), since overdeveloping one can result in injury to the other). Starting from the highest level in the central nervous system, you begin with the notion of doing something, which gets translated through various levels into the actual signals for doing that something. "Muscle memory" is not in the muscles, but rather at those lower levels in the brain that need to be programmed to make those motions "automatic". The old truism that we only use 10% of our brains is utterly false (though we do only use a small fraction of our brain's potential), since so much of our brains form those lower levels in that hierarchy.
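The opposing-muscle switching described above can be sketched as code. This is a toy illustration only, not anything from the book being cited; the function name and the co-contraction level are invented for the example.

```python
def antagonist_signals(flexor_cmd: float, override: bool = False) -> tuple[float, float]:
    """Return (flexor, extensor) activation levels in [0, 1].

    By default the extensor is switched off while the flexor contracts
    (reciprocal inhibition); with override=True both co-contract,
    as when you deliberately brace a joint.
    """
    flexor = max(0.0, min(1.0, flexor_cmd))  # clamp the command to [0, 1]
    extensor = 0.3 if override else 0.0      # 0.3 is an arbitrary bracing level
    return flexor, extensor

print(antagonist_signals(0.8))        # extensor inhibited
print(antagonist_signals(0.8, True))  # co-contraction under override
```

The point of the sketch is that the low-level reflex needs no input from the higher levels: inhibition of the antagonist is wired in, and the "notion of doing something" only supplies the command.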
The takeaway of that is that the human brain is massively parallel (multiple instruction, multiple data -- MIMD, in Flynn's taxonomy), compared to the classic computer, which is single instruction, single data (SISD). Two entirely different models, kind of like trying to achieve flight by constructing planes with flapping wings -- won't work for us, so we have to try a different approach (ie, fixed-wing, or even rotary-wing (BTW, helicopters don't work like you might think they do)). We can do SIMD (very popular) or even a limited form of MIMD (but we're not smart enough to properly divide the work up amongst too many processors, so throwing more and more processors at a problem doesn't translate into doing more and more work; most of those extra processors just sit there idle).
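The "extra processors mostly sit idle" point has a standard quantitative form, Amdahl's law: if only a fraction p of a task can be parallelized, then n processors give a speedup of at most 1 / ((1 - p) + p/n). A minimal sketch:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, the serial 5% caps the gain:
for n in (2, 16, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

With p = 0.95 the speedup approaches but can never exceed 20x no matter how many processors you add -- the same observation, in a formula.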
BTW, our massively parallel processing power can explain "intuition". And since women's brains are better developed for parallel processing that could help to explain "feminine intuition." Or not, or whatever. Just a parallelly processed thought.
Others have remarked that computers are very good at tasks that humans find daunting (eg, massively long strings of arithmetical calculations or searching through mountains of data to find matches or patterns), and yet they are sorely vexed by what any human child of a few years old can do easily: understand a simple sentence in its native language.
{SIDE NOTE:
Back when international communications were still primitive and young gymnast Olga Korbut was still an international Olympics star (so in or around the 70's), a Bob Hope special set up a special TV link for him to interview her -- I remember it well. He tried to joke with her, but it didn't quite work. He had just published a book about his travels in the Soviet Union, so he remarked to her how impressed he was with the intelligence of the children he had met, since at such a tender age they could speak Russian (a very daunting task for any American adult such as himself, ну?). I remember her, not understanding the joke, nervously looking at her interpreter for some kind of explanation.
A few years later, Bob Hope named our first dog, a Sheltie/Husky mix. At first he looked like a Husky, but then all the fluff fell off to reveal his collie appearance, though we named him before that reveal. We had wanted to get his sibling without foot markings whom we called "Barefoot", but he had been taken and this one had markings on all four feet. We tried various names but none seemed to fit. Then on a Bob Hope TV special about his many USO tours, he remarked that it was time for him to put on his mukluks again and go to Greenland. That was the moment that Mukluk got his name.
}
Regarding neural networks (the implementation of which I have never understood), they are supposed to be great at learning pattern recognition, though like young children you have to be very careful about what you tell them.
A case in point was a neural net that was trained to spot tanks in forests (something they really would have needed in Operation Market Garden). They taught it with photo after photo and it learned extremely well. Then they fed it real-life photos and it failed miserably. The problem was that the training photos with the tanks were taken on a sunny day and the ones without the tanks were taken on a cloudy day (or the other way around), so the neural net learned the wrong lessons.
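The tank anecdote is the classic example of a model latching onto a spurious feature. A toy reconstruction (all numbers are synthetic, and a plain brightness threshold stands in for the real network):

```python
def train_threshold(samples: list[tuple[float, bool]]) -> float:
    """Learn a brightness cutoff from (mean_brightness, has_tank) pairs."""
    tank = [b for b, t in samples if t]
    empty = [b for b, t in samples if not t]
    return (min(tank) + max(empty)) / 2  # midpoint between the two clusters

# Training photos: every tank shot happens to be sunny (bright),
# every empty-forest shot happens to be cloudy (dark).
train = [(0.80, True), (0.85, True), (0.30, False), (0.35, False)]
cutoff = train_threshold(train)

def classify(brightness: float) -> bool:
    return brightness > cutoff  # a "tank detector" that only detects sunshine

print(classify(0.90))  # sunny empty field: wrongly flagged as a tank
print(classify(0.30))  # cloudy photo with a tank: wrongly cleared
```

The learner fits the training data perfectly, yet the feature it found (weather) has nothing to do with the concept it was supposed to learn.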
Another word for a human algorithm is bureaucracy. Or at least rules, regulations, laws, etc, which are meant to cover all possible situations. And which fail when some new situation arises.
For example, the USAF's regulations about cross-training was that after a certain number of months you would qualify. I was one semester short of completing my BS Computer Science two years short of the end of my enlistment, right within the window of that regulation. The Air Force had paid for most of my degree. So when I tried to cross over from computer hardware repairman to programmer, they stopped me dead in my tracks. The regs had been changed. That reg had originally been written with a 4-year initial enlistment in mind. I was a six-year enlistee, a new deal not thought of in the writing of those regs. So they had to rewrite the regs and I couldn't qualify. So the Air Force didn't want any return on its investment in my education. Fine, I just pursued the rest of my military career in the reserves.
The point is that we humans are imperfect and our rules (AKA "algorithms") are even more imperfect. That is why we must continually tweak them, correct them.
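The regulation story is a rule table hitting an input its authors never imagined. Sketched as code (the table entries are hypothetical, not the actual USAF regs):

```python
# Hypothetical eligibility rules written with only 4-year enlistments in mind.
RULES = {
    "4-year enlistment": "eligible to cross-train after 24 months",
}

def cross_training_eligibility(enlistment: str) -> str:
    try:
        return RULES[enlistment]
    except KeyError:
        # The unforeseen situation: nobody wrote a rule for this case.
        return "undefined -- the regs must be rewritten"

print(cross_training_eligibility("4-year enlistment"))
print(cross_training_eligibility("6-year enlistment"))  # falls through
```

However many entries you add to the table, the structure is the same: the rules are exhaustive only over the situations someone thought of in advance.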
 
So when you construct an algorithm, you must think of all the possible situations and have a solution for each and every one of them. Which is virtually impossible.
That is the whole reason for AI, to build a system that can figure out those unforeseen situations all on its own. But that's not easy to design.
Plus we have no way of foreseeing what solutions those AI systems will come up with. Such as Skynet.
 
Those who fail to learn the lessons of science fiction are doomed to live them.

This message is a reply to:
 Message 1 by Straggler, posted 07-24-2021 7:06 AM Straggler has not replied

Replies to this message:
 Message 4 by anglagard, posted 07-24-2021 1:50 PM dwise1 has not replied

  
anglagard
Member (Idle past 1097 days)
Posts: 2339
From: Socorro, New Mexico USA
Joined: 03-18-2006


(1)
Message 4 of 53 (887209)
07-24-2021 1:50 PM
Reply to: Message 3 by dwise1
07-24-2021 12:17 PM


Why AI doesn't Work as Advertised
dwise1 writes:
Others have remarked that computers are very good at tasks that humans find daunting (eg, massively long strings of arithmetical calculations or searching through mountains of data to find matches or patterns), and yet they are sorely vexed by what any human child of a few years old can do easily: understand a simple sentence in its native language.
This is where I first dealt with AI: a senior project on grammar checkers, concerning their abilities and definite limitations. I have had a working hypothesis for decades concerning the triune brain and how it functions compared to the AI "brain."
The big difference is that the human mind communicates not just electrically but also chemically. These functions result in emotions. Some humans lack emotion and therefore have a very difficult time achieving motivation. Without motivation, computers can beat everyone at chess, find new solutions to Euclidean proofs, and write Harlequin romances, but they can't discover relativity or quantum physics, or write Les Misérables or War and Peace. In addition, isn't AI of a completely different nature? Perhaps a form of intelligence, but certainly not human intelligence, not even close.
I know this will upset CS folks; please show me where I am wrong, as I'm sure you will.

The problem with knowing everything is learning nothing.

If you don't know what you're doing, find someone who does, and do what they do.

Republican = death


This message is a reply to:
 Message 3 by dwise1, posted 07-24-2021 12:17 PM dwise1 has not replied

  
Tangle
Member
Posts: 9583
From: UK
Joined: 10-07-2011
Member Rating: 6.7


Message 5 of 53 (887210)
07-24-2021 6:09 PM


Heard
Heard somebody say that the toughest intellectual challenges we can think of - like playing chess or Go - are easy for AI, but ask it to do what any baby can do, like tell a fox from a cat, and it's fucked.
Edited by Tangle, : No reason given.

Je suis Charlie. Je suis Ahmed. Je suis Juif. Je suis Parisien. I am Mancunian. I am Brum. I am London. I am Finland. Soy Barcelona

"Life, don't talk to me about life" - Marvin the Paranoid Android

"Science adjusts its views based on what's observed.
Faith is the denial of observation so that Belief can be preserved."
- Tim Minchin, in his beat poem, Storm.


Replies to this message:
 Message 7 by AnswersInGenitals, posted 07-26-2021 5:58 PM Tangle has replied

  
Straggler
Member (Idle past 326 days)
Posts: 10333
From: London England
Joined: 09-30-2006


(1)
Message 6 of 53 (887244)
07-26-2021 5:33 PM
Reply to: Message 2 by nwr
07-24-2021 9:26 AM


The consensus seems to be that things like self awareness and more human modes of thought are simply a matter of complexity and processing power. The term “emergent properties” seems to be the phrase used to express the assumption that these things will just occur if enough complexity/processing-power is present.
It’s interesting that nobody here seems to think that AI is destined to reach these things on that basis.

This message is a reply to:
 Message 2 by nwr, posted 07-24-2021 9:26 AM nwr has replied

Replies to this message:
 Message 9 by nwr, posted 07-26-2021 6:31 PM Straggler has not replied
 Message 10 by Tanypteryx, posted 07-26-2021 9:53 PM Straggler has replied
 Message 41 by Son Goku, posted 10-01-2021 8:12 PM Straggler has not replied

  
AnswersInGenitals
Member (Idle past 411 days)
Posts: 673
Joined: 07-20-2006


Message 7 of 53 (887245)
07-26-2021 5:58 PM
Reply to: Message 5 by Tangle
07-24-2021 6:09 PM


Re: Heard
Show a baby/infant a fox or cat, or a picture of a fox or cat, and they will say "doggie". It takes a bit of training before they say "Urocyon cinereoargenteus" or "Felis catus".

This message is a reply to:
 Message 5 by Tangle, posted 07-24-2021 6:09 PM Tangle has replied

Replies to this message:
 Message 8 by Tangle, posted 07-26-2021 6:14 PM AnswersInGenitals has replied

  
Tangle
Member
Posts: 9583
From: UK
Joined: 10-07-2011
Member Rating: 6.7


(1)
Message 8 of 53 (887247)
07-26-2021 6:14 PM
Reply to: Message 7 by AnswersInGenitals
07-26-2021 5:58 PM


Re: Heard
But babies learn to distinguish a cat from a dog before they can add up, while AI can beat a chess grandmaster and still can't do the simple things a baby can do. We're a very long way off.

Je suis Charlie. Je suis Ahmed. Je suis Juif. Je suis Parisien. I am Mancunian. I am Brum. I am London. I am Finland. Soy Barcelona

"Life, don't talk to me about life" - Marvin the Paranoid Android

"Science adjusts its views based on what's observed.
Faith is the denial of observation so that Belief can be preserved."
- Tim Minchin, in his beat poem, Storm.


This message is a reply to:
 Message 7 by AnswersInGenitals, posted 07-26-2021 5:58 PM AnswersInGenitals has replied

Replies to this message:
 Message 12 by AnswersInGenitals, posted 07-27-2021 1:48 PM Tangle has not replied

  
nwr
Member
Posts: 6484
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 9.2


(4)
Message 9 of 53 (887248)
07-26-2021 6:31 PM
Reply to: Message 6 by Straggler
07-26-2021 5:33 PM


The consensus seems to be that things like self awareness and more human modes of thought are simply a matter of complexity and processing power.
The consensus can be wrong.
It was already Turing's idea, back in 1950, that computation (or logic) could do the job. All it needed was more compute power and more memory. Successful AI was just around the corner.
So here we are, 70 years later. We have far more memory and compute power than Turing could have imagined. But successful AI is still "just around the corner".
There is a huge religion of "logic worship" out there. I am an atheist with respect to that religion. I see logic as a useful tool, but nothing more than that.

Fundamentalism - the anti-American, anti-Christian branch of American Christianity

This message is a reply to:
 Message 6 by Straggler, posted 07-26-2021 5:33 PM Straggler has not replied

  
Tanypteryx
Member
Posts: 4597
From: Oregon, USA
Joined: 08-27-2006
Member Rating: 9.1


(4)
Message 10 of 53 (887250)
07-26-2021 9:53 PM
Reply to: Message 6 by Straggler
07-26-2021 5:33 PM


The term “emergent properties” seems to be the phrase used to express the assumption that these things will just occur if enough complexity/processing-power is present.
"Emergent properties" is the term that pops into my mind when I think about human consciousness, self awareness, or the mind. To me, "enough complexity" is too vague a term to provide a satisfactory description.
I think that sensory input is largely ignored, or given far less credit for its influence on those emergent properties. Teaching computers to see has faced major hurdles, and the results are far more rudimentary than the way human babies process visual input, plus all those other senses: taste, touch, hearing and smell. Much of the input from these senses is processed in non-infants without our conscious awareness, and what we are aware of has been highly processed and blended to give us a seamless experience of our surroundings.
Our technology is a long way from developing the algorithms to describe all the subtle variations of sensory input and how they are blended into conscious experience of the world. Our inventions cannot even approximate the sensory input and processing power of the simplest mammals or birds.

What if Eleanor Roosevelt had wings? -- Monty Python

One important characteristic of a theory is that it has survived repeated attempts to falsify it. Contrary to your understanding, all available evidence confirms it. --Subbie

If evolution is shown to be false, it will be at the hands of things that are true, not made up. --percy

The reason that we have the scientific method is because common sense isn't reliable. -- Taq


This message is a reply to:
 Message 6 by Straggler, posted 07-26-2021 5:33 PM Straggler has replied

Replies to this message:
 Message 11 by Straggler, posted 07-27-2021 1:53 AM Tanypteryx has not replied

  
Straggler
Member (Idle past 326 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 11 of 53 (887251)
07-27-2021 1:53 AM
Reply to: Message 10 by Tanypteryx
07-26-2021 9:53 PM


So, to take this a bit further, what is missing and how can we try to add it in to set us on the path to actual successful AI?
Consider something as “simple” as a mosquito. It barely has a brain at all, but it is able to navigate a potentially hostile world pretty successfully on sensory input and an instinct for self preservation. It navigates the world in ways that I don’t think the most sophisticated current AI can achieve.
If we take sensory input, self preservation, goals like survival and procreation and then layer complexity and processing power over those do we start to build up to more human like intelligence?
Is a sense of self (and ultimately self awareness) a logical extrapolation to the goal of self preservation?
In more complex social animals (dogs, apes, etc.), is not just a sense of self but also a sense of other “minds”, with their own competing or cooperative agency, essential for navigating the social environment?
Can we look at different stages of brain complexity and processing power in different animals and see the sort of things that drive the kind of intelligence we might hope to achieve with AI (self awareness, possession of a theory of mind) and learn from those the direction in which to take AI?
Can we look at what evolution has done over millennia and shortcut it to create intelligent machines?

This message is a reply to:
 Message 10 by Tanypteryx, posted 07-26-2021 9:53 PM Tanypteryx has not replied

Replies to this message:
 Message 13 by nwr, posted 07-27-2021 3:02 PM Straggler has replied

  
AnswersInGenitals
Member (Idle past 411 days)
Posts: 673
Joined: 07-20-2006


Message 12 of 53 (887255)
07-27-2021 1:48 PM
Reply to: Message 8 by Tangle
07-26-2021 6:14 PM


Re: Heard
And how long did it take evolution to develop that capability?

This message is a reply to:
 Message 8 by Tangle, posted 07-26-2021 6:14 PM Tangle has not replied

  
nwr
Member
Posts: 6484
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 9.2


Message 13 of 53 (887256)
07-27-2021 3:02 PM
Reply to: Message 11 by Straggler
07-27-2021 1:53 AM


Can we look at what evolution has done over millennia and shortcut it to create intelligent machines?
We won't, because we don't want that kind of intelligent machine.
We want machines that think for us. We don't want machines that have a mind of their own.

Fundamentalism - the anti-American, anti-Christian branch of American Christianity

This message is a reply to:
 Message 11 by Straggler, posted 07-27-2021 1:53 AM Straggler has replied

Replies to this message:
 Message 14 by Straggler, posted 08-02-2021 5:57 PM nwr has replied

  
Straggler
Member (Idle past 326 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 14 of 53 (887413)
08-02-2021 5:57 PM
Reply to: Message 13 by nwr
07-27-2021 3:02 PM


Well you say that but….
Why do futurists keep thinking that computers with human level intelligence and beyond are always just around the corner? Why do we get excited when a computer finally beats the best human chess player, and treat this as some sort of milestone? Why do we create robots that walk around on two legs and get excited when they can climb stairs, or robots that mimic human facial expressions? Why does the Turing test rely on being unable to distinguish between human and AI? Why do numerous research programmes seek to replicate the workings of the human brain by mimicking neural networks, or by seeking the sort of ‘parallel processing’ found in organic brains? Research into AI theory of mind, and multiple other examples, plus the (often hyperbolic) claims of those in the field about the imminence of replicating human-like thought, all suggest that we are very interested in machines that are capable of human-like cognition.

This message is a reply to:
 Message 13 by nwr, posted 07-27-2021 3:02 PM nwr has replied

Replies to this message:
 Message 15 by nwr, posted 08-02-2021 6:43 PM Straggler has not replied
 Message 16 by AZPaul3, posted 08-02-2021 8:22 PM Straggler has not replied

  
nwr
Member
Posts: 6484
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 9.2


Message 15 of 53 (887414)
08-02-2021 6:43 PM
Reply to: Message 14 by Straggler
08-02-2021 5:57 PM


Why do futurists keep thinking that computers with human level intelligence and beyond are always just around the corner?
The conventional wisdom seems to be that intelligence is just logic; that thinking is just the use of logic.
The conventional wisdom is horribly wrong about this. But it is so deeply entrenched, that few people question it.

Fundamentalism - the anti-American, anti-Christian branch of American Christianity

This message is a reply to:
 Message 14 by Straggler, posted 08-02-2021 5:57 PM Straggler has not replied

  


Copyright 2001-2023 by EvC Forum, All Rights Reserved
