Understanding through Discussion


Author Topic:   AI - Do humans think algorithmically?

Message 3 of 53 (887208)
07-24-2021 12:17 PM
Reply to: Message 1 by Straggler
07-24-2021 7:06 AM

Two standard jokes/comments about AI:
  1. The problem with artificial intelligence is that there's nothing in nature to pattern it after.

  2. Cartoon of aliens leaving Earth in disgust, reporting back, "No evidence of intelligent life found."

It's admittedly been decades since I've thought about this problem, so what I might have to say is old-school.

I read the BYTE book, "The Brains of Men and Machines", back when it was new (1981). The author's model for how the human brain works was a hierarchical structure. At the lowest level, in the muscles, was logical circuitry for opposing muscle pairs, such that when one muscle was contracting, the opposing muscle would be switched off. (That reflex can be overridden; also, when you are working out it is advised that you exercise both groups (eg, biceps and triceps), since overdeveloping the one can result in injury to the other.) Starting from the highest level in the central nervous system, you begin with the notion of doing something, which gets translated through various levels into the actual signals for doing that something. "Muscle memory" is not in the muscles, but rather at those lower levels in the brain, which need to be programmed to make those motions "automatic". The old truism that we only use 10% of our brains is utterly false (though we do only use a small fraction of our brain's potential), since so much of our brains forms those lower levels in that hierarchy.

The take-away of that is that the human brain is massively parallel: multiple processors, multiple data (denoted MPMD; roughly what Flynn's taxonomy calls MIMD), compared to the classic computer, which is single processor, single data (SPSD, or SISD in Flynn's terms). Two entirely different models, kind of like trying to achieve flight by constructing planes with flapping wings -- won't work for us, so we have to try a different approach (ie, fixed-wing, or even rotary-wing (BTW, helicopters don't work like you might think they do)). We can do SPMD (single program, multiple data; very popular) or even a limited form of MPMD, but we're not smart enough to properly divide the work up amongst too many processors, so throwing more and more processors at a problem doesn't translate into doing more and more work; most of those extra processors just sit there idle.
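Those diminishing returns from piling on processors have a well-known quantitative form, Amdahl's law (my gloss, not something named above): whatever fraction of the work is stuck being serial caps the speedup, no matter how many processors you add. A minimal sketch, with an illustrative 95%-parallelizable workload:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Maximum speedup when only part of a task can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, the curve flattens out fast:
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 2 -> 1.9, 8 -> 5.9, 64 -> 15.4, 1024 -> 19.6  (limit: 20x)
```

The 5% serial remainder means the speedup can never exceed 20x, which is why those extra processors end up sitting idle.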

BTW, our massively parallel processing power can explain "intuition". And since women's brains are better developed for parallel processing that could help to explain "feminine intuition." Or not, or whatever. Just a parallelly processed thought.

Others have remarked that computers are very good at tasks that humans find daunting (eg, massively long strings of arithmetical calculations, or searching through mountains of data to find matches or patterns), and yet they are sorely vexed trying to do what any human child a few years old can do easily: understand a simple sentence in its native language.


Back when international communications were still primitive and young gymnast Olga Korbut was still an international Olympic star (so in or around the 70's), a Bob Hope special set up a special TV link for him to interview her -- I remember it well. He tried to joke with her, but it didn't quite work. He had just published a book about his travels in the Soviet Union, so he remarked to her how impressed he was with the intelligence of the children he had met, since at such a tender age they could speak Russian (a very daunting task for any American adult such as himself, no?). I remember her, not understanding the joke, nervously looking at her interpreter for some kind of explanation.

A few years later, Bob Hope named our first dog, a Sheltie/Husky mix. At first he looked like a Husky, but then all the fluff fell off to reveal his collie appearance, though we named him before that reveal. We had wanted to get his sibling without foot markings whom we called "Barefoot", but he had been taken and this one had markings on all four feet. We tried various names but none seemed to fit. Then on a Bob Hope TV special about his many USO tours, he remarked that it was time for him to put on his mukluks again and go to Greenland. That was the moment that Mukluk got his name.


Regarding neural networks (the implementation of which I have never understood), they are supposed to be great at learning pattern recognition, though like young children you have to be very careful about what you tell them.

The case in point was a neural net that was trained to spot tanks in forests (something they really could have used in Operation Market Garden). They taught it with photo after photo and it learned extremely well. Then they fed it real-life photos and it failed miserably. The problem was that the training photos with the tanks had been taken on a sunny day and the ones without the tanks on a cloudy day (or the other way around), so the neural net learned the wrong lesson: it was detecting the weather, not the tanks.
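The tank story can be reproduced in miniature with synthetic data (this is my own toy reconstruction, not the actual experiment; the numbers are invented). Each "photo" is reduced to a single feature, mean brightness, and the "classifier" is just a brightness threshold. Because every training tank photo is sunny, the threshold learns the weather, and accuracy collapses once weather and tanks are decorrelated:

```python
import random

random.seed(0)

def make_photo(has_tank, sunny):
    # A "photo" reduced to one feature: mean brightness.
    # A tank adds a tiny signal; sunshine adds a large one.
    base = 0.5 + (0.02 if has_tank else 0.0) + (0.3 if sunny else -0.3)
    return base + random.gauss(0, 0.05)

# Biased training set: every tank photo was taken on a sunny day.
train = [(make_photo(t, sunny=t), t) for t in (True, False) for _ in range(200)]

# "Training": split the brightness range at the overall mean.
threshold = sum(x for x, _ in train) / len(train)
classify = lambda x: x > threshold

train_acc = sum(classify(x) == t for x, t in train) / len(train)

# Unbiased test set: weather is now independent of tanks.
test = [(make_photo(t, sunny=random.random() < 0.5), t)
        for t in (True, False) for _ in range(200)]
test_acc = sum(classify(x) == t for x, t in test) / len(test)

print(f"train accuracy: {train_acc:.0%}")  # near-perfect
print(f"test accuracy:  {test_acc:.0%}")   # near chance
```

Like the young children mentioned above, the learner faithfully absorbed exactly what it was shown -- which was the wrong thing.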

Another word for a human algorithm is bureaucracy. Or at least rules, regulations, laws, etc, which are meant to cover all possible situations. And which fail when some new situation arises.

For example, the USAF's regulation about cross-training was that after a certain number of months of service you would qualify. I was one semester short of completing my BS in Computer Science and two years short of the end of my enlistment, right within the window of that regulation, and the Air Force had paid for most of my degree. But when I tried to cross over from computer hardware repairman to programmer, they stopped me dead in my tracks: the regs had been changed. That reg had originally been written with a 4-year initial enlistment in mind, and I was a six-year enlistee, a new deal not thought of in the writing of those regs. So they had to rewrite the regs and I couldn't qualify. So the Air Force didn't want any return on its investment in my education. Fine, I just pursued the rest of my military career in the reserves.

The point is that we humans are imperfect and our rules (AKA "algorithms") are even more imperfect. That is why we must continually tweak and correct them.
So when you construct an algorithm, you must think of all the possible situations and have a solution for each and every one of them, which is virtually impossible.

That is the whole reason for AI: to build a system that can figure out those unforeseen situations all on its own. But that's not easy to design.

Plus we have no way of foreseeing what solutions those AI systems will come up with. Such as Skynet.

Those who fail to learn the lessons of science fiction are doomed to live them.


Replies to this message:
 Message 4 by anglagard, posted 07-24-2021 1:50 PM dwise1 has not replied
