

Understanding through Discussion




Thread Details

  
Author Topic:   AI - Do humans think algorithmically?
Straggler
Member (Idle past 91 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 1 of 53 (887206)
07-24-2021 7:06 AM


What is the current state of AI, and of thinking on how humans think? Do we think algorithmically? Is the difference between a human brain and an artificial neural network simply one of degree and complexity, with artificial “brains” destined to catch up and even overtake us as technology develops? Or is there more to human intelligence than computational power? If so, what?
Whilst AI is all around us now, from Siri and driverless cars to grandmaster-beating chess computers and algorithmically written novels, the notion of a truly thinking machine seems as far away as ever. Why is that? What is missing, and when can we expect Skynet to achieve self-awareness?
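To make the question concrete: in the algorithmic sense, an artificial neural network "thinks" by running a fixed sequence of arithmetic steps, the same inputs always producing the same output. Below is a minimal, purely illustrative sketch of such a network (the weights are arbitrary, not from any trained model), to show what "computation" means in this debate:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum through a sigmoid.
    Every operation here is a fixed, deterministic arithmetic step."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    """A two-layer toy network: two hidden neurons feed one output.
    Weights are arbitrary illustrative values, not a trained model."""
    hidden = [
        neuron(inputs, [0.5, -0.6], 0.1),
        neuron(inputs, [-0.3, 0.8], -0.2),
    ]
    return neuron(hidden, [1.0, 1.0], -0.5)

output = tiny_network([0.9, 0.4])
print(output)  # deterministic: identical inputs always yield this value
```

The "difference of degree" position holds that human cognition is, at bottom, an enormously scaled-up version of this same kind of process; the opposing view is that no amount of such arithmetic adds up to understanding.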

Replies to this message:
 Message 2 by nwr, posted 07-24-2021 9:26 AM Straggler has replied
 Message 3 by dwise1, posted 07-24-2021 12:17 PM Straggler has not replied
 Message 20 by riVeRraT, posted 08-10-2021 2:32 PM Straggler has not replied

  
Straggler
Member (Idle past 91 days)
Posts: 10333
From: London England
Joined: 09-30-2006


(1)
Message 6 of 53 (887244)
07-26-2021 5:33 PM
Reply to: Message 2 by nwr
07-24-2021 9:26 AM


The consensus seems to be that things like self-awareness and more human modes of thought are simply a matter of complexity and processing power. “Emergent properties” seems to be the phrase used to express the assumption that these things will simply occur once enough complexity and processing power are present.
It’s interesting that nobody here seems to think that AI is destined to reach these things on that basis.

This message is a reply to:
 Message 2 by nwr, posted 07-24-2021 9:26 AM nwr has replied

Replies to this message:
 Message 9 by nwr, posted 07-26-2021 6:31 PM Straggler has not replied
 Message 10 by Tanypteryx, posted 07-26-2021 9:53 PM Straggler has replied
 Message 41 by Son Goku, posted 10-01-2021 8:12 PM Straggler has not replied

  
Straggler
Member (Idle past 91 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 11 of 53 (887251)
07-27-2021 1:53 AM
Reply to: Message 10 by Tanypteryx
07-26-2021 9:53 PM


So, to take this a bit further, what is missing and how can we try to add it in to set us on the path to actual successful AI?
Consider something as “simple” as a mosquito. It barely has a brain at all, yet it navigates a potentially hostile world quite successfully on sensory input and an instinct for self-preservation, in ways that I don’t think even the most sophisticated current AI can match.
If we take sensory input, self preservation, goals like survival and procreation and then layer complexity and processing power over those do we start to build up to more human like intelligence?
Is a sense of self (and ultimately self awareness) a logical extrapolation to the goal of self preservation?
In more complex social animals (dogs, apes, etc.), is not just a sense of self but also a sense of other “minds”, with their own competing or cooperative agency, essential for navigating the social environment?
Can we look at different stages of brain complexity and processing power in different animals and see the sort of things that drive the kind of intelligence we might hope to achieve with AI (self awareness, possession of a theory of mind) and learn from those the direction in which to take AI?
Can we look at what evolution has done over millennia and shortcut it to create intelligent machines?
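The "mosquito-grade" baseline described above can be sketched as a trivial sense-act loop: no world model, no self-awareness, just sensory input mapped to action by a fixed self-preservation rule. Everything here (the one-dimensional world, the `sense`/`act` names, the danger radius) is hypothetical and illustrative, not taken from any real AI framework:

```python
import random

def sense(position, threats):
    """The agent's only 'sense': distance to the nearest threat."""
    return min(abs(position - t) for t in threats)

def act(position, threats, danger_radius=2):
    """Self-preservation rule: flee nearby threats, otherwise wander.
    A fixed reflex, with no memory and no model of the world."""
    nearest = min(threats, key=lambda t: abs(position - t))
    if abs(position - nearest) <= danger_radius:
        # Flee: step directly away from the nearest threat.
        return position + (1 if position >= nearest else -1)
    # No threat close by: wander one random step.
    return position + random.choice([-1, 0, 1])

# One tick of the sense->act loop on a 1-D world.
threats = [3, 10]
position = act(4, threats)  # within danger radius of 3, so the agent flees
```

The open question in the post above is whether layering complexity on top of loops like this one (goals, memory, models of other agents) eventually yields something human-like, or whether something categorically different is required.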

This message is a reply to:
 Message 10 by Tanypteryx, posted 07-26-2021 9:53 PM Tanypteryx has not replied

Replies to this message:
 Message 13 by nwr, posted 07-27-2021 3:02 PM Straggler has replied

  
Straggler
Member (Idle past 91 days)
Posts: 10333
From: London England
Joined: 09-30-2006


Message 14 of 53 (887413)
08-02-2021 5:57 PM
Reply to: Message 13 by nwr
07-27-2021 3:02 PM


Well, you say that, but….
Why do futurists keep thinking that computers with human-level intelligence and beyond are always just around the corner? Why do we get excited when a computer finally beats the best human chess player, and treat this as some sort of milestone? Why do we create robots that walk around on two legs and get excited when they can climb stairs, or robots that mimic human facial expressions? Why does the Turing test rely on being unable to distinguish between human and AI? Why do numerous research programmes seek to replicate the workings of the human brain by mimicking neural networks, or by pursuing the sort of ‘parallel processing’ found in organic brains? Research into AI theory of mind, multiple other examples, and the (often hyperbolic) claims of those in the field about the imminence of replicating human-like thought all suggest that we are very interested in machines that are capable of human-like cognition.

This message is a reply to:
 Message 13 by nwr, posted 07-27-2021 3:02 PM nwr has replied

Replies to this message:
 Message 15 by nwr, posted 08-02-2021 6:43 PM Straggler has not replied
 Message 16 by AZPaul3, posted 08-02-2021 8:22 PM Straggler has not replied

  


Copyright 2001-2023 by EvC Forum, All Rights Reserved
