
Author Topic:   Is there any indication of increased intelligence over time within the Human species?
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 13 of 99 (232482)
08-12-2005 1:38 AM
Reply to: Message 12 by RAZD
08-11-2005 9:13 PM


Re: Which organisms have intelligence?
what is interesting about the jellyfish is whether they have the capability to store information, and if so where that occurs.
I've never seen neurons that DON'T modify their connection strengths due to experience... so that they can habituate to something like constant poking. That's gotta be the case for jellyfish.
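To make that concrete, here's a toy sketch (my own illustration, not anything from the thread; the function name and constants are made up): a single synapse whose effective strength is depressed by each repeated stimulus and slowly recovers with rest reproduces habituation to "constant poking."

```python
def habituating_synapse(stimuli, strength=1.0, depression=0.7, recovery=0.05):
    """Return the response to each stimulus (1 = poke, 0 = rest).

    Each poke elicits a response proportional to the current synaptic
    strength, then depresses the synapse; rest lets it recover.
    """
    responses = []
    for s in stimuli:
        if s:  # a poke: respond, then weaken the connection
            responses.append(strength)
            strength *= depression
        else:  # rest: drift back toward full strength
            strength = min(1.0, strength + recovery)
            responses.append(0.0)
    return responses

# Repeated poking yields progressively weaker responses -- habituation.
resp = habituating_synapse([1, 1, 1, 1, 1])
assert all(resp[i] > resp[i + 1] for i in range(len(resp) - 1))
```

The point of the sketch is only that habituation needs nothing more than an experience-dependent connection strength, which is the property jellyfish neurons would have to share.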
Also, what about plants? They store information by the manner in which they grow, right? They store information about the location of the sun. That's definitely adaptive behavior. Wherever the sun is, that's the direction in which they grow. Pretty good plan in my book.
Also, it seems to me that nobody's addressing artificial intelligence here. Deep Blue? Robots that navigate through rooms? It's not hard to program a robot to learn and develop new behavior patterns.
Just trying to get a better understanding of your thinking.
Thanks!
Ben

This message is a reply to:
 Message 12 by RAZD, posted 08-11-2005 9:13 PM RAZD has replied

Replies to this message:
 Message 20 by RAZD, posted 08-13-2005 9:58 AM Ben! has replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 14 of 99 (232484)
08-12-2005 1:58 AM
Reply to: Message 8 by jar
08-09-2005 6:43 PM


Re: Partially satisfying ancestors, best taken with a pinch of salt
Jar,
I'm replying to this post, but really replying to your OP as well as this post. I hope I can pull my thoughts together enough here.
Intelligence?
This is such a tough area... and personally, I think separating knowledge and intelligence is probably a bad move. But I don't know, it's a really hard question to ask. I'm not sure the terms can be formulated well enough.
But given your definition of intelligence,
as a working assumption I would describe intelligence as the capability to imagine a new way of performing a task.
The capability to imagine a new way of performing a task is certainly dependent on your knowledge. If you don't agree, I could spell it out explicitly.
Maybe you want to ask about cognitive abilities? But all cognitive abilities are dependent on knowledge as well.
Maybe you want to ask about the computational power of the brain. Actually, I think this is what you really want to know. Is the computational power of the brain dependent on (questions 1-4)? I think that's a good question. Let's assume the brain is a set of interconnected neural networks; I think a mathematical definition of computational power can be derived from that.
And I think Theus did a good job addressing this in post 6.
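The idea that computational power lives in the connectivity of a network can be made concrete with a classic textbook example (my own minimal sketch, with hand-picked weights; nothing here comes from the thread): a single threshold unit cannot compute XOR, but wiring two hidden units into a second layer can.

```python
def step(x):
    """Threshold unit: fires (1) iff its net input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one unit detects "a OR b", another "a AND b".
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    # Output: fire when OR is on but AND is off -- i.e., XOR.
    return step(h_or - h_and - 0.5)

# No single step unit can produce this truth table; the two-layer
# wiring can. The extra power comes from the connectivity, not the units.
assert [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```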
Comparisons
I think that's a very important point. Knowledge has increased, but was the first person to chip an edge on a rock less intelligent than, say, an atomic scientist?
In the way I reformulated your question (for the purpose of eliminating dependencies on knowledge), I think this is an empirical question that we just don't have the answer to. What has the evolution of the human brain looked like? Unfortunately, as they say, brains don't fossilize...
Was human success due to some increase in intelligence (once some threshold is passed) or to an increase in knowledge?
Now take the answer I just gave you, throw it away, and let me try to answer this non-scientifically.
Clearly knowledge is a big factor in our "success." A good question to ask then, is, "what is it that allows us to build and maintain our knowledge?" And, "was that dependency around when people were making arrows from stone?"
There's a smart group of people who believe that the development of language is what allowed us to increase our knowledge. Traditionally, we might say that language allowed us to communicate advances in technology, allowing later generations to continue developing instead of having to start from scratch, or being stuck with artifacts but no knowledge or understanding of how to make them.
However, maybe that's not the whole story. It's possible that language is what ALLOWS us to have abstract thought. By labeling things, we are able to abstract away details, and treat the labeled groups or labeled generalizations as simple objects themselves. In this way, we build up the ability for high-level symbolic thought.
Was this ability present in stone-chipping man? I don't know. It's been shown to be present in chimpanzees, but it doesn't look like it's used. That would seem to say it was present in a common ancestor, so it would be common in stone-chipping man.
I'm tired. I'm cutting this short to go to sleep.
Thanks,
Ben

This message is a reply to:
 Message 8 by jar, posted 08-09-2005 6:43 PM jar has not replied

Replies to this message:
 Message 15 by JavaMan, posted 08-12-2005 8:17 AM Ben! has replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 16 of 99 (232563)
08-12-2005 9:48 AM
Reply to: Message 15 by JavaMan
08-12-2005 8:17 AM


Distinguishing between intelligence and knowledge: bad idea?
Hi JavaMan, thanks for the reply.
I'd like to take issue with your claim that knowledge and intelligence are equivalent.
I didn't say that they're equivalent, I said that making a distinction between them is probably a "bad move." It means, I don't think it's the best distinction for modelling / explaining the data. There's a big difference between saying this and saying they're equivalent.
we could imagine the novice being more intelligent than the expert. So what do we mean by more intelligent in this case? Do we mean that the novice has more knowledge about general problem-solving strategies than the expert? Or do we mean that the hard-wiring in the novice's brain makes him innately better at solving problems than the expert?
What justification do you have to say this is due to "hard-wiring" that makes him "innately better" at solving problems? How could you differentiate this in somebody so old? There are so many confounding factors in a brain that old. And guess what? A lot of those confounding factors can be classified as some kind of "knowledge".
On the other hand, our master carpenter is equally knowledgable in his own domain, but also has a reputation of coming up with imaginative solutions to construction problems. Would we be justified in saying the master carpenter was more intelligent?
Clearly, according to jar's definition, you are justified in saying this. But the real question is... is it a good way to model the world? In other words, does it hold any explanatory value? Predictive value? Isn't that how we should judge whether or not a model is good?
From an explanatory point of view, I think slapping the label "intelligent" here just labels many possible causes, causes that aren't related. For example, it could be due to:
- knowledge in another domain, and then using skills (problem-solving techniques, modelling techniques, cognitive skills) from that other domain to help solve problems in this domain.
- could be from practice, experience, or "innate" abilities to visualize.
- could be from practice, experience, or "innate" abilities related to working memory.
- could be that one has an interest in cultures and arts, and is able to apply what s/he's seen from that domain into the carpentry.
Really, this list can go on and on. There's TONS of ways this could happen.
All these things could manifest themselves in the example you gave. What's the use of labelling them all "intelligent"?
And because the root causes are all different, the predictions of what other things this person would be good at, or what things they wouldn't be good at, are all different as well. So this "intelligence" doesn't hold explanatory OR predictive power. Why use it?
My final point is that the general intelligence of individuals isn't necessarily increased by either their own, or their culture's accumulation of knowledge in particular academic domains. The kind of problems that neolithic man had to address would leave your average atomic scientist on the verge of starvation within a few days!
And... I don't get what your point is here. The difference here is one of knowledge, not intelligence... so why do you bring it up? I don't see how it addresses your point (or, for that matter, mine). There's all sorts of knowledge out there... ok. And?
Hope this all makes some sense and we can move forward off it.
Thanks!
Ben
Edited to change subtitle. Moose, I hope you're watchin' baby
This message has been edited by Ben, Friday, 2005/08/12 06:52 AM

This message is a reply to:
 Message 15 by JavaMan, posted 08-12-2005 8:17 AM JavaMan has replied

Replies to this message:
 Message 17 by JavaMan, posted 08-12-2005 7:28 PM Ben! has replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 18 of 99 (232810)
08-12-2005 8:46 PM
Reply to: Message 17 by JavaMan
08-12-2005 7:28 PM


Re: Distinguishing between intelligence and knowledge: bad idea?
JavaMan,
OK, I understand you better now. Thanks for working through that. I'm going to push my previous ideas a bit more; I hope this isn't simply redundant.
I don't quite understand why you have a problem with the word 'intelligent'. Within this thread we're using it on the understanding that what we mean by it is 'problem solving abilities'.
And my exact point is, 'problem solving abilities' are so dependent on knowledge, I can't make any sense of "problem solving abilities sans knowledge." Honestly. Right now I'm studying "Distributed Cognition", introduced by Dr. Ed Hutchins at UCSD. It's all about how, at the behavioral level, you can't separate knowledge, artifacts, culture, etc. out of the analysis of problem solving. Can't. We are "living cyborgs".
Is it such a leap, then, to suggest that some element of problem-solving ability might be hard-wired and that the effectiveness of this wiring varies across the human population?
Absolutely not. I think that it's probable. But hard-wiring manifests itself at the neurological level; I think it's a real stretch to try to find a direct link to the behavioral level. I can believe in hard-wiring that allows for faster neuronal firing, faster onset of long-term potentiation, etc. But promoting any of those directly up to "problem-solving abilities" seems to be... throwing out an all-inclusive term.
And this is how I get back to predictability. At the behavioral level, I think you get very little power out of the label "intelligence". It covers such a diverse range of neural and behavioral phenomena... for example, see the list I gave before. Maybe if you give "general" intelligence tests, ones that test basically ALL POSSIBLE ways to be "more intelligent", you'll get hits; but the hits will tell you very little about the special powers of any person; just that they have "something."
It is useful because it distinguishes a particular subset of cognitive abilities, i.e. 'problem solving skills' rather than 'object recognition', say, or 'language understanding'. It provides us with a useful category label for this set of skills.
Oops... I talked about this before reading your comment, but I think what I said above works.
I just think "intelligence" is a weak concept based on untenable premises. I think it's better to work with "intelligence" and "knowledge" together. This comes from the cognitive studies I've done here at UCSD, but also from a friend who teaches using the "interactive teaching" method, using "concepTests" instead of a classic lecture-style class. He says the classically "less intelligent" students aren't unteachable at all--in general, they just needed a different teaching style to engage them. He says intelligence is a bad way to measure a student, that things do not divide along those lines.
There are much more powerful ways to do it. Let's test based on cognitive-level tasks. Let's test based on knowledge. Heck, IQ tests even now have everything to do with knowledge. It's basically a strategy-based test. If you have experience with the strategies, you'll do great. If you don't, you get hammered. To move on... let's test using EEG and fMRI samplings.
As to explanatory and predictive power, suppose we want to investigate the question, 'How is it that some children in my class are better at solving problems than other children'. According to our 'hard-wired intelligence' model the reason why the children have differential problem solving abilities is because, to a certain extent, those skills are hard-wired and this limits their ability to learn and to apply learned knowledge. On the other hand our 'intelligence is knowledge' model suggests that the difference is due to differences in acquired knowledge.

This message is a reply to:
 Message 17 by JavaMan, posted 08-12-2005 7:28 PM JavaMan has replied

Replies to this message:
 Message 25 by JavaMan, posted 08-18-2005 8:09 AM Ben! has not replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 21 of 99 (233007)
08-13-2005 2:28 PM
Reply to: Message 20 by RAZD
08-13-2005 9:58 AM


Re: Which organisms have intelligence?
Hey RAZD,
I think your point is an excellent one. And I'm glad my question made you think for a bit--I've had to stop and think for days after reading some of your posts before, for sure.
As I mentioned to JavaMan in another post, I'm in a class about Distributed Cognition (DC)--a way to study cognition and mind which spreads over individuals and artifacts, both over space and time. I think your post really highlights the central concern of DC. That's cool, and I think it's a really important point. Sometimes analyzing things at the individual level gives us the wrong picture of reality, or doesn't even make sense. Anyway, I'll get back to your question in a bit.
I had originally proposed that the basic elements of intelligence would be: [removed for space]
(which I had expected some comments on regarding similarity to the scientific method)
Your formulation of intelligence is very... classical AI to me. I mean, it's very clean, engineered, and structured. Personally I'm more on the "AL" side; I believe that models that are {very integrated, use "bag-of-tricks"-type solutions (rather than procedural and engineered ones), and are based in computational "tricks"} are more promising in their ability to describe reality and the interactions.
(I just finished reading "Mindware" by Andy Clark; if you're interested in why I take my position, I'd recommend checking that book out. It's kind of "history of cognitive science" stuff. Not too bad of a read, as far as time. I recommend the book because... outlining the thoughts here... I couldn't come close to getting it right. I'm just not sophisticated enough in this stuff yet.)
Anyway, I didn't want to comment on your model directly, because it doesn't fit with my thinking on many different levels. So I thought it wouldn't be at all constructive to make direct comments like that. BUT...
I thought it would be useful for both of us to discuss the results of your model. That way we can see its strengths and weaknesses, and both of us can learn from it without getting into REALLY philosophical and high-level talks about cognitive science in general.
ANYWAY
So are we talking individuals or species or {life in general}?
My comment was focused on an individual plant, but you're right--it definitely can be reformulated in terms of species.
A general comment about the classifications in your model: if you accept 3(a and b) and 4 as non-conscious (I think we can only justify humans being able to do those things consciously, and maybe we'd even fail for humans too; and I'd suggest you really need to make 3 and 4 apply non-consciously), then I think your model is going to have to accept a lot of computational things that we wouldn't normally consider intelligent.
Seems to me anything we consider life has these properties. Many robots and programs have these properties. I might even include things like tectonic plates, rocks, flowing water... does a river have these properties? It perceives & evaluates a blockage, theorizes and tests by trying to move around an obstacle (or, if it fails, it may try to flow over it), and stores information by carving a path that shows the "easiest" path available.
... this post is getting a bit long, but I'll keep going for a bit more.
If these processes (1-5) need not be conscious, then it is only a question of behavior, and if the behavior is consistent with the processes. Dennett really holds this position firmly; that's also described in Mindware quite a bit.
Ok, I'll stop for now. By the way, I don't mean to imply that you need to read a book or anything; just that it's something that I read recently which describes many things I think are applicable to this question and your formulation of intelligence. I hope I'm not coming off the wrong way here.
Take it easy,
Ben

This message is a reply to:
 Message 20 by RAZD, posted 08-13-2005 9:58 AM RAZD has replied

Replies to this message:
 Message 22 by RAZD, posted 08-13-2005 4:45 PM Ben! has replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 23 of 99 (233206)
08-14-2005 5:59 PM
Reply to: Message 22 by RAZD
08-13-2005 4:45 PM


Re: Which organisms have intelligence?
I'm not sure that I follow your use of {Loki\Raven\Mxyzptlk} factors as necessary for them to {evolve intelligent solutions} rather than some form of random generator and selection process. Perhaps I misunderstand.
Bad explanation on my part. In class we've been talking about evolutionary solutions being like "a bag of tricks"--just using whatever means is available. That's all I meant. I agree with the way you've outlined it above.
Great, a summer reading list ...
seriously though I will look it up in the library. Any relation to Arthur C?
No idea about Arthur C, sorry. Mindware was just OK. But its strength was definitely in explaining the shortcomings of classical AI and symbolic processing, and why embodied models, models in which behavior "emerges" (I hate that word), show promise in overcoming those shortcomings. It concludes by talking about how distributed cognition is the next step. Anyway, if you're familiar with these types of arguments, then skip the book. But if you're not familiar--then I highly recommend it.
And I don't think we can limit conscious intelligence to humans
I think this is peripheral to what we're talking about here. What I really want to know is, do you accept 3a-b and 4 to be applicable to things that are non-conscious? If you do NOT, then we need to delineate what systems are "conscious" and what systems are not.
I'll say up front that I think it's a valid approach to apply 3a-b and 4 to every system, regardless of the question of "conscious" or... maybe a better word, "intentional". These reasons come from Daniel Dennett. Anyway, I'll wait before going further, to see where you stand.
But I'm not sure that rocks and water exhibit perception or reaction (they don't choose to change direction). Water seems to only have one solution ...
I would define "perception" as "interaction with the external world."
I would define "reaction" as "changing behavior due to perception."
With these ultra-general definitions, it works.
As for water having one solution... yeah, you have to choose how far you're willing to go with hard determinism. I don't see plant or animal behavior to be fundamentally different, so I think what you say actually argues in my favor. River "behavior" is not fundamentally different than other "behavior."
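Under those ultra-general definitions, even water "perceives" and "reacts." A throwaway sketch (entirely my own, purely illustrative): the only rule is "move to the lower neighbor," yet the resulting trajectory routes around whatever the terrain allows and stops when it can't.

```python
def flow(heights, pos):
    """Water "perceives" its neighbors' heights and "reacts" by moving
    to the lowest one, stopping at a local minimum."""
    path = [pos]
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = min(neighbors, key=lambda p: heights[p])
        if heights[best] >= heights[pos]:
            return path  # nowhere lower to go; the carved path is the stored information
        pos = best
        path.append(pos)

# A wall at index 2 stops the flow; lower the wall and the water finds the valley.
assert flow([5, 3, 9, 1, 0], 0) == [0, 1]
assert flow([5, 3, 2, 1, 0], 0) == [0, 1, 2, 3, 4]
```

Whether you want to call that local-gradient rule "perception" is exactly the hard-determinism choice mentioned above; the sketch just shows the definitions are mechanically applicable.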
Certainly for individual intelligence communication is necessary for any lasting benefit, and this fits into your {Distributed Cognition} aspect, where some people know some of the story and others know other parts. Communicated aspects kind of have to be conscious...
Well... I'm not sure if this is going to randomize us away from the thread, but I disagree. Examples of group behavior that depends on "communication" between individuals include flocking behavior, nest-building in ants, even neural networks... any system where "intelligent" group behavior derives from a (usually small) finite set of local rules.
Not sure if I'm understanding you right though. Please correct me if I'm misunderstanding on that.
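A five-line sketch of that "local rules, group behavior" point (my own toy, with one labeled simplification: each agent here senses the group average, where a real boids-style model would use only nearby neighbors):

```python
def step_flock(positions, rate=0.5):
    """Each agent independently nudges itself toward the perceived average
    position. No agent plans the outcome; clustering just emerges."""
    mean = sum(positions) / len(positions)
    return [p + rate * (mean - p) for p in positions]

flock = [0.0, 10.0, 4.0]
for _ in range(20):
    flock = step_flock(flock)

# The group converges to a tight cluster with no central controller
# and nothing any reasonable person would call consciousness.
assert max(flock) - min(flock) < 1e-3
```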
Thanks!
Ben

This message is a reply to:
 Message 22 by RAZD, posted 08-13-2005 4:45 PM RAZD has replied

Replies to this message:
 Message 24 by RAZD, posted 08-14-2005 8:16 PM Ben! has not replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 72 of 99 (241677)
09-09-2005 2:30 AM
Reply to: Message 71 by jar
09-09-2005 12:29 AM


Re: Looks like there may be evidence.
Jar,
I'm looking into this too. It was an interesting article; what I'd really like to know is how they're linking increasing cortical mass to increasing intelligence.
There's (at least) two ways to increase brain volume; one is to increase the number of cells (in the case of this article, they're talking about increasing the number of neurons). The second way is to increase the connectivity of the brain.
It's a well-known fact in neuroscience that the brain's volume is due more to connections between neurons than the neurons themselves. I don't have the numbers in front of me, but it's not close. Furthermore, when we study cognition, the size of different brain structures or cortical areas is never the issue--functionality is always based on connectivity; connectivity within cortical areas, and connectivity between cortical areas.
Lastly, in simulations of brain-like "neural networks", increasing the number of units, or neurons, in a (fully-connected) network doesn't necessarily change the algorithm that the network uses to compute output from input. If the number of neurons in the network were insufficient to do feature extraction and make generalizations, then adding neurons is critical. But this usually isn't the case in simulations, and I don't think we have any reason to think that there are brain structures that are not working properly due to a lack of neurons.
What happens, though, when you add neurons to a network that already was able to solve a given problem, is that you lose the generalization within the network. The network has enough "capacity" to memorize. It becomes less of a pattern recognizer, and more like a digital computer. But (and I won't give arguments about this now), the power in the human brain is to do pattern recognition, to be able to work amid noise, and to make generalizations from experience to novel stimuli.
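The memorization-versus-generalization trade-off can be caricatured like this (entirely my own sketch; the dict stands in for a network with enough capacity to memorize every training pair, and the linear fit for a low-capacity model forced to generalize):

```python
import random

random.seed(0)
# Noisy samples of the underlying rule y = 2x.
train = [(x, 2 * x + random.uniform(-0.5, 0.5)) for x in range(10)]

# "High-capacity" model: memorizes the training data exactly.
lookup = dict(train)

# "Low-capacity" model: a least-squares line, which must generalize.
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))

# On a novel input the lookup table has nothing to say,
# while the line extrapolates close to the true value of 50.
assert 25 not in lookup
assert abs((slope * 25 + (my - slope * mx)) - 50) < 5
```

The analogy is loose, but it captures the claim above: once capacity exceeds what the problem needs, the system can behave like a lookup table rather than a pattern recognizer.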
Not sure if much of that made sense.
So what I'm interested to know is:
1. Why these genes are so highly selected for (searching through online databases, looks like these genes have a LOT of functions, including proliferation of neural cells)
2. What we know about the genetic factors involved in basic connectivity between cortical areas in the brain.
Just a thought on the article. I found it interesting, but I'm really hesitant to directly associate the results of that article with human intelligence. I feel we should explore a third, causative factor between neural proliferation and intelligence; I would strongly suggest "connectivity" as one of the factors worth investigating.
Hope that adds some value to your thoughts. Feel free to ask away on any of it; I'll do my best to fill in the gaps.

This message is a reply to:
 Message 71 by jar, posted 09-09-2005 12:29 AM jar has not replied

  
Ben!
Member (Idle past 1428 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 73 of 99 (246422)
09-26-2005 12:25 AM


Paper on scaling "laws" of the mammalian cortex
Harrison, Hof, Wang. 2002. "Scaling laws in the mammalian neocortex: Does form provide clues to function?" Journal of Neurocytology 31, 289-298. (free)
This paper describes what anatomical features, both macroscopic and microscopic, are constant across mammalian species, and which vary across mammalian species. It also discusses possible and probable biological and functional constraints on the brain that lead to these tendencies.
Definitely a good read for those interested in comparative neuroscience and trying to deduce how brain architecture maps to "intelligence."
I haven't distilled the paper enough to post a summary on my own yet. Most of the information is presented within the first 3 pages, so I think it's a pretty accessible paper.

  

Copyright 2001-2023 by EvC Forum, All Rights Reserved
