

Author Topic:   Simple evidence for ID
Ben!
Member (Idle past 1425 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 31 of 135 (201976)
04-24-2005 10:35 PM
Reply to: Message 22 by Phat
04-24-2005 6:22 PM


Re: Nothing will be impossible for them....
'Sup PB.
Well.. if language was a purely evolutionary development, is it in the interests of survival that nobody understands anybody else?
Evolution isn't only about what is beneficial, but also about what "is." Is it in the interests of survival to extract oxygen from air and live on land? There's a whole lot more water out there for us. Sometimes the answer is just that "that's the way the cookie crumbled."
Language changes. As covered in other posts, there is geographic and cultural isolation. When something changes, and when it's changing differentially between two groups (due to these isolations), then we get different languages.
And we actually have the skills to deal with this, something that IS important to survival. There's no known limitation on which human language a child can learn. No matter your race, you can learn the language of whatever linguistic environment you're born into.
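The drift-plus-isolation mechanism described here can be caricatured in a toy simulation. Everything below is invented for illustration (the word list, the mutation rate, and the crude letter-swap stand-in for sound change); the point is only that independent change in isolated groups destroys mutual intelligibility on its own.

```python
import random

def drift(lexicon, rate, rng):
    """Randomly perturb some word forms, modeling independent language change."""
    out = {}
    for meaning, word in lexicon.items():
        if rng.random() < rate:
            # Swap one letter for a random one (a crude stand-in for sound change).
            i = rng.randrange(len(word))
            word = word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]
        out[meaning] = word
    return out

def divergence(a, b):
    """Fraction of meanings whose word forms no longer match between groups."""
    return sum(a[m] != b[m] for m in a) / len(a)

rng = random.Random(42)
shared = {m: m for m in ["water", "fire", "hand", "stone", "tree", "fish", "sun", "moon"]}
east, west = dict(shared), dict(shared)

# Geographic isolation: the two groups change independently for many generations.
for _ in range(200):
    east = drift(east, 0.05, rng)
    west = drift(west, 0.05, rng)

print(divergence(east, west))  # high: the lexicons have drifted apart
```

No selection pressure is needed anywhere in this sketch; divergence falls out of change plus isolation alone, which is the point being made above.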
Even today, professions have specialized language. Go to dental college and learn a whole new terminology. This is also what makes "educated" minds able to distance themselves from mere simpletons.
Language, like everything else in life, is all about efficiency. There's a purpose to specialized terms--they label commonly used concepts in order to facilitate dialogue about them. It's the same mechanism by which, say, we have number words while some other languages have a very different system. It's not culturally useful for them to have such words, so they simply don't have them.
It's also the reason that new words are invented in our everyday life. No need to have the word "xerox" in our lexicon until recently. Or Internet. Or carburetor.
Words don't have to be complicated, however. I can say "tru-dat" and say in one catchphrase the equivalent of saying "That is very correct and very relevant to the topic at hand".
See? You just explained the mechanism I'm describing above for me.
You know, not all speakers of English know what "tru-dat" is? They'll get angry at you for introducing a new word into the lexicon. They'd say something like "we already have a way to express that. Why do we need to add a specialized word for it?"
Humans do not blindly evolve. We have intelligence and motive. It makes it much more obvious to me that a supreme intelligence and a supreme motive and plan is behind it all.
It's a mess out there. Seemingly too messy for a supreme intelligence and a supreme motive to be behind anything.
Evolution is messy. That's what's so great about it--it's purposeful and messy at the same time.
Can you give me a 'true-dat'? Holla if you hear me.

This message is a reply to:
 Message 22 by Phat, posted 04-24-2005 6:22 PM Phat has not replied

  
Ben!
Member (Idle past 1425 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 32 of 135 (201977)
04-24-2005 10:39 PM
Reply to: Message 30 by mick
04-24-2005 10:26 PM


Re: Nothing will be impossible for them....
LOL!!
I was going to SERIOUSLY respond to RAZD, but now I've lost all interest. It couldn't possibly hold a candle to this...

This message is a reply to:
 Message 30 by mick, posted 04-24-2005 10:26 PM mick has not replied

  
dsv
Member (Idle past 4751 days)
Posts: 220
From: Secret Underground Hideout
Joined: 08-17-2004


Message 33 of 135 (201987)
04-24-2005 10:59 PM
Reply to: Message 29 by Ben!
04-24-2005 10:16 PM


Re: brain / computer semi-rant
Hello Ben, nice to meet you. Very good post, I enjoyed it.
Ben writes:
Comparing numbers of instructions that can be executed is meaningless.
I agree with you, but for the point of this topic I was attempting to illustrate that perhaps the Intelligently Designed human is not necessarily the most optimized. Of course I realize this is easily refuted by suggesting there is a divine plan and that the absolute best human is not necessarily what is desired by the Creator.
However if we're looking at it from an evolutionary standpoint we can realize that the current state of organisms is not the final product but is simply in a never-ending process of change (presumably improvement).
I agree with pretty much all of your points on architecture. It's my primary area of study and I believe you're fairly spot on in your opinions. Since technological advancement isn't really the focus of the thread I'll refrain from having a long discussion about the philosophies of futurism (although that would be fun sometime!) but I want to respond to a couple things.
Ben writes:
Your system will have drawbacks as well and, since we're not even close to constructing a self-sustaining system with a human-like mind, it's clear your system will be far inferior to the current human one.
That depends on what your definition of "not even close" is. A few centuries? A decade? Maybe longer? Considering our model for the future breaks down once we reach that point, it's hard to judge accurately.
I agree with you, though I don't think we will be the ones to create the first truly human-like mind. It's much more likely that it will be created by a more primitive machine that is capable of recursive self-improvement. After all, it's hard to say what a smarter-than-human machine would be like if we are only human.
A created smarter machine is not just better architecture, it's vast and "deep" -- and problematic, I suppose.
dsv writes:
Neurons cannot be directly "reprogrammed" by our high-level consciousness.
Ben writes:
This doesn't have any meaning to me. Why would this be good? Are you proposing that it's better to be conscious of every neuron in your brain?
Not limited to neurons--any cells. If we had real control over our bodies through our self-consciousness, would it not be beneficial to not have cells turn cancerous, for example?
Bringing it back to the OP, would such a "feature" not be well suited for a designer to consider?
dsv writes:
We can't transmit thoughts from one brain to another.
ben writes:
That's the purpose of language. In order to transfer information directly between brains, you'd have to guarantee that the information is stored in the same data format.
You're thinking in terms of a machine in today's world. Code is code, that's correct. However, what is to say that there could not still be interpretation, such as there is with language? Direct mind communication may not be spoken, but it can still be linguistic. Giving your creations the ability to communicate without geographical and thus cultural barriers would be something, in my opinion, the Creator would consider.
I hope I've helped to make my points somewhat clear, and thanks for your reply and insightful post.
[EDIT: Fixed some typos; there are probably more, very tired]
This message has been edited by dsv, Sunday, April 24, 2005 10:01 PM

This message is a reply to:
 Message 29 by Ben!, posted 04-24-2005 10:16 PM Ben! has replied

Replies to this message:
 Message 34 by Ben!, posted 04-24-2005 11:47 PM dsv has replied

  
Ben!
Member (Idle past 1425 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 34 of 135 (202009)
04-24-2005 11:47 PM
Reply to: Message 33 by dsv
04-24-2005 10:59 PM


Re: brain / computer semi-rant
Hello Ben, nice to meet you. Very good post, I enjoyed it.
Nice to meet you too. And thanks for taking my post in stride... it's really nice to have a dialogue about thoughts. I was afraid the tone of my post might lead to discussion about egos. Thanks for letting me semi-rant and for still focusing on the actual content of the post.
Thanks for the kind words about the post, I'm glad it (overall) made sense. I'm interested in your way of thinking (trying to propose how our systems could have better functionality). In studying about the brain and mind, I find it important to understand what the biological constraints are on the systems, why those constraints are there, and what functional restrictions it leads to. It's interesting to also think about what OTHER constraints could be used instead, and what changes in functionality that would lead to.
But as you'll see below, I really don't like to think about functional changes without trying to find what biological constraints would lead to that functionality. At least that's how I see things from my studies--choosing one system or functionality necessarily sets constraints on everything else within that system.
I agree with you, but for the point of this topic I was attempting to illustrate that perhaps the Intelligently Designed human is not necessarily the most optimized. Of course I realize this is easily refuted by suggesting there is a divine plan and that the absolute best human is not necessarily what is desired by the Creator.
Right. I understood (I think), I just disagree. What I was trying to say is that, given the world around us, and regardless of the "designer," I can't find any reasonable grounds to think there's a better system to create what we know as human. "Reasonable grounds" for me, as I'll explain again below, is based on our ability to propose a different system, identify the constraints imposed by the new system, and then compare to our existing system.
That's why I went on a semi-rant. I'm a practical guy living in a practical world. My girlfriend complains because I'm "not romantic." I don't "admit" possibility without a proposal; what I see in "lack of knowledge" is not possibility, but simply "lack of knowledge." But maybe this isn't the right way to see things?
Not limited to neurons--any cells. If we had real control over our bodies through our self-consciousness, would it not be beneficial to not have cells turn cancerous, for example?
Bringing it back to the OP, would such a "feature" not be well suited for a designer to consider?
Conceptually, of course, you're right, but I don't see it as a conceptual question at all. It's a practical question about design and, absent a design that practically implements these suggestions, I am harshly against proposing that they're "possible."
Practically speaking, the best way I can think to implement such a thing is to connect all cells (via axonal branches?) to the NCC (neural correlate for consciousness; i.e. Koch) for the cells. The NCC would be big, but maybe doable. But adding so many connections? Plus, you'd have to create a signalling mechanism for existing cells that are not neurons? Given the premium on connectivity already in place in the brain, at the surface I can't see it. Plus if your signalling mechanism fails (and the brain is based on the principle that a single neuron can fail at any time, so information normally gets distributed), then you're going to have to kill that cell. So you're going to need a robust signalling mechanism... I don't know.
I agree with you, though I don't think we will be the ones to create the first truly human-like mind. It's much more likely that it will be created by a more primitive machine that is capable of recursive self-improvement. After all, it's hard to say what a smarter-than-human machine would be like if we are only human.
I completely agree. It made me happy to hear you say it.
Giving your creations the ability to communicate without geographical and thus cultural barriers would be something, in my opinion, the Creator would consider.
Well... if transmission is linguistic, then there are still cultural barriers. First of all the isolation / barriers (geographic and social) means different linguistic codes, and second of all the mere fact of different cultures means that linguistic codes are not the same. What I mean is that, the "translation problem" is not purely linguistic--it has to do with the fact that different cultures simply view the world in different ways and simply have different cultural assumptions.
I really think that to cross cultural barriers, you have to have a hard-coded, non-learnable information code. And as I described in my first post, I don't think that's a good thing.
But as far as going across geographical barriers, I think that is... cool? But again, what could be a possible mechanism? Radio frequency broadcasting... with encryption? But how to distribute the key? I guess it could be learned, transmitted through other means.
I'm not too creative (and not too many people are), so often we use plans found in other species on Earth. I don't know any that work. The problem I see is that you don't want things to be broadcast publicly. So, the designer would have to biologically solve the encryption problem. It seems like it may be possible... although all thoughts I have off the top of my head do have problems when I think about implementing them biologically...
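The "broadcast with encryption, key learned through other means" idea above can be sketched with a toy symmetric cipher. This is purely illustrative: a SHA-256-derived XOR keystream is not a secure construction, and the hard-coded "secret" here just stands in for whatever socially learned key the designer would have to distribute.

```python
import hashlib
from itertools import count

def keystream(shared_secret: bytes):
    """Derive an endless pseudo-random byte stream from a secret learned elsewhere."""
    for i in count():
        yield from hashlib.sha256(shared_secret + i.to_bytes(8, "big")).digest()

def xor_cipher(data: bytes, shared_secret: bytes) -> bytes:
    """Encrypt/decrypt by XOR with the keystream (symmetric: same call both ways)."""
    return bytes(b ^ k for b, k in zip(data, keystream(shared_secret)))

secret = b"learned-through-other-means"   # hypothetical: distributed socially, not broadcast
thought = b"meet at the river at dawn"

broadcast = xor_cipher(thought, secret)   # what goes out over the public channel
received = xor_cipher(broadcast, secret)  # only key-holders recover the plaintext

print(received == thought)
print(xor_cipher(broadcast, b"wrong-key") == thought)
```

The sketch shows why key distribution is the hard part of the scheme: anything sent over the public channel is readable by all, so the secret itself must travel by some other route, exactly the problem raised above.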
Thanks for the response. I'll look forward to finding your thoughts about this subject and others around the board.

This message is a reply to:
 Message 33 by dsv, posted 04-24-2005 10:59 PM dsv has replied

Replies to this message:
 Message 35 by dsv, posted 04-25-2005 1:22 PM Ben! has replied
 Message 37 by RAZD, posted 04-25-2005 10:17 PM Ben! has replied
 Message 106 by Brad McFall, posted 03-08-2006 1:12 PM Ben! has not replied

  
dsv
Member (Idle past 4751 days)
Posts: 220
From: Secret Underground Hideout
Joined: 08-17-2004


Message 35 of 135 (202219)
04-25-2005 1:22 PM
Reply to: Message 34 by Ben!
04-24-2005 11:47 PM


Re: brain / computer semi-rant
Ben writes:
I really think that to cross cultural barriers, you have to have a hard-coded, non-learnable information code. And as I described in my first post, I don't think that's a good thing.
I agree with you to some degree. I think it's possible to have what begins as a hard-coded structure evolve into a self-conscious being of interpretation through self-improvement. We may not have the ability, but who is to say an entity that is smarter than human wouldn't? Are you familiar with the Chinese Room Argument? (related)
One of the main problems with higher-than-language communication, as I'm sure you know, is representative awareness (referring to "something"). Intentionality is the property, possessed by words and mental states, of being "about" -- representing, referring to -- something.
As an example used in the argument, the belief that Fido is furry is a mental state that is about Fido. The word "Fido" refers to the dog named... Fido. Don't confuse intentionality with intending something -- the latter is just another example (along with believing and desiring) of an intentional mental state. Something has "derived" intentionality just in case it has intentionality in virtue of the intentionality of something else. Plausibly, "dog" refers to dogs in virtue of the beliefs, intentions, etc., of English speakers -- hence "dog" has derived intentionality. My belief that dogs have fur is an intentional mental state, and doesn't have its intentionality in virtue of the intentionality of anything else -- hence my belief has underived (or original) intentionality. If thinking is conducted in a language written in the brain, then the words of this language have underived intentionality.
The Chinese Room argument is directed against the claim that instantiating AI code is sufficient for underived intentionality.
Weak AI: the principal value of the computer in the study of the mind is that it gives us a very powerful tool -- e.g. it enables us to simulate various kinds of mental processes. (standard)
Strong AI: an appropriately programmed computer literally has cognitive states. (disputable)
There are also two kinds of "strong" AI, "strong strong" and "weak strong." Strong Strong is a computer program (i.e. an algorithm for manipulating symbols) such that any (possible) computer running this program literally has cognitive states. Weak Strong is a computer program such that any (possible) computer running this program and embedded in the world in certain ways (e.g. certain causal connections hold between its internal states and states of its environment) literally has cognitive states.
Some doubt "strong strong" AI on the grounds that nothing could make a symbol (in an AI machine) refer to a thing if the computer has forever been floating off in deep space, causally isolated from the thing. As for "weak strong", there is a large literature on what sorts of connections between symbols and other parts of the world would suffice to give those symbols underived intentionality.

We have been sending symbols off into deep space for a long time now. It has been said that deep-space travelers in a far-advanced species will likely be more technological than biological, as that is possibly the only way something could survive the time and distance. How would that code(?) "compute" our symbols?
Awesome isn't it?
It is clear that our minds are an astounding biological miracle (I dare say!) but we are using our human minds to conceive what the perfect mind is. That doesn't work, we have to push, HARD. Is this biological slush box we have in our skulls really something? Maybe it's just the tip! Perhaps our minds can create even greater minds that can create still greater minds and so on.
Who is the creator now? Would we then be the Gods?

This message is a reply to:
 Message 34 by Ben!, posted 04-24-2005 11:47 PM Ben! has replied

Replies to this message:
 Message 38 by Ben!, posted 04-26-2005 12:45 AM dsv has not replied

  
RAZD
Member (Idle past 1432 days)
Posts: 20714
From: the other end of the sidewalk
Joined: 03-14-2004


Message 36 of 135 (202429)
04-25-2005 9:53 PM
Reply to: Message 30 by mick
04-24-2005 10:26 PM


Re: Nothing will be impossible for them....
Unfortunately it's now evolved to the point that people who still say "your momma is so fat" now get censored in spite of it actually being true. So much for anachronistic pre-emptive strikes.

This message is a reply to:
 Message 30 by mick, posted 04-24-2005 10:26 PM mick has not replied

  
RAZD
Member (Idle past 1432 days)
Posts: 20714
From: the other end of the sidewalk
Joined: 03-14-2004


Message 37 of 135 (202432)
04-25-2005 10:17 PM
Reply to: Message 34 by Ben!
04-24-2005 11:47 PM


Re: brain / computer semi-rant
ben, msg 34 writes:
What I was trying to say is that, given the world around us, and regardless of the "designer," I can't find any reasonable grounds to think there's a better system to create what we know as human.
It depends on what you are designing for as well. If you are trying for perfection out of the box, then it would be reasonable to think that the best effort was put forward and the results would show this. If on the other hand you are trying for a maximum diversity of solutions, then a system that has "happy accidents" built into it has a higher probability of original solutions.
Well... if transmission is linguistic, then there are still cultural barriers. First of all the isolation / barriers (geographic and social) means different linguistic codes, and second of all the mere fact of different cultures means that linguistic codes are not the same. What I mean is that, the "translation problem" is not purely linguistic--it has to do with the fact that different cultures simply view the world in different ways and simply have different cultural assumptions.
Those same cultural differences also equate to different perspectives in approaching a problem, and a maximum diversity in culture would mean a maximum probability of original solutions.
Of course the problem then is communication between cultural groups so that overall knowledge is increased and there is a feedback into other problems from all those diverse perspectives.
I also wonder if slowing neurons down makes for a better product biologically, due to the limits in the way information can be stored, versus a delivery system faster than storage could process. This gets into the biological limitations on storage systems.

we are limited in our ability to understand
by our ability to understand
RebelAAmerican.Zen[Deist
{{{Buddha walks off laughing with joy}}}

This message is a reply to:
 Message 34 by Ben!, posted 04-24-2005 11:47 PM Ben! has replied

Replies to this message:
 Message 39 by Ben!, posted 04-26-2005 1:00 AM RAZD has replied

  
Ben!
Member (Idle past 1425 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 38 of 135 (202479)
04-26-2005 12:45 AM
Reply to: Message 35 by dsv
04-25-2005 1:22 PM


Re: brain / computer semi-rant
dsv,
It was interesting to read your thoughts on intentionality and the Chinese room problem. All interesting stuff. I recently finished a series of about 15 lectures given by Christof Koch at Caltech for a class on consciousness. Great stuff. He and Francis Crick have done really interesting work on consciousness, including what I think is a solution that addresses your "intentionality" discussion. He goes with a homunculus system, where (generally speaking) the front of the brain is a homunculus that "watches" the back of the brain. Makes some really interesting physiological points, avoids the infinite regress of regular "homunculi," and sits on the same basic assumption as many people, that there is an NCC (neural correlate of consciousness), and that there's nothing "extra" needed to get a mind from a brain, just a brain.
I'm ... basically a "strong strong" AI guy. To me, the Chinese room problem is only a problem because of the way Searle presents it. I assume that "feeling of understanding" and "consciousness" are simply ("emergent") properties of the system of functions that we have. Searle cuts out a single function of a human, and asks why there's no "true understanding." Well, that's because you're missing that part of the system! In other words, you can't test for personhood by simply seeing if you can get responses to questions using language. There's a lot more in the system, it's necessary for what Searle calls "understanding", and he's simply eliminated it in the problem.
OK enough about me, let's talk about you
If thinking is conducted in a language written in the brain, then the words of this language have underived intentionality.
I think so too. However, I think I made my point poorly, because you're addressing intentionality. My point was simply that, if you have a system that is cross-cultural, it has "underived intentionality." Basically that means you're looking at a hardcoded format (underived intentionality meaning that the original system of representation is simply static and assumed). But the brain doesn't operate that way at all. There are SOME, very generally assumed things (such as, grossly Brodmann areas, and connections between them), but at no level (individual neuron, networks of neurons) is there a "hardcoded format" available. This is due to the way the brain works--it's plastic and adaptive. Learning operates wholly on experience. The representation of "dog" in one brain has little (maybe a gross description of the network architecture that it's stored within) that is in a "hardcoded format." I think the fact that the representation of "dog" is also distributed across a network, rather than available via a local representation, is a big blocker in this.
And I think this is a general problem. The 'underived intentionality' has to reside at a different 'layer' of the system than say, object recognition. Otherwise, you're going to have to be born with the ability to identify dogs... and that's a bad thing. That doesn't work in our world, where change is the norm. Our 'underived intentionality' has to reside in, say, anything having to do with consciousness of senses. Not what a dog looks like, but what it feels like to look at an object. Not what it feels like when someone rubs your finger with a brush, but what the 'touch' pathway consciously feels like when activated.
To summarize, I think that "direct" cross-cultural communication requires representation that we couldn't possibly have. In order to have it, we wouldn't be able to deal with a changing world. Hmm.. now this reminds me of the binding problem. Anyhoo.
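The claim that each brain's representation of "dog" is learned and idiosyncratic, so there is no shared hardcoded format to transmit directly, can be caricatured in a few lines. Here each "brain" is just a seeded random embedding; the seed stands in for an individual's developmental history. This is an invented toy, not a model of real neurons.

```python
import random

def make_brain(seed):
    """Each 'brain' assigns its own learned, distributed code to concepts.
    The code depends on that brain's history (here, just the seed)."""
    rng = random.Random(seed)
    memo = {}
    def represent(concept):
        if concept not in memo:
            memo[concept] = [rng.gauss(0, 1) for _ in range(16)]  # distributed vector
        return memo[concept]
    return represent

def similarity(u, v):
    """Cosine similarity: comparison only makes sense within one brain's code."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

alice, bob = make_brain(1), make_brain(2)

# Both brains represent "dog", but the codes share no common format:
same_brain = similarity(alice("dog"), alice("dog"))  # exactly the same code, ~1.0
cross_brain = similarity(alice("dog"), bob("dog"))   # typically near 0: formats don't align

print(same_brain, cross_brain)
```

Within a brain the code is perfectly stable; across brains the same concept lands on unrelated vectors, so a "direct transfer" would deliver noise unless both sides shared a hardcoded format, which is exactly what the argument above says we don't have.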
It has been said that deep-space travelers in a far-advanced species will likely be more technological than biological.
I hadn't heard anything like this. Do you have any online references that you can send along? I'd be interested to read a bit about it.
such is possibly the only way something could survive the time and distance
Interesting... but what if:
You take single sex cells within a probe and send them off in a ship. "Freeze em." You can set up the ship for everything to be automated--conception, development, birth, feeding, child rearing. Science-fictiony, but I don't see any road-block in it. If you can do that, and if you can have single cells survive the time and distance, then you could simply "give birth" to new organisms when the ship arrived at the destination, raise them in an 'automated' environment, and have these beings execute a mission.
Well, just brainstorming.
Awesome isn't it?
:$ Yeah
Is this biological slush box we have in our skulls really something? Maybe it's just the tip!
I think it's something. I think like RAZD says, it all depends on what you're building the mind for, and what is the assumed environment that you're putting it in, in order to accomplish that goal.
Perhaps our minds can create even greater minds that can create still greater minds and so on.
I'm not smart enough to fathom it in any fun way
Who is the creator now? Would we then be the Gods?
In exactly the same way that mom is God to a newborn. And that lasts until what, the terrible twos? Sounds like a nightmare.
P.S. Oh yeah, this post is on topic because.... because...... because.......
Right. Because by trying to dispel dsv's idea that we currently have concrete, scientifically investigated proposals of how to improve the designs of our brains, I am trying to falsify his conclusion that there's a clear lack of evidence for design. I don't think you're going to find that when looking at our minds.

This message is a reply to:
 Message 35 by dsv, posted 04-25-2005 1:22 PM dsv has not replied

  
Ben!
Member (Idle past 1425 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 39 of 135 (202488)
04-26-2005 1:00 AM
Reply to: Message 37 by RAZD
04-25-2005 10:17 PM


Re: brain / computer semi-rant
It depends on what you are designing for as well.
I think so too. Although, given the type of world we live in, I don't know what it would mean to have "perfection out of the box." Unless you change the parameters of the laws in our universe, I'm not sure what kind of "perfection" is available.
Of course the problem then is communication between cultural groups so that overall knowledge is increased and there is a feedback into other problems from all those diverse perspectives.
Ignoring the fact that I'm still unconvinced that knowledge is necessarily a good thing, I agree with your analysis.
I also wonder if slowing neurons down makes for a better product biologically, due to the limits in the way information could be stored, vesus a faster delivery system than storage could process. This gets into the biological limitations on storage systems.
What kinds of limitations do you have in mind? I'm interested to hear a bit more about it.
In my view, the big advantage of using slow neurons is that it allows software and hardware to be integrated in brains. The storage system is not differentiable from the execution system (I'm almost telling the truth). This architecture is so different from our current computers, and it gives biology such an efficiency advantage in processing.
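A classical toy illustration of "storage is not separate from execution" is a Hopfield-style associative memory: the same weight matrix both stores the patterns and performs recall, with no separate program reading a separate data store. This is a textbook caricature of the idea, not a claim about how real brains implement it.

```python
import numpy as np

def store(patterns):
    """Hebbian storage: the weight matrix IS the memory."""
    w = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(w, 0)
    return w

def recall(w, probe, steps=10):
    """Recall is repeated multiplication by the very matrix that stores the
    memories: storage and execution are the same object."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1
    return s

# Two stored memories (+/-1 patterns):
memories = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
w = store(memories)

# A corrupted cue settles back onto the stored pattern:
noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # one bit flipped in memory 0
print(recall(w, noisy))  # recovers memories[0]
```

Contrast this with a von Neumann machine, where recall would mean an instruction stream fetching data from a distinct memory bank; here the "fetch" and the "compute" are the same matrix multiply.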
P.S. This is related to the original topic because... we're still discussing possible improvements to minds. If we can find some good ones, then maybe we can put some constraints on what kind of ID was done on us by excluding some types of ID.

This message is a reply to:
 Message 37 by RAZD, posted 04-25-2005 10:17 PM RAZD has replied

Replies to this message:
 Message 40 by RAZD, posted 04-26-2005 11:19 PM Ben! has not replied
 Message 43 by RAZD, posted 04-28-2005 10:44 PM Ben! has not replied

  
RAZD
Member (Idle past 1432 days)
Posts: 20714
From: the other end of the sidewalk
Joined: 03-14-2004


Message 40 of 135 (202870)
04-26-2005 11:19 PM
Reply to: Message 39 by Ben!
04-26-2005 1:00 AM


Re: brain / computer semi-rant
Tomorrow. My brain needs some intelligent redesign to cope with fatigue ....

This message is a reply to:
 Message 39 by Ben!, posted 04-26-2005 1:00 AM Ben! has not replied

  
Jerry Don Bauer
Inactive Member


Message 41 of 135 (203247)
04-28-2005 4:20 AM
Reply to: Message 6 by Trump won
04-24-2005 4:04 PM


Chris:
Has any of your teachers shown you how to do an outline? Looking at your posts, I'll bet it would help you to formulate your thoughts better if you first formatted them into an outline:
MY ARGUMENT SUBJECT:
1) Point 1 of my argument.
   a) Expand on point 1.
   b) Different expansion on point 1.
2) Point 2 of my argument. Etc.
You ought to try that! Jerry

Design Dynamics

This message is a reply to:
 Message 6 by Trump won, posted 04-24-2005 4:04 PM Trump won has not replied

  
coffee_addict
Member (Idle past 504 days)
Posts: 3645
From: Indianapolis, IN
Joined: 03-29-2004


Message 42 of 135 (203306)
04-28-2005 10:16 AM
Reply to: Message 28 by mick
04-24-2005 8:28 PM


mick writes:
when your classmate dies of heart disease or cancer, he realises that human beings are not perfect after all.
Oops, I missed this post.
Haven't you ever heard of the fall?

This message is a reply to:
 Message 28 by mick, posted 04-24-2005 8:28 PM mick has not replied

  
RAZD
Member (Idle past 1432 days)
Posts: 20714
From: the other end of the sidewalk
Joined: 03-14-2004


Message 43 of 135 (203521)
04-28-2005 10:44 PM
Reply to: Message 39 by Ben!
04-26-2005 1:00 AM


Re: brain / computer semi-rant
sorry for the delay.
Ben, msg 39 writes:
I don't know what it would mean to have "perfection out of the box." Unless you change the parameters of the laws in our universe,
I think we can agree that the self-evident fundamental lack of perfection in this universe would indicate that this design approach was not used here.
What kinds of limitations do you have in mind? I'm interested to hear a bit more about it
I've heard it said that a thought takes about 2 seconds, but whether this is true or just indicative is irrelevant to the fact that thought takes significantly more time in a brain than in a computer.
This may be an evolved way of coping with {biological systems\organic reactions} but it may also be the way that our brain associates information: they are time linked, {blurred together\superimposed} by a slower processing system.
The result could be a number of similar information bits that include linked information --
{hat\feathers}
{hat\ribbons}
{hat\flaps}
{hat\colors}

etc, and that this leads to {cognitive\conceptual\creative} links between say ribbons and flaps.
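The {hat\feathers}, {hat\ribbons}, ... idea above, where items blurred into the same slow time window become linked and then bridge to one another, can be sketched as a small co-occurrence graph. The windows and items are the examples given above; the two-hop search is an invented stand-in for the "creative link" between, say, ribbons and flaps.

```python
from collections import defaultdict
from itertools import combinations

def build_links(episodes):
    """Items that occur in the same slow 'time window' get associated."""
    links = defaultdict(set)
    for window in episodes:
        for a, b in combinations(window, 2):
            links[a].add(b)
            links[b].add(a)
    return links

def associated(links, a, b, hops=2):
    """Is b reachable from a within a few associative hops?"""
    frontier, seen = {a}, {a}
    for _ in range(hops):
        frontier = {n for item in frontier for n in links[item]} - seen
        if b in frontier:
            return True
        seen |= frontier
    return False

# Each window is what one 'slow thought' blurs together:
episodes = [{"hat", "feathers"}, {"hat", "ribbons"}, {"hat", "flaps"}, {"hat", "colors"}]
links = build_links(episodes)

print(associated(links, "ribbons", "flaps"))           # linked through "hat"
print(associated(links, "ribbons", "flaps", hops=1))   # no direct link exists
```

Ribbons and flaps never co-occurred, yet one associative hop through the shared "hat" window connects them, which is the kind of indirect creative link being suggested.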
P.S. This is related to the original topic because...
it's about the obvious perfection of the biological system ...
(There was a topic? )

we are limited in our ability to understand
by our ability to understand
RebelAAmerican.Zen[Deist
{{{Buddha walks off laughing with joy}}}

This message is a reply to:
 Message 39 by Ben!, posted 04-26-2005 1:00 AM Ben! has not replied

  
Phat
Member
Posts: 18338
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.0


Message 44 of 135 (208014)
05-14-2005 9:27 AM
Reply to: Message 23 by CK
04-24-2005 6:37 PM


Re: Nothing will be impossible for them....
quote:
I said:
Well.. if language was a purely evolutionary development, is it in the interests of survival that nobody understands anybody else?
CK writes:
I have no idea what this is supposed to mean - it makes no sense at all.
I mean that it seems rather strange that language is different all over the world...not just by a little but by a lot. Would not the terms of evolution allow for the most efficient mode of communication within the species regardless of where they lived? (Within a range of similarity)

This message is a reply to:
 Message 23 by CK, posted 04-24-2005 6:37 PM CK has not replied

Replies to this message:
 Message 45 by jar, posted 05-14-2005 9:42 AM Phat has replied

  
jar
Member (Idle past 421 days)
Posts: 34026
From: Texas!!
Joined: 04-20-2004


Message 45 of 135 (208021)
05-14-2005 9:42 AM
Reply to: Message 44 by Phat
05-14-2005 9:27 AM


Re: Nothing will be impossible for them....
No, of course not. Why would language be any different than any other facet such as vision, coloration, locomotion, hearing, touch, or other things?
Let me ask you a question:
Why would language develop?

Aslan is not a Tame Lion

This message is a reply to:
 Message 44 by Phat, posted 05-14-2005 9:27 AM Phat has replied

Replies to this message:
 Message 46 by Phat, posted 05-14-2005 11:39 AM jar has replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
