Author Topic:   Simple evidence for ID
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 29 of 135 (201972)
04-24-2005 10:16 PM
Reply to: Message 9 by dsv
04-24-2005 4:15 PM


brain / computer semi-rant
I agree with your original question ("things are CREATED to serve a purpose. What is that purpose?"), but I completely disagree with your assessment of how "The mind can be expanded upon."
The brain is NOT comparable to a digital computer. If it truly were, we would have artificial minds already. You need a concrete proposal for the entire system to show that brains can be improved upon, and you don't have one. You're simply hand-waving, and philosophical musings about "possibility" based on ignorance (i.e., the lack of a full systematic proposal) are meaningless here.

an order of magnitude higher than most estimates of the power of a human brain
Comparing numbers of instructions that can be executed is meaningless. The number of instructions needed in order to execute a program varies based on the hardware architecture. The computers you're talking about are general purpose. Brains are not. The hardware is optimized to execute the types of operations necessary for things like object recognition, memory storage, etc.--exactly the tasks it's been "designed" for.
The architecture of the hardware dictates how 'efficiently' you can program something. For example, in your system, memory is stored outside the 'processor'--in other words, it takes cycles just to retrieve a piece of information before you can process it. Another example: since your system uses a single processor, you're doing everything serially. The brain works with parallel networks. To simulate that parallel execution, you're going to have to swap (time-share the processor; see the toy sketch below).
You can't compare hardware architecture like this. The architectures are so different, the software is not related. Executing the software on each has very different costs. There's no reason to think that your single processor system is going to be able to functionally simulate the important features of the brain (see below), let alone how many orders of magnitude of instructions would be added in order to actually program it.
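Just to make the serial-vs-parallel cost concrete, here's a toy Python sketch. It's entirely my own illustration--the network size, the random weights, and the tanh update rule are made-up assumptions, not anybody's model of a brain:

    # A brain-like system updates all N units at once; a single serial
    # processor pays for each update one instruction at a time, plus the
    # memory traffic to fetch the weights from storage.
    import numpy as np

    N = 1000                            # hypothetical network size
    rng = np.random.default_rng(0)
    W = rng.normal(0.0, 0.1, (N, N))    # illustrative random connectivity
    state = rng.random(N)

    def tick(s):
        # One "parallel" timestep, done serially: on the order of N*N
        # multiply-accumulates, every operand fetched from main memory.
        return np.tanh(W @ s)

    for _ in range(100):                # 100 ticks ~ 100 * N^2 serial ops
        state = tick(state)

And this toy isn't simulating spiking, learning, or chemistry--each of those would multiply the instruction count again.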
Neurons are slow.
Yes, but you're looking at individual features of a fully developed system, where some bad features exist because they support the good ones. You can't criticize the architecture of the brain without offering a better system. Above, you simply propose a single processor, with no mention of what software is running, the necessary hardware features (storage, execution, resource and power supply mechanisms, a maintenance system), or the necessary functional features (plasticity, performance under degradation, generalization procedures, solving the binding problem, blah blah).
Your system will have drawbacks as well and, since we're not even close to constructing a self-sustaining system with a human-like mind, it's clear your system will be far inferior to the current human one.
...
Back to the fact that neurons are slow. Neurons are slow because of the biology behind them. To make a faster system, you're going to have to use different materials. How many silicon brains can you make? How do you maintain them? What happens if your single processor has a problem?
We lose neurons as we age, rather than adding on additional brainpower.
I wouldn't say this is a property of creating the human mind, but of creating humans which are mortal.
If we did add on additional brainpower, we'd LOSE information. Simply adding neurons to existing networks degrades the performance of those networks (see the toy sketch below). You'll lose memories, skills, etc. You'll become more like a child again--it's easier to learn things, but you know less. It would take work to "re-learn" what you already knew before adding the extra "brainpower."
There are other ways to add more brainpower, but given the lifespan of the organism, it doesn't make any sense. It's unnecessary. So first ask the creator why we're mortal. We have our current brainpower because it's more than enough for the lifespan that we have.
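Here's the toy sketch of the degradation point--my own illustration, with a random matrix standing in for a "trained" network (so the sizes and numbers mean nothing in themselves):

    # Graft new, randomly wired neurons into a network that already "knows"
    # a mapping, and that stored mapping gets perturbed.
    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(20, 20))        # stands in for trained connectivity
    x = rng.random(20)
    before = np.tanh(W @ x)              # the network's learned response

    W_grown = rng.normal(size=(25, 25))  # 5 new neurons, randomly connected
    W_grown[:20, :20] = W                # all the old weights kept intact
    x_grown = np.concatenate([x, rng.random(5)])
    after = np.tanh(W_grown @ x_grown)[:20]

    print(np.abs(after - before).mean()) # nonzero: old responses drifted

Even though none of the old weights were touched, the old units now receive input from the new ones, so every stored response shifts--that's the "losing what you knew" part.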
Neurons cannot be directly "reprogrammed" by our high-level consciousness.
This doesn't have any meaning to me. Why would this be good? Are you proposing that it's better to be conscious of every neuron in your brain?
The software you're talking about doesn't work as step-by-step source code. It's meaningless to "reprogram" a single neuron. Neural networks don't store high-level information locally; it's distributed across a network.
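If "distributed, not local" sounds abstract, here's a toy Hopfield-style sketch (my own illustration; the sizes are arbitrary). Each stored pattern lives smeared across the whole weight matrix, so there is no single neuron you could "reprogram" to edit one memory:

    import numpy as np

    rng = np.random.default_rng(3)
    patterns = np.sign(rng.normal(size=(3, 50)))  # three stored "concepts"
    W = sum(np.outer(p, p) for p in patterns)     # every weight carries a
    np.fill_diagonal(W, 0)                        # trace of every pattern

    probe = patterns[0].copy()
    probe[:10] *= -1                   # corrupt 20% of the units
    for _ in range(10):                # let the network settle
        probe = np.sign(W @ probe)

    print((probe == patterns[0]).mean())  # typically 1.0: recalled anyway

Note the flip side: damage a few units and recall still works, which is exactly the fault tolerance you'd give up with a localized, directly editable store.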
We can't retain our memories indefinitely.
What makes you think that this is a good thing? Retaining memories indefinitely is expensive in resources. How is it useful? It's not a hard problem to solve computationally with neural networks: just produce more neurons and don't share information across them. If this were cost-effective and useful, no matter the designer (evolution, God), it would have been done.
Propose another system where you can have this kind of storage. (You'll probably propose an unlimited store? Then how much information comes in through the eyes? Retinal neurons fire at what, 40 times per second--and since you'd want to increase that, maybe you get up to 1 GB per second of visual information to store. Of course, you'd have to write a program where the hardware does no computation on the visual input, otherwise you won't have a faithful record of events. That means a whole lot more cycles for memory retrieval, because every time you retrieve a memory, you'll have to reprocess it from scratch to make any sense of it. Blah blah...)
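For fun, here's the back-of-envelope in Python. Every number is a rough assumption of mine (the cell count, one byte per cell per sample), not a fact from anywhere--the point is just the order of magnitude:

    ganglion_cells = 1_000_000     # assumed retinal output fibers per eye
    rate_hz = 40                   # the firing rate mentioned above
    bytes_per_sample = 1           # crude: one byte per cell per sample
    eyes = 2

    raw = ganglion_cells * rate_hz * bytes_per_sample * eyes  # bytes/second
    per_day = raw * 3600 * 16                                 # 16 waking hours
    print(raw / 1e6, "MB/s;", per_day / 1e12, "TB per waking day")
    # -> 80.0 MB/s; about 4.6 TB per waking day, and that's before you
    #    crank the sampling rate up toward the 1 GB/s figure above

Multiply by a lifetime and "retain everything indefinitely" starts to look like a very expensive feature for any designer to buy.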
We can't transmit thoughts from one brain to another.
That's the purpose of language. To transfer information directly between brains, you'd have to guarantee that the information is stored in the same data format. To guarantee that, you'd have to remove the flexibility of coding information based on your past experiences. In other words, you'd have to hard-code your data format instead of letting the organism use self-updating hardware to both encode and retrieve information. Unless you were going to pass both the data AND the decryption device along... and what data do you pass across?
I don't see a system better than language. Maybe you can propose one?
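Here's a toy numerical sketch of the "same data format" problem--my own illustration, with random matrices standing in for each brain's learned encoding:

    import numpy as np

    rng = np.random.default_rng(2)
    concept = rng.random(8)              # some shared external referent

    code_a = rng.normal(size=(8, 8))     # brain A's learned encoding
    code_b = rng.normal(size=(8, 8))     # brain B's (different) encoding

    thought_a = code_a @ concept         # A's internal state for the concept
    # B decoding A's raw state with B's own code yields nonsense; decoding
    # it correctly requires having A's code--the "decryption device."
    misread  = np.linalg.solve(code_b, thought_a)
    faithful = np.linalg.solve(code_a, thought_a)

    print(np.allclose(faithful, concept), np.allclose(misread, concept))
    # -> True False

The only fixes are shipping the codebook along with every thought, or hard-coding one codebook into everybody--which is exactly the flexibility loss I'm talking about.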
...
All right, I'm done. I'd be happy to listen to your response to this semi-rant. Just taking some pent-up frustration about brains vs. computers out on you. I don't feel that I'm doing a good job, so you should have plenty of loose ends to pick apart here.
Ben

This message is a reply to:
 Message 9 by dsv, posted 04-24-2005 4:15 PM dsv has replied

Replies to this message:
 Message 33 by dsv, posted 04-24-2005 10:59 PM Ben! has replied
 Message 57 by Buzsaw, posted 05-15-2005 10:17 PM Ben! has not replied

  
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 31 of 135 (201976)
04-24-2005 10:35 PM
Reply to: Message 22 by Phat
04-24-2005 6:22 PM


Re: Nothing will be impossible for them....
'Sup PB.
Well.. if language was a purely evolutionary development, is it in the interests of survival that nobody understands anybody else?
Evolution isn't only about what is beneficial, but also about what "is." Is it in the interests of survival to extract oxygen from air and live on land? There's a whole lot more water out there for us. Sometimes the answer is just that "that's the way the cookie crumbled."
Language changes. As covered in other posts, there is geographic and cultural isolation. When something changes, and when it's changing differentially between two groups (due to these isolations), then we get different languages.
And we actually have the skills to deal with this, something that IS important to survival. There's no known limit on which human language a child can learn. No matter your race, you can learn the language of the environment you're born into.
Even today, professions have specialized language. Go to dental college and learn a whole new terminology. This is also what makes "educated" minds able to distance themselves from mere simpletons.
Language, like everything else in life, is all about efficiency. There's a purpose to specialized terms--they label commonly used concepts in order to facilitate dialogue about them. It's the same mechanism by which, say, we have number words while some other languages have a very different system; it's not culturally useful for them to have such words, so they simply don't.
It's also the reason new words are invented in our everyday life. There was no need to have the word "xerox" in our lexicon until recently. Or "Internet." Or "carburetor."
Words don't have to be complicated, however. I can say "tru-dat" and say in one catchphrase the equivalent of saying "That is very correct and very relevant to the topic at hand".
See? You just explained the mechanism I'm describing above for me.
You know, not all speakers of English know what "tru-dat" means. They'll get angry at you for introducing a new word into the lexicon. They'd say something like, "We already have a way to express that. Why do we need a specialized word for it?"
Humans do not blindly evolve. We have intelligence and motive. It makes it much more obvious to me that a supreme intelligence and a supreme motive and plan is behind it all.
It's a mess out there. Seemingly too messy for a supreme intelligence and a supreme motive to be behind anything.
Evolution is messy. That's what's so great about it--it's purposeful and messy at the same time.
Can you give me a 'tru-dat'? Holla if you hear me.

This message is a reply to:
 Message 22 by Phat, posted 04-24-2005 6:22 PM Phat has not replied

  
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 32 of 135 (201977)
04-24-2005 10:39 PM
Reply to: Message 30 by mick
04-24-2005 10:26 PM


Re: Nothing will be impossible for them....
LOL!!
I was going to SERIOUSLY respond to RAZD, but now I've lost all interest. It couldn't possibly hold a candle to this...

This message is a reply to:
 Message 30 by mick, posted 04-24-2005 10:26 PM mick has not replied

  
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 34 of 135 (202009)
04-24-2005 11:47 PM
Reply to: Message 33 by dsv
04-24-2005 10:59 PM


Re: brain / computer semi-rant
Hello Ben, nice to meet you. Very good post, I enjoyed it.
Nice to meet you too. And thanks for taking my post in stride... it's really nice to have a dialogue about ideas. I was afraid the tone of my post might lead to a discussion about egos. Thanks for letting me semi-rant and for still focusing on the actual content of the post.
Thanks for the kind words about the post; I'm glad it (overall) made sense. I'm interested in your way of thinking (trying to propose how our systems could have better functionality). In studying the brain and mind, I find it important to understand what the biological constraints on the systems are, why those constraints are there, and what functional restrictions they lead to. It's interesting to also think about what OTHER constraints could be used instead, and what changes in functionality those would lead to.
But as you'll see below, I really don't like to think about functional changes without trying to find what biological constraints would lead to that functionality. At least that's how I see things from my studies--choosing one system or functionality necessarily sets constraints on everything else within that system.
I agree with you, but for the point of this topic I was attempting to illustrate that perhaps the Intelligently Designed human is not necessarily the most optimized. Of course I realize this is easily refuted by suggesting there is a divine plan and that the absolute best human is not necessarily what is desired by the Creator.
Right. I understood (I think), I just disagree. What I was trying to say is that, given the world around us, and regardless of the "designer," I can't find any reasonable grounds to think there's a better system to create what we know as human. "Reasonable grounds" for me, as I'll explain again below, is based on our ability to propose a different system, identify the constraints imposed by the new system, and then compare to our existing system.
That's why I went on a semi-rant. I'm a practical guy living in a practical world. My girlfriend complains because I'm "not romantic." I don't "admit" possibility without a proposal; what I see in "lack of knowledge" is not possibility, but simply "lack of knowledge." But maybe this isn't the right way to see things?
Not limited to neurons--any cells. If we had real control over our bodies through our self-consciousness, would it not be beneficial to not have cells turn cancerous, for example?
Bringing it back to the OP, would such a "feature" not be well suited for a designer to consider?
Conceptually, of course, you're right, but I don't see it as a conceptual question at all. It's a practical question about design and, absent a design that practically implements these suggestions, I am harshly against proposing that they're "possible."
Practically speaking, the best way I can think of to implement such a thing is to connect all cells (via axonal branches?) to the NCC (neural correlate of consciousness, per Koch). The NCC would be big, but maybe doable. But adding so many connections? Plus, you'd have to create a signalling mechanism for the existing cells that are not neurons. Given the premium on connectivity already in place in the brain, on the surface I can't see it. And if your signalling mechanism fails (the brain is built on the principle that a single neuron can fail at any time, which is why information normally gets distributed), then you're going to have to kill that cell. So you're going to need a robust signalling mechanism... I don't know.
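Some loose arithmetic on the wiring cost, with every figure a ballpark guess of mine (the cell and synapse counts are commonly cited rough estimates):

    body_cells = 3.7e13            # rough estimate of cells in a human body
    brain_neurons = 8.6e10         # rough estimate of neurons in the brain
    synapses_now = brain_neurons * 7_000   # ~7k synapses/neuron, ballpark

    new_links = body_cells         # at least one line per monitored cell
    print(new_links / synapses_now)
    # -> ~0.06: monitoring lines alone add ~6% to total wiring, before any
    #    redundancy for failed lines or any return (control) channel

And that's the cheapest version--one wire per cell, no redundancy--for a system whose whole design philosophy is redundancy.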
I agree with you; however, I don't think we will be the ones to create the first truly human-like mind. It's much more likely that it will be created by a more primitive machine that is capable of recursive self-improvement. After all, it's hard to say what a smarter-than-human machine would be like if we are only human.
I completely agree. It made me happy to hear you say it.
Giving your creations the ability to communicate without geographical and thus cultural barriers would be something, in my opinion, the Creator would consider.
Well... if transmission is linguistic, then there are still cultural barriers. First, the isolation and barriers (geographic and social) mean different linguistic codes develop; second, the mere fact of different cultures means those codes don't carve up the world the same way. What I mean is that the "translation problem" is not purely linguistic--it comes from the fact that different cultures simply view the world in different ways and carry different cultural assumptions.
I really think that to cross cultural barriers, you have to have a hard-coded, non-learnable information code. And as I described in my first post, I don't think that's a good thing.
But as far as going across geographical barriers, I think that is... cool? But again, what could be a possible mechanism? Radio-frequency broadcasting... with encryption? But how do you distribute the key? I guess it could be learned, transmitted through other means.
I'm not too creative (and not too many people are), so often we borrow plans found in other species on Earth; I don't know of any that work here. The problem I see is that you don't want thoughts broadcast publicly, so the designer would have to solve the encryption problem biologically. It seems like it may be possible... although every idea I have off the top of my head runs into problems when I think about implementing it biologically...
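Just to show the shape of the problem, here's a toy sketch (my own, and deliberately the simplest possible scheme--a shared-key XOR pad):

    import secrets

    def xor(data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    thought = b"water over the next ridge"
    key = secrets.token_bytes(len(thought))  # but how do both brains get this?

    broadcast = xor(thought, key)   # what everyone in radio range overhears
    print(xor(broadcast, key))      # only key-holders recover the message

The crypto part is trivial; the biology isn't. The key has to be grown into, or learned by, exactly the brains you trust and no others--which is the distribution problem I can't see a biological mechanism for.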
Thanks for the response. I'll look forward to finding your thoughts about this subject and others around the board.

This message is a reply to:
 Message 33 by dsv, posted 04-24-2005 10:59 PM dsv has replied

Replies to this message:
 Message 35 by dsv, posted 04-25-2005 1:22 PM Ben! has replied
 Message 37 by RAZD, posted 04-25-2005 10:17 PM Ben! has replied
 Message 106 by Brad McFall, posted 03-08-2006 1:12 PM Ben! has not replied

  
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 38 of 135 (202479)
04-26-2005 12:45 AM
Reply to: Message 35 by dsv
04-25-2005 1:22 PM


Re: brain / computer semi-rant
dsv,
It was interesting to read your thoughts on intentionality and the Chinese room problem. All interesting stuff. I recently finished a series of about 15 lectures given by Christof Koch at Caltech for a class on consciousness. Great stuff. He and Francis Crick have done really interesting work on consciousness, including what I think is a solution that addresses your "intentionality" discussion. He goes with a homunculus system, where (generally speaking) the front of the brain is a homunculus that "watches" the back of the brain. He makes some really interesting physiological points, avoids the infinite regress of regular "homunculi," and rests on the same basic assumption as many people: that there is an NCC (neural correlate of consciousness), and that there's nothing "extra" needed to get a mind from a brain--just a brain.
I'm ... basically a "strong strong" AI guy. To me, the Chinese room problem is only a problem because of the way Searle presents it. I assume that "feeling of understanding" and "consciousness" are simply ("emergent") properties of the system of functions that we have. Searle cuts out a single function of a human and asks why there's no "true understanding." Well, that's because you're missing that part of the system! In other words, you can't test for personhood simply by seeing whether you can get responses to questions using language. There's a lot more in the system; it's necessary for what Searle calls "understanding," and he's simply eliminated it from the problem.
OK enough about me, let's talk about you
If thinking is conducted in a language written in the brain, then the words of this language have underived intentionality.
I think so too. However, I think I made my point poorly, because you're addressing intentionality. My point was simply that, if you have a system that is cross-cultural, it has "underived intentionality." Basically that means you're looking at a hardcoded format (underived intentionality meaning that the original system of representation is simply static and assumed). But the brain doesn't operate that way at all. There are SOME very generally assumed things (such as, grossly, the Brodmann areas and the connections between them), but at no level (individual neurons, networks of neurons) is there a "hardcoded format" available. This is due to the way the brain works--it's plastic and adaptive. Learning operates wholly on experience. The representation of "dog" in one brain has little (maybe a gross description of the network architecture it's stored within) that is in a "hardcoded format." I think the fact that the representation of "dog" is also distributed across a network, rather than available as a local representation, is a big blocker here.
And I think this is a general problem. The 'underived intentionality' has to reside at a different 'layer' of the system than, say, object recognition. Otherwise, you'd have to be born with the ability to identify dogs... and that's a bad thing. It doesn't work in our world, where change is the norm. Our 'underived intentionality' has to reside in, say, anything having to do with consciousness of the senses. Not what a dog looks like, but what it feels like to look at an object. Not what it feels like when someone rubs your finger with a brush, but what the 'touch' pathway consciously feels like when activated.
To summarize, I think that "direct" cross-cultural communication requires representation that we couldn't possibly have. In order to have it, we wouldn't be able to deal with a changing world. Hmm.. now this reminds me of the binding problem. Anyhoo.
It has been said that deep space travelers of a far advanced species will likely be more technological than biological.
I hadn't heard anything like this. Do you have any online references that you can send along? I'd be interested to read a bit about it.
such is possibly the only way something could survive the time and distance
Interesting... but what if:
You take sex cells, put them in a probe, and send them off in a ship. "Freeze 'em." You can set up the ship so that everything is automated--conception, development, birth, feeding, child rearing. Science-fictiony, but I don't see any roadblock in it. If you can do that, and if single cells can survive the time and distance, then you could simply "give birth" to new organisms when the ship arrives at the destination, raise them in an 'automated' environment, and have these beings execute a mission.
Well, just brainstorming.
Awesome isn't it?
:$ Yeah
Is this biological slush box we have in our skulls really something? Maybe it's just the tip!
I think it's something. I think, like RAZD says, it all depends on what you're building the mind for, and what environment you assume you're putting it in to accomplish that goal.
Perhaps our minds can create even greater minds that can create still greater minds and so on.
I'm not smart enough to fathom it in any fun way
Who is the creator now? Would we then be the Gods?
In exactly the same way that Mom is God to a newborn. And that lasts until what, the terrible twos? Sounds like a nightmare.
P.S. Oh yeah, this post is on topic because.... because...... because.......
Right. Because by trying to dispel dsv's idea that we currently have concrete, scientifically investigated proposals of how to improve the designs of our brains, I am trying to falsify his conclusion that there's a clear lack of evidence for design. I don't think you're going to find that when looking at our minds.

This message is a reply to:
 Message 35 by dsv, posted 04-25-2005 1:22 PM dsv has not replied

  
Ben!
Member (Idle past 1424 days)
Posts: 1161
From: Hayward, CA
Joined: 10-14-2004


Message 39 of 135 (202488)
04-26-2005 1:00 AM
Reply to: Message 37 by RAZD
04-25-2005 10:17 PM


Re: brain / computer semi-rant
It depends on what you are designing for as well.
I think so too. Although, given the type of world we live in, I don't know what it would mean to have "perfection out of the box." Unless you change the parameters of the laws in our universe, I'm not sure what kind of "perfection" is available.
Of course the problem then is communication between cultural groups so that overall knowledge is increased and there is a feedback into other problems from all those diverse perspectives.
Ignoring the fact that I'm still unconvinced that knowledge is necessarily a good thing, I agree with your analysis.
I also wonder if slowing neurons down makes for a better product biologically, due to the limits in the way information could be stored, versus a faster delivery system than the storage could process. This gets into the biological limitations on storage systems.
What kinds of limitations do you have in mind? I'm interested to hear a bit more about it.
In my view, the big advantage of using slow neurons is that it allows the software and the hardware to be integrated in brains. The storage system is not separable from the execution system (I'm almost telling the truth). This architecture is so different from our current computers, and it gives biology a huge efficiency advantage in this kind of processing.
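A toy way to see the difference (purely illustrative, and mine):

    import numpy as np

    W = np.array([[0.0, 1.0],      # the stored "knowledge": a swap operation
                  [1.0, 0.0]])
    x = np.array([3.0, 5.0])

    # von Neumann style: fetch each operand from a separate store, compute,
    # write back--memory and processing are distinct steps.
    y = np.empty(2)
    for i in range(2):
        acc = 0.0
        for j in range(2):
            acc += W[i, j] * x[j]  # fetch W[i,j], fetch x[j], multiply, add
        y[i] = acc

    # "brain" style, conceptually: applying the memory IS the computation;
    # there is no separate step where knowledge is fetched before being used.
    assert np.allclose(y, W @ x)

In a brain, the synaptic weights play both roles at once--they're the stored information and the transformation--so there's no fetch bottleneck between memory and processor.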
P.S. This is related to the original topic because... we're still discussing possible improvements to minds. If we can find some good ones, then maybe we can put some constraints on what kind of ID was done on us by excluding some types of ID.

This message is a reply to:
 Message 37 by RAZD, posted 04-25-2005 10:17 PM RAZD has replied

Replies to this message:
 Message 40 by RAZD, posted 04-26-2005 11:19 PM Ben! has not replied
 Message 43 by RAZD, posted 04-28-2005 10:44 PM Ben! has not replied

  