Author Topic:   What's in a Word?
Jon
Inactive Member


Message 1 of 13 (545164)
02-01-2010 11:39 PM


... and where are they?
Anyone who has given much thought to the linguistic process will be amazed at its most prominent feature: meaning. When we consider the nature of meaning and try to discern how it is that symbologies link (mentally) with our understandings of reality (i.e., how X represents Z), the following questions, amongst others, come to mind:
What is a word?
What does a word mean?
How does a word mean?
What does it mean for a word to mean?
Where are words?
Where do they come from?
Where does their meaning come from?
I have asked myself these questions and many others many, many times, and I must say, the search for a definitive answer can be exhausting. Nevertheless, with some pushing and careful thought, I have come up with a model.
Other data and theoretical studies indicate that memory and associative learning are best modeled using a distributed rather than locationist neural model. (Lieberman 1984)
Using this concept, some aspects of Government-Binding Theory, some from HDPSG, and some other concepts stolen from the realm of feature-based Phonology, I came up with a model of the word similar to this:
Essentially, a word exists as a feature-based matrix of associated words. Of course, features may be words and words features. What then permits our understanding of a term is how it relates to other terms, which also relate to one another, all in a specific way: each term is networked uniquely, and no two terms are networked identically, though they may be networked similarly (as we shall soon see, this similarity in networking is essential for semantics).
There are two reasons that this understanding should be preferred (they are similar, but I will distinguish them):
  • First, it is multi-complex - no single node holds a single word; instead, words rest as relationships in a network/node mapping. Thus, a night of crazy drunkenness in which nodes (cells) are killed off doesn't result in the loss of words from the system, only of part of a word.
  • This brings us to the second reason to prefer this model of a word: redundancy - there are several ways to get from one node to another, so killing off a certain node and its links will never undo the whole network (a minimal sketch illustrating both properties follows this list).
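To make these two properties concrete, here is a minimal sketch in Python (my own illustration, with hypothetical feature names; it is a toy graph, not a neural model):

```python
# A word stored as a network of feature nodes: meaning survives the loss of
# any single node because multiple paths connect the remaining features.
# All node names here are hypothetical.

word_dog = {                        # adjacency map: feature -> linked features
    "animate":     {"furry", "barks", "four-legged"},
    "furry":       {"animate", "four-legged"},
    "barks":       {"animate", "four-legged"},
    "four-legged": {"animate", "furry", "barks"},
}

def remove_node(net, node):
    """Kill off one node (a 'night of crazy drunkenness') and its links."""
    return {n: {m for m in links if m != node}
            for n, links in net.items() if n != node}

def connected(net):
    """Check that the surviving features still reach one another."""
    if not net:
        return False
    seen, stack = set(), [next(iter(net))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(net[n])
    return seen == set(net)

damaged = remove_node(word_dog, "furry")
print(connected(damaged))  # True: the rest of the word's network survives
```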
With this model, we can explain the meaning behind every word as a relationship with other words; indeed, a very concrete example of this notion is the Dictionary - each word is defined using others; there is no word in the language that cannot be described with other words in the language. Knowing that such a model for word meaning is possible and employed regularly in the Mind, we should be hesitant to propose any other model without good reason to (a) assume the illusory nature of the first, i.e., assume the first not to be real, or (b) assume the existence of two or more systems to accomplish what can be done with just one, i.e., violate the principle of Occam's Razor. Lacking evidence for holding (a) or (b), the notion of word meaning being rooted in feature/word relationships seems settled. Now, though, we must explain how we extend these notions to concepts of Reality, i.e., how a group of features can be seen as representational of a concept (as we will see, no words represent Reality).
The linguistic sign unites, not a thing and a name, but a concept and a sound-image. (Saussure 1959)
As far as I can tell, words have no relationships to the things they represent, i.e., our words and the relation we create between them and the Real world is neither binding nor necessary, hence its arbitrariness. I find this an acceptable conclusion for a couple of reasons: (1) the meanings of words change, and different words have different referents in different languages; (2) Reality is a disconnected existent, as concerns the Mind (where words are), and this disconnect makes a lack of relationship inevitable. This brings us to the quote above from Saussure, the part on which I want to focus first being the notion of thing versus concept. For thing I want to use the term Reality, which shall reference the actual physical, unknowable, though postulated-to-exist entity of being which may only be grasped - though never obtained - as a sense, or a group of senses, what I shall call Concept. I may look at the can and otherwise sense it, but to experience the can itself is impossible; it is a metal chunk of crap on my desk which I have no intention of implanting in my Mind, and even if it were to be put through my brain, one should be skeptical that such an act would implant it on my Mind. Thus, the can is Reality, but this is unthinkable, so what I think when I think can (whether stimulated by the actual presence of the can or not) certainly cannot be the can, but something else which is in my mind - this is what I call a Concept. This is the main point I wanted to draw from Saussure.
Next, we must come up with a way of understanding how a Concept is stored, and here is where I think our understanding of Concept as a set of senses is important. But, before we go that far, I think I must make an argument for what the Mind is, which I will do by linking folk to the following thread at a different forum: Immortality of the Mind. I am going to copy-paste the essentials of the argument from the OP on that forum into this message, because I think the concept is too central to the rest of my explanation to ignore (UTF-8 encoding recommended; if you want to argue any of the points in this copy-paste, please, start a new thread):
quote:
1. The Brain
Definition
What is the brain? The brain-system is a large set of nodes (neurons) that can communicate with one another. Not all nodes can communicate with all other nodes, no. Some nodes communicate with only certain other nodes. The communication takes place over the synapse, which is a space between the one node and the other node. The exchange takes place with chemicals that represent the 'information' being transmitted. If there were no receiving node, then the synapse would be infinite, and no message could pass, which would render node-communication impossible, and therefore prevent brain function.
Conclusion I
So, at the base of the brain-function system (ſ) lie:
1) Sending nodes (neuron)
2) Receiving nodes (neuron)
3) Medium by which (1) & (2) communicate (synapse)
This system is then repeated many times to create a highly sophisticated network of these nodes communicating with one another (note: some nodes communicate with more than one other node).
2. Mind as a Function of ſ(brain)
Definitions
quote:
Dictionary.com
mind /maɪnd/
—noun
1. (in a human or other conscious being) the element, part, substance, or process that reasons, thinks, feels, wills, perceives, judges, etc.....
We know that: 'to reason' = action of the brain, 'to think' = action of the brain, etc; in other words, the things that comprise mind are all actions (α ) of brain. Therefore, we say:
Premise 1
mind = α(brain)
_____
The function of anything being that which it does, anything that something does is a function (ƒ) of that thing. For example, say a particular leg, leg-n, runs, walks, supports; we could say that ƒ(leg-n) = run, walk, support. But leg-n also kicks, and does numerous other things; in fact, anything that leg-n does becomes its function:
ƒ(leg-n) = α(leg-n)
In terms of a formula, then, we have the following:
Premise 2
ƒ(x) = α(x)
_____
Why is α(ſ(brain)) ↔ α(brain)? Consider leg-n, again. How does leg-n perform its actions; for example, how does leg-n stand? It stands when the components of the leg-n system do what is needed for standing. In other words, α(ſ(leg-n)) ↔ α(leg-n). In formula, we have:
Premise 3
α(ſ(x)) ↔ α(x)
_____
From premise 2.3 we get: α(ſ(brain)) ↔ α(brain)
From premise 2.1 we get: α(brain) = mind
Premise 4
α(ſ(brain)) ↔ mind
_____
From premise 2.2 we get: α(ſ(brain)) = ƒ(ſ(brain))
From premise 2.4 we get: α(ſ(brain)) ↔ mind
Conclusion II
ƒ(ſ(brain)) ↔ mind
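For readers who prefer conventional notation, the derivation can be restated like this (a transcription of the argument above, writing ſ as s and ƒ as f; nothing new is added):

```latex
\begin{align*}
\text{P1: } & \mathit{mind} = \alpha(\mathit{brain}) \\
\text{P2: } & f(x) = \alpha(x) \\
\text{P3: } & \alpha(s(x)) \leftrightarrow \alpha(x) \\
\text{P3 at } x = \mathit{brain}: \; & \alpha(s(\mathit{brain})) \leftrightarrow \alpha(\mathit{brain}) \\
\text{with P1 (P4): } & \alpha(s(\mathit{brain})) \leftrightarrow \mathit{mind} \\
\text{with P2 (Conclusion II): } & f(s(\mathit{brain})) \leftrightarrow \mathit{mind}
\end{align*}
```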
So, Mind is the system of connected nodes and their interactions. A Concept, then, being existent within such a system, cannot help but be itself a subset of that system and operate according to its rules. If a Concept is a set of senses, then the organization of those senses must be of the type possible in the Mind system, i.e., they are organized as a system of interconnected nodes and their interactions (links). So, this is my proposed model of a Concept: a network of sense nodes and links, mirroring the word network above.
Now I would say this is a pretty satisfactory understanding of Concept, mainly because it takes into account what we know the brain to be, that we know the Mind to be a function of the brain, and that we know a Concept to be a subset of that Mind, in addition to which we know that a concept also has a basis in Reality as a set of connected senses which associate with a Reality (e.g., the smell, color, size, feel, taste, etc. of the can on my desk being senses related to its Reality).
Thus, the two key parts have been done: figuring out what a word is and figuring out what a Concept is. However, we must now relate them one to the other, and how fortuitous it is that, having defined and created two satisfactory understandings of these things, we notice that they are very similar! Recall the two models side by side - a word as a network of features, a Concept as a network of senses - so that their similarity may be more obvious.
Now, I made our two examples very similar for a reason: to better make the next and final conclusion that I shall draw before I wrap everything up. A word, it would appear based on the above arguments, means or represents a Concept by virtue of the Concept and the word sharing a similar network/node map. This brings out the most important feature of the entire model thus far proposed: significance within the system is stored foremost as a function of the structure of the network and only later as a function of its components - the way the participants interact is more important than what they are.
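To make the matching idea concrete, here is a minimal sketch in Python (my own illustration; for simplicity it compares labeled links, whereas the model's claim is about structure, which would strictly call for something like a graph-isomorphism comparison):

```python
# A word 'represents' a Concept when their network maps are similar.
# Here similarity is measured as overlap between link sets (Jaccard);
# the real measure, if any, is left open by the model.

def edges(net):
    """Collect the undirected links of a node -> neighbors map."""
    return {frozenset((a, b)) for a, nbrs in net.items() for b in nbrs}

def similarity(net_a, net_b):
    ea, eb = edges(net_a), edges(net_b)
    return len(ea & eb) / len(ea | eb)

featrix_dog = {"animate": {"furry", "barks"}, "furry": {"animate"},
               "barks": {"animate"}}
sentrix_dog = {"animate": {"furry", "barks"}, "furry": {"animate"},
               "barks": {"animate"}}
sentrix_can = {"metal": {"shiny"}, "shiny": {"metal"}}

print(similarity(featrix_dog, sentrix_dog))  # 1.0 -> the word matches
print(similarity(featrix_dog, sentrix_can))  # 0.0 -> no association
```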
So, to wrap it up and open this thread for discussion, I would like to summarize:
  • Words are in the Mind as a matrix of feature relations
  • Concepts are in the Mind as a matrix of sense relations
  • When a word matrix is similar in form to a Concept matrix, we say that the word 'represents' the Concept, i.e., we associate them together
Clearly this does not solve all problems related to word meaning, nor is it intended to; however, it was meant as an attempt to make sense of symbolic meaning within the neural network of the brain, and so it will hopefully suffice as a pretty decent first attempt. Several questions still remain, of course, even accepting all I've said as truth; for example, how is it that a Concept and a word should ever come to have similar structures? How similar must their structures be? Do their structures influence one another? And so on. But rather than attempting to answer all these questions now, I think it would be better if I opened the floor for discussion first, so that we may settle any issues with my proposed model and, of course, entertain any other models that folk out there might have.
Thus, without further bullshitting, I open this for discussion.
Jon
__________
Lieberman, P. (1984) The biology and evolution of language. Massachusetts: Harvard University Press.
Saussure, F. (1959) Nature of the linguistic sign. In S. Blum, Readings in culture and communication, making sense of language (pp. 21-24). New York: Oxford University Press.

[O]ur tiny half-kilogram rock just completely fucked up our starship. - Rahvin

Replies to this message:
 Message 2 by Phat, posted 02-02-2010 12:39 AM Jon has not replied
 Message 3 by Phat, posted 02-03-2010 1:42 PM Jon has replied
 Message 8 by nwr, posted 09-10-2010 12:30 AM Jon has replied

  
Phat
Member
Posts: 18346
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.0


Message 2 of 13 (545171)
02-02-2010 12:39 AM
Reply to: Message 1 by Jon
02-01-2010 11:39 PM


Word Associations
Once I knew a man who was my science teacher and mentor in middle school. He was quick, witty, and articulate. A few years later, I saw him again... he had needed an operation for a large brain tumor. The operation was successful, but he found that his entire word-association memory was damaged. He knew what he wanted to say (or mean) but found that a different word would come out of his consciousness than the word he intended to use.
For example, if he were to try to say "Can you go in the kitchen and get me a butter knife," he might end up saying "Can you go in the... (pause) the other room and get me a table..." (meaning to say butter knife). Evidently, the memory confusion was either temporary or he underwent therapy, but to this day his associations are not as automatic and quick.

This message is a reply to:
 Message 1 by Jon, posted 02-01-2010 11:39 PM Jon has not replied

Replies to this message:
 Message 11 by Bolder-dash, posted 09-10-2010 9:21 AM Phat has not replied

  
Phat
Member
Posts: 18346
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.0


Message 3 of 13 (545379)
02-03-2010 1:42 PM
Reply to: Message 1 by Jon
02-01-2010 11:39 PM


features or senses?
  • Words are in the Mind as a matrix of feature relations
  • Concepts are in the Mind as a matrix of sense relations
  • When a word matrix is similar in form to a Concept matrix, we say that the word 'represents' the Concept, i.e., we associate them together
So which happened to my friend?

This message is a reply to:
 Message 1 by Jon, posted 02-01-2010 11:39 PM Jon has replied

Replies to this message:
 Message 4 by Jon, posted 02-03-2010 2:38 PM Phat has not replied

  
Jon
Inactive Member


Message 4 of 13 (545388)
02-03-2010 2:38 PM
Reply to: Message 3 by Phat
02-03-2010 1:42 PM


Re: features or senses?
Phat writes:
  • Words are in the Mind as a matrix of feature relations
  • Concepts are in the Mind as a matrix of sense relations
  • When a word matrix is similar in form to a Concept matrix, we say that the word 'represents' the Concept, i.e., we associate them together
So which happened to my friend?
Likely removing the tumor disrupted his networks. I would assume that the networks disrupted were those of word features, since they generally seem less stable (language is always a touchy thing in aphasia, which makes sense knowing what we do of the way language is learned by children). As I said in the post, network nodes can go down without the whole network going down. When one or two nodes are disrupted, the disruption in language is hardly noticeable, but when we start to disrupt large numbers of nodes that represent entire areas of networks, then we can expect that the rewiring of those networks to seal the gaps would create network maps different from before, meaning they would be different from the corresponding Concept maps.
The similarity between one feature matrix (featrix) and another will likely be a model of the similarity between the sense matrices (sentrices) of similar Concepts. Thus, your friend, unable to find an identical featrix to match the sentrix for knife, settled on a similar but non-identical one, which happened to be the featrix of \table\.1
Phat writes:
Evidently, the memory confusion was either temporary or he underwent therapy, but to this day his associations are not as automatic and quick.
Well, children are able to map their featrices to corresponding sentrices as they learn a language, and so can anyone who successfully learns a second language, so it is not difficult to see that he could learn new featrix mappings to match with sentrices so as to work more similarly to how other speakers speak (i.e., he sees others matching featrix-A with sentrix-S, and also notices that no one understands him when he matches them the way he does, and so he remaps his featrix-A to model sentrix-S, and so we get featrix-S, which he has rewired to create a relationship that corresponds to the relationships he sees others creating). Whether he was permitted to do this naturally on his own or whether he had help through therapy, I guess it is going to be a similar process either way.
This does bring up an interesting point, though. If my theory is correct, how does it help explain child language learning? For example, when a child learns the word (featrix?) \doggie\ and then applies it to all things that are animals, what can we say of their featrix-sentrix correspondences compared to those of adults? Are their sentrices deficient, or have they merely not built detailed-enough featrices, or are they, like Phat's friend, simply at a loss for a proper match and so pick something close? Might it be that there are different degrees of nodes, some more 'prominent' than others, such that Concept dog and Concept cow both have identical sentrices in regards their major nodes, such that, the major nodes of dog being all s/he has mapped into a featrix \doggie\, the child merely matches these alone to an incoming sentrix when searching for a corresponding featrix, so that anything with an identical major-node mapping of the sentrix dog will get called a \doggie\ until the minor nodes become mapped into the featrix? And, when children make these errors, can we regard them as insights into the major-minor node relations within and between sentrices?
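A minimal sketch of that 'major node' conjecture, in Python (the feature inventories are purely hypothetical):

```python
# If the child has mapped only the major nodes of \doggie\, then anything
# whose sentrix shares those major nodes gets called \doggie\.

featrix_doggie_major = {"animate", "four-legged", "furry"}  # all the child has

sentrix_dog = {"major": {"animate", "four-legged", "furry"},
               "minor": {"barks", "small"}}
sentrix_cow = {"major": {"animate", "four-legged", "furry"},
               "minor": {"moos", "large"}}

def childs_name_for(sentrix):
    # The child matches on major nodes alone, ignoring unmapped minor ones.
    return "doggie" if sentrix["major"] == featrix_doggie_major else "?"

print(childs_name_for(sentrix_dog))  # doggie
print(childs_name_for(sentrix_cow))  # doggie -- the classic overextension
```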
__________
1 I have decided to use \ \ to enclose our representations of featrices, and to enclose our representations of sentrices, just for the sake of being clear and consistent.

[O]ur tiny half-kilogram rock just completely fucked up our starship. - Rahvin

This message is a reply to:
 Message 3 by Phat, posted 02-03-2010 1:42 PM Phat has not replied

  
Jon
Inactive Member


Message 5 of 13 (545894)
02-06-2010 12:27 AM


... if's and either's...
One problem with the current hypothesis as proposed is that it does not account well for language rules that are not meaning-relevant, i.e., rules that do not impact the meaning whether present or absent. Not surprisingly, the study of Syntax presents a number of fine examples, a good one of which is the rule in English for size/age adjectives to precede color adjectives, e.g., "the big, brown dog". In this noun phrase, NP, we have an article, two adjectives, and a head noun.1 A common formula given for such a syntactic construction (e.g., Pinker 1994) is:
NP → (det)AdjⁿN (see note 2)
Interestingly, though, this permits the formation of NPs such as: "the black little bug", or "my blue old shirt". Now, clearly these phrases are possible, we just made and said them, but they do not occur in average speech, and the question we must ask ourselves is "why not?".
The first interesting point in this regard is to note that both forms have the same meaning: "the big brown dog" and "the brown big dog". So, this 'ordering rule' as we'll call it is non-meaning-dependent, i.e., the meaning of the utterance is not what triggers the rule. Likewise, the interpretation of such utterances is non-order-dependent, i.e., the order of the constituents of the utterance does not contribute to the meaning. Hence, my original term for such rules as being not meaning-relevant - meaning is not relevant in the rule and the rule is not relevant in the meaning.
Not all ordering rules behave this way, though. The famous hypothetical headline "man bites dog" demonstrates this fact, for the clause is markedly different in meaning from "dog bites man", even though the only alterations made to it were in regards its constituents' orders.
There are, then, clearly, two kinds of agrammaticality: agrammaticality from which meaning does not suffer, and agrammaticality from which meaning does suffer. But, this is not what is most important. In the hypothesis presented in this thread, meaning is derived from a match between a sentrix or set of sentrices and a featrix or set of featrices. If there are two ways to encode the same meaning in "big brown dog" and "brown big dog", then one must wonder if there are two corresponding sentrices to match up with each expression, and if so, why in production only one form is ever noted.
This brings us to a distinction long ago realized in linguistics, especially in regards Syntax, namely that some forms are obligatory, while other forms are not. In our English examples, then, the form regulating the placement of adjectives is non-obligatory, while the form regulating the placement of the subject and object is. This is important, of course, because we may now regard our agrammatical "brown big dog" as merely a conventional aspect of the language; that is, we need no longer claim that it is an optional ordering rule, but may call it an ordering convention.
Does this solve the problem of two different featrices mapping in accordance with either one or two sentrices? Not quite, but it gets us closer, for we are now able to understand a little of why a conventionally ordered phrase may be meaningful though agrammatical, and if we think of this in terms of featrices, we are left to consider either that there is one corresponding sentrix for every meaningful-yet-agrammatical phrase or that the constituents, in this case adjectives, have featrices which bond symmetrically, thus permitting switches that do not affect meaning. The first assumption is nice, but it would require us to accept that our brains have mapped sentrices for every possible agrammatical yet meaningful sentence we might encounter.3 Of course, anyone who has spent any time in the real world realizes that many agrammatical yet meaningful forms are encountered which have never been heard before; i.e., for this assumption to be correct, the sentrices corresponding to the agrammatical input would have to be in place before the input is even encountered! Clearly this is absurd, and so assumption one seems strikable.
What about assumption two? Well, it seems powerful, if for no other reason than because it is not assumption one. But there is more to it than this. If we consider a word to be a featrix, then we must understand that a string of words - phrase - is clearly an adjoining of featrices, and that if there are rules regulating the stringing process of words, then these rules must represent underlying constraints on the adjoining of featrices. To better describe this, I should introduce the concept of a receptor and explain how it is used and understood in terms of featrices. If we go back to our original diagram of a word, which was a featrix, we can see how the receptors work.
In the network above, feature 18903 has been shown with a receptor branching on the right side. The location of each receptor in the network determines the various things that the receptor can link with, that is, other featrices that can be adjoined with that featrix in a particular way. A good way to show how this relates to our meaning is to show a diagram of hypothetical featrices in various strings. Below is the phrase "big brown dog" and "brown big dog". As this assumption... err, assumes... the places where meaning is not dependent on order should show featrices with symmetrical/interchangeable receptor positions. In these diagrams, the large block merely represents a featrix in whole, while the knobs and niches represent plugs and receptors in various structural places in the network.
As can be seen, adjectives can be said to have a symmetrically-placed receptor for each plug they have to adjoin with other adjectives, so that instead of having to build a different featrix for each grouping, we merely have a collection of adjectives, each one of which matches with the sentrices that it stands for, its position having no effect on the matching. In the following two cases, however, we do have a case of asymmetrical plug-receptor placement in regards subject status or object status of the noun.
As a result of this, when the nouns are switched, the activated nodes change from subject to object and object to subject, resulting in a change in meaning; having different receptors filled alters the state of the featrix and thus its meaning. So, this is the way assumption two explains obligatory vs. non-obligatory rules. Interestingly enough, there are certain theories of Syntax which consider the nature of branching from heads to be obligatory in some cases and not in others. These theories postulate that languages fall into categories with predictable patterns of obligatory Syntax rules, which seems to reflect an underlying brain-specific constraint on how featrices must map in accordance with sentrices. I have more to say on this, but much of my information comes from a book which is currently locked in the basement of a closed university library. I plan to go in tomorrow to get it and will likely post more specifics regarding obligatory and non-obligatory rules of directional Syntax.
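Here is a minimal sketch of assumption two in Python (the plug/receptor encoding is my own toy formalization, not part of the theory's apparatus): adjectives adjoin through interchangeable receptors, so their order is ignored, while the subject and object receptors are distinct.

```python
def assemble(subject, verb, obj, adjectives):
    # Adjectives adjoin through symmetric receptors: their order is ignored
    # (a set), while the subject and object fill asymmetric positions.
    return (frozenset(adjectives), subject, verb, obj)

m1 = assemble("dog", "is", None, ["big", "brown"])
m2 = assemble("dog", "is", None, ["brown", "big"])
print(m1 == m2)   # True: 'big brown dog' == 'brown big dog'

m3 = assemble("man", "bites", "dog", [])
m4 = assemble("dog", "bites", "man", [])
print(m3 == m4)   # False: subject/object receptors are asymmetric
```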
The final thing I want to say, and what I hope will be of more interest to folk on the boards here, is that in thinking of many of these things, I could not help but be reminded of my biological chemistry days and the concept of proteins being altered to receive different chemicals through the addition of a chemical that alters the protein's normal structure. Is it possible that my hypothesis here is the result of the very chemical nature of brain function? Are there similarities to be drawn? To the chemists on the board, I would love your input on this.
Well, I think that is enough for tonight; I am going to leave it at that. Ramen noodles await this college kid.
Jon
__________
1 For lack of better terminology, I will simply use that already developed for speaking of head-driven syntactic theory; though, I should note that I am unsure whether any of the relationships represented by the use of these terms are accurate in describing any underlying aspects of Language.
2 The → means 'is made up of', the ( ) means a constituent is optional, and the ⁿ means that as many as wanted may be used. Here 'det' means determiner, which is, for our intents and purposes, the same as an article.
3 We can say this because, if this assumption is true, all agrammatical meaningful sentences have to be mapped; the first assumption means that any agrammatical sentence not mapped could not be meaningful; thus this assumptive route requires that each and every possible agrammatical meaningful sentence/phrase exist somewhere as a corresponding sentrix/ces.

__________
Pinker, S. (1994) How language works. In S. Blum, Readings in culture and communication, making sense of language (pp. 25-35). New York: Oxford University Press.

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer

  
Jon
Inactive Member


Message 6 of 13 (546049)
02-07-2010 8:19 PM


Connectionism
quote:
Wikipedia:
The central connectionist principle is that mental phenomena can be described by interconnected networks of simple and often uniform units. The form of the connections and the units can vary from model to model. For example, units in the network could represent neurons and the connections could represent synapses.
...
The neural network branch of connectionism suggests that the study of mental activity is really the study of neural systems. This links connectionism to neuroscience, and models involve varying degrees of biological realism. Connectionist work in general need not be biologically realistic, but some neural network researchers, computational neuroscientists, try to model the biological aspects of natural neural systems very closely in so-called "neuromorphic networks". Many authors find the clear link between neural activity and cognition to be an appealing aspect of connectionism. This has been criticized[1] as being reductionism.

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer

  
Jon
Inactive Member


Message 7 of 13 (546050)
02-07-2010 8:23 PM


The Prevailing Model
I want to introduce some further evidence for my hypothesis, but before I do, I find it necessary to explain and describe a theory which has dominated linguistics for all too long: Transformational Grammar. Only after I explain the deficiencies of Transformational Grammar can I clearly explain the advantages that the hypothesis presented here may have over it. That is, up to this point much of my evidence can be quite well explained by a theory of Transformational Grammar, and through some twisting, one may also find Transformational Grammar to 'successfully' explain all the evidence I plan to present; thus, before I can continue presenting evidence, I must show why Transformational Grammar is not to be relied upon as a satisfactory explanation. I already did some of this in showing how agrammatical meaningful phrases cannot be accounted for by simple transformational rules, but this was not satisfactory evidence against Transformational Grammar, as one could merely argue that our transformational rule was not specific enough and that, by specifying extra word-order constraints, transformational rules could satisfactorily derive both the grammatical form and meaning from the agrammatical form. So, instead of showing certain proposed transformational rules to be inadequate, here I plan to show the entire system of Transformational Grammar to be faulty and in error even as a general method of describing the linguistic process.
We should find Transformational Grammar (or, Generative Grammar) to be deficient in explaining Language, especially in its relation to meaning, for several reasons. First, TG asserts the existence of two different linguistic representations within the human mind, each corresponding to a different stage of utterance: a Deep Structure and a Surface Structure. According to TG, the Deep Structure (DS) is the underlying linguistic form that all utterances begin as in the Mind, which, through the application of certain transformational rules, is transformed into the Surface Structure (SS), a form of utterance in the Mind corresponding to the spoken/perceived utterance of everyday speech. A Deep Structure phrase is never spoken nor encountered in everyday life, according to TG, and so we must infer it from careful examinations of the Surface Structure - specifically, through comparing various examples of the SS (i.e., various utterances) and from them extrapolating the DS, or starting point, in a way similar to that used by Comparative Linguistics to divine the common origins of words and properties of proto-languages. The rule that was given previously (NP → (det)AdjⁿN) is an example of a transformational rule, because it allows us to begin with our DS NP and, by laying it out according to the given instructions, in this case a defined architecture, transform it into an SS English NP. But do not take my word for it; let's look at an example. To start, we decide what constituents we want; let's pick the following:
"my" = det
"brown" = Adj
"book" = N
Next, we follow our rules to lay them out; the first part of our rule says the determiner comes first, so we set it up first:
my
Now, we place the next item after it:
my brown
And finally:
my brown book
Our rules worked! Better yet, we can alter our SS by simply replacing any one of our constituents with something else:
"her" = det
"lazy" = Adj
"husband" = N
Then we just put it through our transformations again and marvel at our output:
her lazy husband
And of course, no matter what order we list our constituents in, they will always come up in the right place; no matter what we start with, if we are given a determiner, an adjective, and a noun, our rules allow us to build a syntactically-acceptable English NP. Of course, this is because they are English rules, and each language, while composed of the same constituents, has its own rules. Spanish has, for example, the following rule (simplified):
NP → (det)N Adj₁(Adjₙ₋₁ Conj Adjₙ)
If we want to build an NP in Spanish with the identical portions of our English NP, we lay them out:1
"her" = det
"lazy" = Adj
"husband" = N
Following the rules, we place our determiner first:
her2
Then our noun:
her husband3
Then our adjective:
her husband lazy4
Syntactically, this is a perfect Spanish NP, even though it uses English words, which I did to demonstrate an important aspect of Transformational Grammar: it does not matter what we start with; so long as each constituent ends up where it must be according to the transformations, our generated phrases can be said to be syntactically (grammatically) correct. And, of course, in addition to these rules for formulating grammatically correct expressions, there are rules for optional expressions, such as questions. In theory, using TG, one should be able to reduce each language to a set of rules for generating from the DS to the SS and from the SS to the DS, such that any input in a language could be translated into the DS, have the units exchanged for those in a different language, and then translated up to the SS of that language using its rules.
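As a minimal sketch of these layout rules in Python (only the orderings discussed above are encoded; the Spanish rule is simplified, with the conjunction clause omitted):

```python
def np_english(det, adjs, noun):
    """NP -> (det) Adj^n N"""
    return " ".join(([det] if det else []) + adjs + [noun])

def np_spanish(det, adjs, noun):
    """NP -> (det) N Adj (simplified; conjunction handling omitted)"""
    return " ".join(([det] if det else []) + [noun] + adjs)

print(np_english("her", ["lazy"], "husband"))   # her lazy husband
print(np_spanish("su", ["perezoso"], "esposo")) # su esposo perezoso
```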
A universal translation mechanism - such power! And it is this power of the theory that has driven it to such acclaim, yet we should be cautious, because when it comes to the explanatory power of TG, things are not all as they seem. For starters, TG assumes morphemes to fall into categories in which their function - i.e., the bearing they have on meaning to the overall utterance - is set and defined, and which then tags along with the morpheme throughout the transformational process, assuring that it always finds its appropriate path up to the SS. If this is the case, though, then what is the purpose of the transformations in the first place? It would be quite clearly more straight-forward to merely translate the functional information that tags along with each morpheme and then just shit the whole works out of one's mouth, taking caution only to assure that the functional information stays affixed to the morphemes for which it is intended. And indeed, many languages do just that! Take the following example from German:
"nachbar" = subject/nominal
"nachbarn" = object/accusative-dative
5
If such manners of expressing grammatical information are possible, and are more closely linked to the DS representations, then what would possess a language to ever choose an alternative route - a route which requires massive numbers of transformations to 'rebuild', as it were, the Deep Structure and to extract the functional information out of an utterance? Indeed, the history of some languages even shows an exchanging of the quick and efficient system of (what I will call) direct functional translation for that of the slow and clunky indirect functional translation. In fact, you are reading such a language right now. Look at the following from Old English (Anglo-Saxon) (Millward 1996):
"eore" = subject/nominal, singular
"eorena" = genitive, plural
In Present Day English, it would probably look like this (though, of course, you will see that most of the information cannot even be expressed without the presence of an entire clause, and our first word here does not carry even half of the information carried in "eore"):
"earth"
"of earths"
Transformational rules that can generate our OE examples would look like this:
NP → (x){N → root+case}
Rules to generate our PDE examples like this:
NP → N(+case)
PP → PrepNP
Our OE rule, in fact, can generate any and all nouns in the language - in two forms, and with any of three grammatical relations to the sentence - as the following declension shows:
[i]"eor" + "e" = "eore"     | NP → (x){N → root+case}
"eor" + "an" = "eoran"   | NP → (x){N → root+case}
"eor" + "an" = "eoran"   | NP → (x){N → root+case}
"eor" + "an" = "eoran"   | NP → (x){N → root+case}
"eor" + "an" = "eoran"   | NP → (x){N → root+case}
"eor" + "ena" = "eorena" | NP → (x){N → root+case}
"eor" + "um" = "eorum"   | NP → (x){N → root+case}[/i]
All of these are generatable with only NP → (x){N → root+case}. How many phrases are generatable with our PDE rules? Well, it will not be as straight forward, because to get some of the phrases that are needed to express the same meaning, we need to nest and reorder certain of the rules, but it will be something like this:
[i]"earth"     | NP → N[/i]6[i]
"of earth"  | PP → Prep{NP → N}
"to earth"  | PP → Prep{NP → N}
"earths"    | NP → N+case[/i]6[i]
"of earths" | PP → Prep{NP → N+case}
"to earths" | PP → Prep{NP → N+case}[/i]
And this is just a simple couple of phrases. The number of actual generations needed to completely convey the OE information is quite large, as we technically require Verb Phrases (VP) in all of these cases to properly indicate their grammatical roles (see note 6). What should be clear from all this is not only that OE had fewer transformational rules than PDE, but also that it could derive semantic and grammatical functions through the application of a far smaller number of rules, i.e., it used fewer steps in the generation process. What all of this means, of course, is that in the discovery and documentation of the world's languages, and the understanding of all the transformational rules involved in the creation of their utterances, a pattern should emerge in which speakers of languages with fewer rules require less time to produce utterances than speakers of languages with more rules. Unfortunately, evidence regarding production times for sentences in various languages is limited to absent, and so at present we can only conjecture; still, this necessary implication seems suspect. However, we may make a simple statement about the evolution of a language such as English: Transformational Grammar requires us to accept that languages may give up straightforward methods of linguistic construction in favor of round-about, indirect, large, and clunky methods in the form of additional rules. If this is truly what happens when a language changes as English did, it seems absurd that so many languages would opt for such a system (e.g., English, German, Spanish, French, etc., just to name Indo-European languages). Transformational Grammar offers no good reason for us to see why languages would choose indirect functional translation over direct functional translation, and in being unable to explain the necessary implications of itself, TG is incomplete and inadequate.
It is clear that a better model, then, is needed, and that TG will simply not suffice, and this shall be the topic of posts to follow. Here the primary purpose, of course, was to lay out some of the inadequacies of the TG theory, and I think I have accomplished that goal. In my next post I plan to go into detail on some of the aspects of the model introduced in this thread and examine how they work to explain human Language.
Jon
__________
1 In Spanish: "su" = det, "perezoso" = Adj, "esposo" = N
2 Sp. su
3 Sp. su esposo
4 Sp. su esposo perezoso
5 En. 'neighbor'. While it is true that such examples are rather rare in German, they are numerous in other languages. Somali, for example, encodes the information from the determiner as a morpheme attached to the noun; for example (Saeed 1999, p. 174):
"gedka" ~ "the tree"
"gedkn" ~ "this tree"
6 As was mentioned already, however, this phrase is not as fully informational in PDE. To make it really as informative as the OE equivalent, we would have to place it in a sentence, as only its placement in one could fully determine its function. Thus a rule that actually could generate both the nominative (subject) and accusative (object) information would have to incorporate other elements; at minimum, such a rule would look something like:
S → NPVP
VP → V(NP)
These rules would be additional to the two rules we already need. However, I have decided not to emphasize this point in the body mainly because it is not necessary to do so - the fact that PDE uses more transformational rules than OE should be clear enough.

__________
Levinson, S. C. (2003) Language and mind; let's get the issues straight! In S. Blum, Readings in culture and communication, making sense of language (pp. 95-105). New York: Oxford University Press.
Millward, C. M. (1996) A biography of the English language. 2nd ed. Massachusetts: Thomson Wadsworth.
Saeed, J. (1999). Somali. Pennsylvania: John Benjamins Publishing Company.

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer

Replies to this message:
 Message 9 by frako, posted 09-10-2010 7:12 AM Jon has replied

  
nwr
Member
Posts: 6412
From: Geneva, Illinois
Joined: 08-08-2005
Member Rating: 4.5


Message 8 of 13 (580557)
09-10-2010 12:30 AM
Reply to: Message 1 by Jon
02-01-2010 11:39 PM


Re: ... and where are they?
I didn't originally comment on this. I am coming back to it after a reference from Cognitive Predictionism.
Jon writes:
With this model, we can explain the meaning behind every word as a relationship with other words; indeed, a very concrete example of this notion is the Dictionary - each word is defined using others; there is no word in the language that cannot be described with other words in the language.
I think that is subject to the problems raised by Putnam's "cats and cherries" argument. And I suspect that you are depending on an associative learning scheme that might be subject to Quine's skeptical "gavagai" argument.

This message is a reply to:
 Message 1 by Jon, posted 02-01-2010 11:39 PM Jon has replied

Replies to this message:
 Message 12 by Jon, posted 09-10-2010 9:30 AM nwr has seen this message but not replied

  
frako
Member (Idle past 333 days)
Posts: 2932
From: slovenija
Joined: 09-04-2010


Message 9 of 13 (580580)
09-10-2010 7:12 AM
Reply to: Message 7 by Jon
02-07-2010 8:23 PM


Re: The Prevailing Model
The only problem I see is when you use this to translate from a language that has singular and plural to a language that has singular, dual, and plural.
Example:
"my lazy sisters" in Slovenian is "moje lene sestre".
All in plural, all is well and OK. But if you try to translate
"moji leni sestri" (dual) to English, you can say "my lazy sisters", but it's not equal, since it implies anything from 2 to infinity of lazy sisters;
or you can translate it "moji leni sestri" = "my 2 lazy sisters", adding the 2.
Hope this helps in any way, cause I got lost somewhere in the middle of your post.

This message is a reply to:
 Message 7 by Jon, posted 02-07-2010 8:23 PM Jon has replied

Replies to this message:
 Message 10 by Jon, posted 09-10-2010 9:14 AM frako has not replied

  
Jon
Inactive Member


Message 10 of 13 (580608)
09-10-2010 9:14 AM
Reply to: Message 9 by frako
09-10-2010 7:12 AM


Re: The Prevailing Model
The only problem I see is when you use this to translate from a language that has singular and plural to a language that has singular, dual, and plural.
Example:
"my lazy sisters" in Slovenian is "moje lene sestre".
All in plural, all is well and OK. But if you try to translate
"moji leni sestri" (dual) to English, you can say "my lazy sisters", but it's not equal, since it implies anything from 2 to infinity of lazy sisters;
or you can translate it "moji leni sestri" = "my 2 lazy sisters", adding the 2.
Hope this helps in any way, cause I got lost somewhere in the middle of your post.
I am not sure I see how this is a problem. The theory easily accounts for translation as a matching of sentrices obtained from the input featrices of one language to output featrices from a different language. It may be that fully multilingual individuals have identical Concepts stored into different sentrices, but whether or not this is the case, the theory can still easily explain translation, and it does so in a similar way to all theories on language (except the Chomskyan ones, of course, which throw meaning out the window) by asserting that the meaning of something in one language is given a new form with as close-as-possible a meaning in another language.
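As a minimal sketch of that matching account of translation, in Python (the lexicons and sense labels are hypothetical):

```python
# Input featrices map to sentrices, which then select the closest output
# featrices in the target language. Number marking that the target language
# lacks (e.g. Slovenian dual -> English plural) is simply lost, or must be
# added as an extra word.

english = {"sisters": {"sibling", "female", "plural"}}
slovenian = {"sestri": {"sibling", "female", "dual"},
             "sestre": {"sibling", "female", "plural"}}

def translate(word, src, tgt):
    sentrix = src[word]                  # featrix -> sentrix
    # pick the target featrix whose sense set overlaps the most
    return max(tgt, key=lambda w: len(tgt[w] & sentrix))

print(translate("sestri", slovenian, english))  # sisters: dual sense is lost
```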
Jon

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer
"Dim bulbs save on energy..." - jar

This message is a reply to:
 Message 9 by frako, posted 09-10-2010 7:12 AM frako has not replied

  
Bolder-dash
Member (Idle past 3658 days)
Posts: 983
From: China
Joined: 11-14-2009


Message 11 of 13 (580609)
09-10-2010 9:21 AM
Reply to: Message 2 by Phat
02-02-2010 12:39 AM


Re: Word Associations
I am always fascinated by these kinds of mental interruptions. I have a friend who suffers from extremely mild brain seizures, as he describes them. They manifest only as an inability to name an object at certain times when he is experiencing the intrusion. You wouldn't know anything was wrong, but he draws blanks on calling something by its name at times. For instance, if you point at the TV and ask him what it is, he will say, "I know what that is, it's the thing that you watch; I can't tell you the name right now." And then one minute later he can tell you exactly. He always knows when this is happening, and he can tell you it is happening.
Just a side note about your friend: perhaps when he tells you that he knows what he wants to say but can't, he is actually trying to say that he has no idea what he is trying to say and it just comes out of his mouth wrong. Ha, oh well, it's kind of funny in an "oh, life enjoys playing mean tricks on us all" kind of way.

This message is a reply to:
 Message 2 by Phat, posted 02-02-2010 12:39 AM Phat has not replied

  
Jon
Inactive Member


Message 12 of 13 (580610)
09-10-2010 9:30 AM
Reply to: Message 8 by nwr
09-10-2010 12:30 AM


Re: ... and where are they?
I think that is subject to the problems raised by Putnam's "cats and cherries" argument. And I suspect that you are depending on an associative learning scheme that might be subject to Quine's skeptical "gavagai" argument.
Quine is correct to regard language as generally vague and indeterminate. However, we are not prohibited from understanding linguistic expressions, which shows that language is capable of specifying and determining meaning at least to a degree sufficient for communication. As one learns a language, especially if it is their L1, they progressively refine the associations between a linguistic expression and a related Concept, though not, to be sure, in an error-free manner. Earlier in the thread I stated the following:
quote:
Jon in Message 4:
For example, when a child learns the word (featrix?) \doggie\ and then applies it to all things that are animals, what can we say of their featrix-sentrix correspondences compared to those of adults? Are their sentrices deficient, or have they merely not built detailed-enough featrices, or are they, like Phat's friend, simply at a loss for a proper match and so pick something close? Might it be that there are different degrees of nodes, some more 'prominent' than others, such that Concept dog and Concept cow both have identical sentrices in regards their major nodes, such that, the major nodes of dog being all s/he has mapped into a featrix \doggie\, the child merely matches these alone to an incoming sentrix when searching for a corresponding featrix, so that anything with an identical major-node mapping of the sentrix dog will get called a \doggie\ until the minor nodes become mapped into the featrix? And, when children make these errors, can we regard them as insights into the major-minor node relations within and between sentrices?
It is only after extended exposure to the world and linguistic input that one is capable of making and understanding statements that differentiate clearly between cows and dogs... or cats and cherries. Nobody learns a language overnight; the associations require extensive brain activity to be correctly mapped with enough distinction and detail to make the associations useful for communication.
Jon

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer
"Dim bulbs save on energy..." - jar

This message is a reply to:
 Message 8 by nwr, posted 09-10-2010 12:30 AM nwr has seen this message but not replied

  
Jon
Inactive Member


Message 13 of 13 (584355)
10-01-2010 12:53 PM


Owning the Article
I am unsure what degree of impact these observations may have on the proposed theory, but, finding no other clearly explicable cause for them, I figured I would post them here for discussion and consideration.
It is a common thing that a noun holds power over its associated articles, whose presence, absence, singularity, or plurality all depend on the sense intended within the noun (the sense being so endowed in the noun, no doubt, by the faculty of the speaker to so intend). It is less common, however, to consider the role of the adjective in making decisions on possible associated articles. An example, though, makes it clear that adjectives, in English at least, possess this power:
Men are X.
Bigger men are X.
The biggest men are X.
It is clear here that it is not the noun 'men' triggering our use of the article 'the' in the third statement, but rather the adjective, which, being in the superlative, requires use of the definite article. I figured this may be interesting in that it calls into question, in my judgement, the view of the noun's authority over its articles. If articles are not bound to their nouns, then this casts doubt on the accuracy of the current classificatory system, which sets articles as a subset of noun phrases.
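As a toy illustration in Python (a sketch of the observation only, not a claim about English grammar as a whole), the article can be computed from the adjective alone:

```python
def np(adjective, noun):
    # The superlative adjective, not the noun, forces the definite article.
    det = "the " if adjective and adjective.endswith("est") else ""
    return f"{det}{adjective + ' ' if adjective else ''}{noun}"

print(np(None, "men"))       # men (are X)
print(np("bigger", "men"))   # bigger men (are X)
print(np("biggest", "men"))  # the biggest men (are X)
```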
Who owns the article, and what does this say about mental linguistic processes?
Jon

"Can we say the chair on the cat, for example? Or the basket in the person? No, we can't..." - Harriet J. Ottenheimer
"Dim bulbs save on energy..." - jar

  