In the course of my undergraduate career, I have been asked about my major literally hundreds of times, and each time my response generates surprise mingled with a little bit of confusion. The average person is unlikely to recognize the connection between Neuroscience and Linguistics, but from the very beginning, I’ve been fascinated with how language works, and an integral part of understanding that is deciphering how language works in the human brain.
Throughout my undergraduate research career in cognitive neuroscience, I have been attempting to synthesize information in the connectionist model of reading with a more general theory of cognitive semantics. Current models of reading suggest that written words are comparable to other objects in the external world, and perhaps the human brain learns to deal with them in similar ways. For example, Frith proposes several stages through which learners must progress on their way to reading proficiency. In what she terms the logographic stage, word processing has not yet become specialized, and individual words are represented as objects associated with their particular global features, meaning that there will be a high degree of inaccuracy if the font or pattern is altered. This means that a child may not recognize the word bat if it is written as BAT. In my eyes, this is quite similar to the concept of underextension: Children learning how to use their language in relation to objects in the real world often fail to recognize certain objects as belonging to the same class. In the beginning, learning how to read is very much like learning how to name objects appropriately: You need to know that a capital B is just as much a b as a lowercase b, just as you need to know that a poodle is just as much a dog as a cocker spaniel. Knowing the former is crucial to your understanding of the word form presented, and knowing the latter is crucial to your understanding of the concept of dog.
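To make this concrete, here is a tiny Python sketch of the idea (the reader classes and the case-folding trick are just my own toy illustration, not part of Frith's model): a purely "logographic" learner stores whole word forms as opaque visual patterns, so changing the case breaks recognition, much like underextension in early object naming.

```python
# A toy illustration (mine, not Frith's model): a "logographic" reader stores
# whole word forms as opaque visual patterns, so a change in case breaks
# recognition -- much like underextension in early object naming.

class LogographicReader:
    """Recognizes only the exact surface forms it has already seen."""
    def __init__(self):
        self.known_forms = set()

    def learn(self, form):
        self.known_forms.add(form)

    def recognizes(self, form):
        return form in self.known_forms


class AlphabeticReader:
    """Abstracts over surface detail (here, just letter case)."""
    def __init__(self):
        self.known_words = set()

    def learn(self, form):
        self.known_words.add(form.lower())

    def recognizes(self, form):
        return form.lower() in self.known_words


child, older_child = LogographicReader(), AlphabeticReader()
child.learn("bat")
older_child.learn("bat")

print(child.recognizes("BAT"))        # False: the global visual pattern changed
print(older_child.recognizes("BAT"))  # True: B and b count as the same letter
```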
Briefly, the connectionist model of processing written (or spoken) language argues that there is a unified neural network that synthesizes information from phonology, syntax, semantics, and so on using statistical relationships. The more frequently we encounter a word, concept, or pattern of letters, the stronger the connections associated with it become for the next set of computations. Though this may seem like a unique way of dealing with language processing, many other mental processes are theorized to work in a similar way, such as the object recognition in the example above. I propose going further with this idea in the future, because I believe object recognition is also intimately related to one’s internal language processing; indeed, the two processes may be so tightly intertwined that we cannot wholly separate them at this point. I also believe that the evolution of metaphor can be interpreted in a similar way. At first, a metaphor must be explained, and explained repeatedly, because the concepts that connect the two usages are not yet well defined, in reality or in the human mind. Once the human brain begins to associate the concepts, a connection is strengthened. It is further strengthened by repeated use of the metaphor, perhaps so much so that the original associations that sustained the metaphor are no longer needed. In this case, we may have a dead metaphor whose connection to the original usage is no longer apparent. (This is not to say that these processes are conscious; our brains manipulate this information constantly and quite subtly.)
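A rough way to picture this statistical strengthening is the following Python sketch (a deliberately oversimplified toy of my own, not an actual connectionist network): every co-occurrence of a word and a meaning nudges their connection weight upward, and retrieval simply favors the strongest connection, which is also how a frequently used metaphorical sense could come to stand on its own.

```python
# A minimal sketch (my own toy, not a full connectionist network): each time a
# word and a meaning are used together, the connection between them is nudged
# a little stronger, so frequent pairings come to dominate retrieval.

from collections import defaultdict

weights = defaultdict(float)      # (word, meaning) -> connection strength
LEARNING_RATE = 0.1

def encounter(word, meaning):
    """Co-activation strengthens the connection between word and meaning."""
    weights[(word, meaning)] += LEARNING_RATE

def strongest_meaning(word):
    """Retrieval favors whichever meaning is most strongly connected."""
    candidates = {m: s for (w, m), s in weights.items() if w == word}
    return max(candidates, key=candidates.get) if candidates else None

# The literal sense is met occasionally, the metaphorical sense constantly,
# so the once-novel metaphor eventually wins retrieval on its own.
for _ in range(5):
    encounter("grasp", "hold physically")
for _ in range(50):
    encounter("grasp", "understand")

print(strongest_meaning("grasp"))   # understand
```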
Inhibition and priming can also be related to the realm of language, though these are most often considered physiological processes that apply to other types of experience. When I understand the concept of go, I am connecting it to motion of some sort, and my brain starts to file through the different types of motion the word can indicate. Trying to define the meaning of the word without its context is rather difficult. However, when we place the word go in a sentence, the other words in the sentence will cause a particular set of neurons to fire, and these activated neurons will activate others that are connected to particular meanings of the word go and silence neurons that are connected to conflicting meanings. In this way, I can come to the conclusion that the meanings of go in I am going crazy and I am going to the store are not the same. However, when I am presented with the word go on its own and am asked to define it, what will I say? I will most likely give a response consistent with the most frequently encountered interpretation of the concept, because those would be the strongest connections in my brain. This strengthening of connections is actually a very general, experimentally observed phenomenon known as long-term potentiation (LTP), and it appears in all types of learning scenarios. But I believe that priming can override the more frequently encountered interpretation. If I have just finished talking about going crazy, and someone asks me what the word go means, the most recent example will most likely be the one I use to tailor my definition. Those will be the strongest connections in my mind AT THAT MOMENT because they have just been activated, and competing interpretations may still be recovering from some type of inhibition.
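Here is one way to sketch that interplay in Python (again a toy of my own devising, with made-up numbers rather than anything physiological): each sense of go has a baseline strength standing in for how often it has been encountered, context words excite the compatible sense and inhibit its rival, and the sense that just won keeps a temporary priming boost while the loser is knocked back down.

```python
# A toy sketch of priming and inhibition (my own simplification, not a neural
# simulation): baseline strength stands in for past frequency, context cues
# excite a compatible sense and inhibit its rival, and the winning sense keeps
# a temporary priming boost while the loser is suppressed.

SENSES = {
    "motion": {"baseline": 1.0, "cues": {"to", "store", "walk"}},
    "become": {"baseline": 0.6, "cues": {"crazy", "bad", "quiet"}},
}
EXCITE, INHIBIT, PRIME = 1.0, 0.5, 0.7
priming = {name: 0.0 for name in SENSES}

def interpret(context_words):
    """Pick the sense of 'go' favored by baseline frequency, context, and priming."""
    context = set(context_words)
    scores = {}
    for name, sense in SENSES.items():
        support = len(sense["cues"] & context)
        rival_support = sum(len(other["cues"] & context)
                            for other_name, other in SENSES.items()
                            if other_name != name)
        scores[name] = (sense["baseline"] + priming[name]
                        + EXCITE * support - INHIBIT * rival_support)
    winner = max(scores, key=scores.get)
    for name in priming:                 # winner stays primed, losers are inhibited
        priming[name] = PRIME if name == winner else 0.0
    return winner

print(interpret("i am going to the store".split()))  # motion
print(interpret("i am going crazy".split()))         # become
print(interpret([]))  # no context: the just-primed sense beats the more frequent one
```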
From the examples I have given above, it’s probably pretty apparent that I firmly believe in a more cognitive theory of linguistics (and of pretty much any discipline that can be connected to mental processes). However, I do believe that formal disciplines offer great ways of modeling some of the more complex concepts. Drawing a syntax tree, for example, may give us an idea of just one little piece of the information our brains use to process language, but it is not an entirely accurate description of all the nuances of that processing. Chemical formulae are used as shorthand for much more complicated (and much less neat) chemical processes that occur in the real world, and perhaps one can think of formal semantics as adopting a kind of shorthand to represent an infinitely complex arrangement of neural processes that the realm of cognitive semantics is beginning to tease apart.
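As a small illustration of what I mean by shorthand, here is one of the go sentences from above written as a bracketed syntax tree in Python (the labels follow a generic phrase-structure convention of my own choosing, not any particular theory): the tree records constituent structure neatly, but it says nothing about the graded, context-sensitive processing I described earlier.

```python
# One of the sentences above as a bracketed syntax tree, written as nested
# tuples (labels are a generic phrase-structure convention, my own choice).
# The tree captures constituent structure, not the graded processing behind it.

tree = ("S",
        ("NP", "I"),
        ("VP", ("Aux", "am"),
               ("V", "going"),
               ("PP", ("P", "to"),
                      ("NP", ("Det", "the"), ("N", "store")))))

def leaves(node):
    """Read the sentence back off the tree's leaf nodes."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))   # I am going to the store
```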
"The average person is unlikely to recognize the connection between Neuroscience and Linguistics [...]" What?! How do they think humans learn to speak, or better yet, how language evolved over time?
Your comments make sense to me and I've never taken Neuroscience or (theoretical) Linguistics. Then again, I am an engineer ... ;-)
~B.
you're also amazingly intelligent. :)
this is the kind of stuff i'm really, really interested in studying, if i choose to pursue an academic lifestyle.
wonder what would have happened if *I* had decided to become an engineer lol