The Other Side of Noam Chomsky's Brilliant Mind
The following is an excerpt from Power Systems: Conversations on Global Democratic Uprisings and the New Challenges to U.S. Empire by Noam Chomsky, interviews with David Barsamian, published by Metropolitan Books (2013).
It’s been more than five decades since you first wrote about universal grammar, the idea of an inborn capacity in every human brain that allows a child to learn language. What are some of the more recent developments in the field?
Well, that gets technical, but there’s very exciting work going on refining the proposed principles of universal grammar. The concept is widely misunderstood in the media and in public discussions: universal grammar is not a set of universal observations about language. There are interesting generalizations about language that are worth studying, but universal grammar is something different: the study of the genetic basis for language, the genetic basis of the language faculty. There can’t be any serious doubt that something like that exists. Otherwise an infant couldn’t reflexively acquire language from whatever complex data is around. So that’s not controversial. The only question is what the genetic basis of the language faculty is.
Here there are some things that we can be pretty confident about. For one thing, it doesn’t appear that there’s any detectable variation among humans. They all seem to have the same capacity. There are individual differences, as there are with everything, but no real group differences—except maybe way at the margins. So that means, for example, if an infant from a Papua New Guinea tribe that hasn’t had contact with other humans for thirty thousand years comes to Boulder, Colorado, it will speak like any kid in Colorado, because all children have the same language capacity. And the converse is true. This is distinctly human. There is nothing remotely like it among other organisms. What explains this?
Well, if you go back fifty years, the proposals that were made when this topic came on the agenda were quite complex. In order just to account for the descriptive facts that you saw in many different languages, it seemed necessary to assume that universal grammar permitted highly intricate mechanisms, varying a lot from language to language, because languages looked very different from one another.
Over the past fifty to sixty years, one of the most significant developments, I think, is a steady move, continuing today, toward trying to reduce and refine the assumptions so that they maintain or even expand their explanatory power for particular languages but become more feasible with regard to other conditions that the answer must meet.
Whatever it is in our brain that generates language developed quite recently in evolutionary time, presumably within the last one hundred thousand years. Something very significant happened, which is presumably the source of human creative endeavor in a wide range of fields: creative arts, tool making, complex social structures. Paleoanthropologists sometimes call it “the great leap forward.” It’s generally assumed, plausibly, that this change had to do with the emergence of language, for which there’s no real evidence earlier in the human record or in any other species. Whatever happened had to be pretty simple, because that’s a very short time span for evolutionary changes to take place.
The goal of the study of universal grammar is to try to show that there is indeed something quite simple that can meet these various conditions. A plausible theory has to account for the variety of languages and the detail that you see in the surface study of languages—and, at the same time, be simple enough to explain how language could have emerged very quickly, through some small mutation of the brain, or something like that. There has been a lot of progress toward that goal and, in a parallel effort, to try to account for the apparent variability of languages by showing that, in fact, the perceived differences are superficial. The seeming variability has to do with minor changes in a few of the structural principles that are fixed.
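One way to picture “minor changes in a few of the structural principles that are fixed” is the textbook head-direction parameter from the principles-and-parameters tradition: one binary setting that flips the order of a phrase’s head and its complement. The sketch below is a toy under that assumption, with an invented mini-lexicon; it is not the grammar of any actual language.

```python
# Toy illustration of a single binary "head-direction" parameter: the same
# fixed phrase-building rule yields head-initial (English-like) or
# head-final (Japanese-like) order depending on one setting.
# Words and rules are invented for exposition.

def build_phrase(head, complement, head_initial=True):
    """Combine a head with its complement; only linear order varies."""
    return f"{head} {complement}" if head_initial else f"{complement} {head}"

# Verb phrase "read the book": English is head-initial;
# Japanese ("hon o yomu," roughly book-ACC read) is head-final.
print(build_phrase("read", "the book", head_initial=True))   # read the book
print(build_phrase("yomu", "hon o", head_initial=False))     # hon o yomu
```

On this picture, the combinatorial rule is fixed across languages; what a child acquires is the handful of settings.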
Discoveries in biology have encouraged this line of thinking. If you go back to the late 1970s, François Jacob argued that it could well turn out—and probably is true— that the differences between species, let’s say an elephant and a fly, could be traceable to minor changes in the regulatory circuits of the genetic system, the genes that determine what other genes do in particular places. He shared the Nobel Prize for early work on this topic.
It looks like something similar may be true of language. There’s now work on an extraordinarily broad range of typologically different languages—and, more and more, it looks like that. There’s plenty of work to do, but a lot of this research falls into place in ways that were unimaginable thirty or forty years ago.
In biology it was plausible quite recently to claim that organisms can vary virtually without limit and that each one has to be studied on its own. Nowadays that has changed so radically that serious biologists propose that there’s basically one multicellular animal—the “universal genome”—and that the genomes of all the multicellular animals that have developed since the Cambrian explosion half a billion years ago are just modifications of a single pattern. This thesis hasn’t been proven, but it is taken seriously.
Something similar is going on, I think, in the study of language. Actually, I should make it clear that this is a minority view, if you count noses. Most of the work on language doesn’t even comprehend these developments or take them seriously.
Is the acquisition of language biological?
I don’t see how anyone could doubt that. Just consider a newborn infant. The newborn is barraged by all kinds of stimuli, what William James famously called “one great blooming, buzzing confusion.”1 If you put, say, a chimpanzee or a kitten or a songbird in that environment, it can only pick out what’s related to its own genetic capacities. A songbird will pick out a melody of its species or something from all this mass because it’s designed to do that, but it can’t pick out anything that’s relevant to human language. On the other hand, an infant does. The infant instantly picks language-related data out of this mass of confusion. In fact, we now know that this goes on even in the uterus. Newborn infants can detect properties of their mother’s language as distinct from certain—not all, but certain—other languages.
And then comes a very steady progression of acquisition of complex knowledge, most of it completely reflexive. Teaching doesn’t make any difference. An infant is just picking it out of the environment. And it happens very fast, in a very regular fashion. A lot is known about this process. By about six months, the infant has already analyzed what’s called the prosodic structure of the language—stress, pitch, ways in which languages differ—and has sort of picked out the language of its mother and its peers, whatever it hears around it. By about nine months, roughly, the child has picked out the relevant sound structure of the language. So when we listen to Japanese speakers speaking English, we notice that, from our point of view, they confuse “r” and “l,” meaning they don’t know the distinction. That’s already fixed in an infant’s mind by less than a year old.
Words are learned very early, and, if you look at the meaning of a word with any care, it’s extremely intricate. But children pick up words often after only one exposure, which means the structure has got to be in the mind already. Something is being tagged with a particular sound. By, say, two years, there’s pretty good evidence that the children have mastered the rudiments of the language. They may just produce one-word or two-word sentences, but there’s now experimental and other evidence that a lot more is in there. By three or four, a normal child will have extensive language capacity.
Either this is a miracle or it’s biologically driven. There are just no other choices. There are attempts to claim that language acquisition is a matter of pattern recognition or memorization, but even a superficial look at those proposals shows that they collapse very quickly. It doesn’t mean that they’re not being pursued. In fact, those lines of inquiry are very popular. In my view, though, they’re just an utter waste of time.
There are some very strange ideas out there. For instance, a lot of quite fashionable work claims that children acquire language because humans have the capacity to understand the perspective of another person, according to what’s called theory of mind. The capacity to tell that another person is intending to do something develops in normal children at roughly age three or four. But, in fact, if you look at the autism spectrum, one of the classic syndromes is failure to develop theory of mind. That’s why autistic kids, or adults for that matter, don’t seem to understand what other people’s intentions are. Nevertheless, their language can be absolutely perfect. Furthermore, this capacity to understand the intention of others develops long after the child has mastered almost all the basic character of the language, maybe all of it. So that can’t be the explanation.
There are other proposals which also just can’t be true, but are still pursued very actively. You read about them in the press, just as you read things about other organisms having language capacity. There’s a lot of mythology about language, which is very popular. I really don’t want to sound too dismissive, but I feel dismissive. I think these ideas can’t be taken seriously.
Whatever our language faculty is, humans develop it very quickly, on very little data. In some domains, like the meaning of expressions, there’s virtually no data. Nevertheless it’s picked up very quickly and very precisely, in complex ways. Even with sound structure, where there’s a lot of data—there are sounds around, you hear them—it’s still a regular process and it’s distinctively human. Which is striking, because it’s now known that the auditory systems of higher apes, say chimpanzees, appear to be very similar to the human auditory system, even picking out the kinds of sounds that play a distinctive role in human language. Nevertheless, it’s just noise for the ape—they can’t do anything with it. They don’t have the analytical capacities, whatever they are.
What’s the biological basis for these human capacities? That’s a very difficult problem. We know a lot, for example, about the human visual system, partly through experimentation. At the neural level, we know about it primarily from invasive experiments with other species. If you conduct invasive experiments on other mammals, cats or monkeys, you can find the actual neurons in the visual system that are responding to a light moving in a certain direction. But you can’t do that with language. There is no comparative evidence, because other species don’t have the capacity and you can’t do invasive experiments with humans. Therefore, you have to find much more complex and sophisticated ways to try to tease out some evidence about how the brain is handling all this. There’s been some progress in this extremely difficult problem, but it’s very far from yielding the kind of information you could get from experimentation.
If you could experiment with humans, say, isolating a child and controlling carefully the data presented to it, you could learn quite a lot about language. But obviously you can’t do that. The closest we’ve come is looking at children with sensory deprivation, blind children, for example. What you find is pretty amazing. For example, a very careful study of the language of the blind found that the blind understand the visual words look, see, glare, gaze, and so on quite precisely, even though they have zero visual experience. That’s astonishing. The most extreme case is actually material that my wife, Carol, worked on, adults who were both deaf and blind. There are techniques for teaching language to the deaf-blind. Actually, Helen Keller, who is the most famous case, invented them for herself. It involves putting your hand on somebody’s face, with your fingers on the cheeks and thumb on the vocal cords. You get some data from that, which is extremely limited. But that’s the data available to the deaf-blind, and they have pretty remarkable language capacity. Helen Keller was incredible, a great writer, very lucid. She’s an extreme case.
Carol did a study here at MIT. She found in working with people with sensory deprivation that they achieved pretty remarkable language capacity. You have to do quite subtle experiments to find things they don’t know. In fact, they managed to get along by themselves. The primary subject, the one most advanced, was a man who was a tool and die maker, I think. He worked in a factory somewhere in the Midwest. He lived with his wife, who was also deaf-blind, but they found ways to communicate with buzzers in the house and things that you could touch that vibrated. He was able to get from his house to Boston for the experiments by himself. He carried a little card which said on it, “I am deaf-blind. May I put my hand on your face?” so, if he got lost, if somebody would let him do that, he could communicate with them. And he lived a pretty normal life.
One very striking fact was that all of the cases that succeeded were people who had lost their sight and hearing at about eighteen months old or older—it was primarily through spinal meningitis in those days. People who were younger than that when they became deaf-blind never learned language. There weren’t enough cases to actually prove anything, so the results of the study were never published, but this was a pretty general result. Helen Keller fits. She was twenty months old when she lost her sight and hearing. It suggests, at least, that by eighteen or twenty months, a tremendous amount of language is already known. It can’t be exhibited but it’s in there somewhere, and can possibly be teased out later.
It’s known that the ability to acquire language starts decreasing rather sharply by about the mid-teens.
That’s descriptively correct, although, again, it’s not 100 percent correct. There is individual variation. There are individuals who can pick up a language virtually natively at a much later age. Actually, one of them was in our department. Kenneth Hale, one of the great modern linguists, could learn a language like a baby. We used to tease him that he just never matured.
That’s an exception?
Yes. By and large, what you said is true. The basis is not really known, but there are some thoughts about it. One thing we know is that, from the very beginning, brain development entails losing capacities. Your brain is originally set up so that it can acquire anything that a human can acquire. In the case of language, say, it’s set up so that you can acquire Japanese, Bantu, Mohawk, English, whatever. Over time that declines. In some cases, it declines even after a few months of age. What’s happening across all cognitive capacities, not only in the case of language, is that synaptic connections, connections inside the brain, are being lost. The brain is being simplified, it’s being refined. Certain things are becoming more effective, other things are just gone. There’s apparently a lot of synaptic loss around the period of puberty or shortly beforehand, and that could be relevant.
I attended one of your seminars in linguistics here at MIT a few years ago, and I was struck by a couple of things. First of all, I was one of the few non-Asians in your class. It was mostly South Asians and East Asians. But the other thing was the extent to which math was involved. You were constantly writing formulas on the blackboard.
We should be clear about that. It’s not deep mathematics. It’s not like proving hard theorems in algebraic topology or something. But there’s good reason why some sophistication in mathematics is at least advantageous, maybe necessary, for advanced work. The basic reason is that language is a computational system. So whatever else it is, the capacity we’re both using and sharing is based on a computational procedure that forms an infinite array of hierarchically structured expressions.
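To make “a computational procedure that forms an infinite array of hierarchically structured expressions” concrete, here is a minimal sketch in the spirit of a binary Merge-style operation. It is an expository toy under simplifying assumptions of my own (tuples standing in for syntactic objects, a bracket printer invented here), not Chomsky’s formalism.

```python
# A toy Merge-style operation: a binary operation combining two syntactic
# objects into a new one, yielding hierarchically structured expressions
# of unbounded depth. Expository only, not a linguistic formalism.

def merge(x, y):
    """Combine two syntactic objects into a new hierarchical object."""
    return (x, y)

def render(obj):
    """Bracket the nested structure to expose the hierarchy."""
    if isinstance(obj, tuple):
        return "[" + " ".join(render(part) for part in obj) + "]"
    return obj

# Repeated embedding shows the array of expressions is unbounded:
# "the claim that (the claim that (it rains) is false) is false," and so on.
expression = "it rains"
for _ in range(3):
    expression = merge(merge("the claim that", expression), "is false")

print(render(expression))
```

The only point of the toy is that a single recursive combinatorial operation already generates an unbounded, hierarchically nested set of expressions, which is why some mathematical sophistication is useful in studying it.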
A lot of people conflate linguistics with the ability to speak many languages. So in your case, people think, Oh, Chomsky, he must know ten or twelve languages. But in fact linguistics is another universe. Explain why the study of language is important. Clearly, you’re animated by it. You’ve devoted most of your life to it.
I should say, sometimes there’s a distinction made between languist and linguist. A languist is somebody who can speak a lot of languages. A linguist is somebody who is interested in the nature of language.
Why is it interesting? Think about the picture that I presented before, which I think is fairly uncontroversial. At some time in the very recent past, from an evolutionary point of view, something quite dramatic happened in the human lineage. Humans developed what we now have: a very wide range of creative capacities that are unknown in the previous record or among other animals. There is no analogue to them. That’s the core of human cognitive, moral, aesthetic nature—and right at the heart of it was the emergence of language.
In fact, it’s very likely that language was the lever by which the other capacities developed; they may just be piggybacking off language. It’s possible that our arithmetical capacities and—quite likely—our moral capacities developed in a comparable way, maybe drawing from the analytical, computational mechanisms that yield language in all of its rich complexity. To the extent that we understand these other things, which is not very much, it seems that they’re using the same or similar computational mechanisms.
Clearly, culture influences and shapes language, even if it doesn’t determine it.
That’s a common comment, but it’s almost meaningless. What’s culture? Culture is just a general term for everything that goes on. Yes, sure, everything that goes on influences language.
If we’re, let’s say, in a violent environment, doesn’t that shape the vocabulary? Wouldn’t that lead us to talk about “epicenter” and “Ground Zero” and “terrorism” and other terms in the lexicon of violence?
Sure, there’s an effect on lexical choices. But that’s peripheral to language. You could take any language that exists and add those concepts to it—a fairly trivial matter. Beyond lexical choices, we don’t really know anything about the effects of culture on language, and in my view it’s unlikely that cultural environments meaningfully affect its nature. Take, say, English, and trace it back to earlier periods. English was different in Chaucer’s time or King Arthur’s time, but the language hasn’t fundamentally changed; the vocabulary has. Not long ago Japan was a feudal society, and now it’s a modern technological society. The Japanese language has changed, of course, but not in ways that reflect those changes. And if Japan went back to being a feudal society, the language wouldn’t change much either.
Vocabulary does, of course. You talk about different things. For example, the tribe in Papua New Guinea that I mentioned before wouldn’t have a word for computer. But again that’s fairly trivial. You could add the word for computer. Ken Hale’s work from the 1970s on this question is quite instructive. He was a specialist on Australian aboriginal languages, and he showed that many of these languages appear to lack elements that are common in the modern Indo-European languages. For example, they don’t have words for numbers or colors and they don’t have embedded relative clauses. He studied this topic in depth and showed that these gaps were quite superficial. So, for example, the tribes he was working with didn’t have numbers, but they had absolutely no problem counting. As soon as they moved into a market society and had to deal with counting, they just used other mechanisms. Instead of number words, they would use their hand for five, two hands for ten. They didn’t have color words. Maybe they just had black and white, which apparently every language has. But they used expressions such as “like blood” for what we would call red.
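As an aside, the body-part counting just described is easy to make precise: treating a hand as five is a base-five style encoding. A toy rendering, with the English phrasing invented here purely for illustration:

```python
# Toy encoding of the counting strategy described above: a hand stands for
# five, two hands for ten, leftover fingers make up the rest.
# The phrasing is invented for illustration.

def hand_count(n):
    """Express a nonnegative count in hands (fives) and fingers (ones)."""
    hands, fingers = divmod(n, 5)
    parts = []
    if hands:
        parts.append(f"{hands} hand{'s' if hands > 1 else ''}")
    if fingers:
        parts.append(f"{fingers} finger{'s' if fingers > 1 else ''}")
    return " and ".join(parts) if parts else "nothing"

for n in (3, 5, 10, 12):
    print(n, "->", hand_count(n))
# 3 -> 3 fingers
# 5 -> 1 hand
# 10 -> 2 hands
# 12 -> 2 hands and 2 fingers
```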
Hale’s conclusion was that languages are basically all the same. There are gaps. We have many gaps in our language that other languages don’t have, and conversely, they have gaps that we don’t have. It’s a little bit like what I said before about whether organisms vary infinitely or whether there’s a universal genome. If you take a look at organisms, they look wildly different, so it was quite natural to assume fifty years ago that they vary in every possible way. The more we have learned, the less plausible that seems. There’s a lot of conservation of genes. Yeasts have a genetic structure not all that different from ours in many ways, although yeasts look very different from us. But there are fundamental biological processes that just show up differently on the surface and seem different until you understand them. And something like that appears to be the case with language. Ken’s work on this topic is the most sophisticated. There’s a lot of popular discussion about similar data now, but most of it is extremely superficial and ignorant. In fact, there’s almost nothing that’s discussed now that he didn’t talk about in a much more serious way forty years ago.
People who just read your books don’t realize, I think, that you have a mischievous side. At the linguistics seminar I attended, I told you that I had to leave early, and you told me to shake my head back and forth, as I was leaving the classroom, and say, “I don’t know what that guy Chomsky is talking about. This is just a lot of nonsense.”
That’s what this all sounds like if you don’t have the right background. There’s this commonsense idea: when I talk, I don’t think about any of those things linguists are talking about. I don’t have any of these structures in my head. So how can they be real? This kind of deep anti-intellectualism, an insistence on ignorance, runs through a large part of the culture. With discussions of language, it’s almost ubiquitous.
You could say the same thing about vision. So, for example, one of the most interesting things known about the visual system is that it has core properties that interpret complex reality in terms of rigid objects in motion. In fact, you almost never see rigid objects in motion. It’s not part of experience. But that’s the way the visual system works.
Take, say, a baseball game. When you watch an outfielder catching a fly ball, you don’t, and he doesn’t, introspect into the method by which he’s doing it, which is a pretty remarkable thing. How does an outfielder know instantaneously where to run as soon as the crack of the bat sounds? It turns out that’s a pretty sophisticated calculation, and one that’s pretty well understood. But you can’t introspect into it. In fact, if you did, you would fall on your face and you wouldn’t catch the ball. It’s sort of like trying to introspect on how you digest your food. You can’t do it. People feel that they ought to be able to do it in cognitive domains because we’re partially conscious—at least, we have a consciousness of some of the superficial aspects of our actions. For example, you know you’re running to catch a ball. But consciousness of superficial aspects of our activity doesn’t give you any insight into the internal computations of the brain that allow these actions to take place.
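For a sense of what that unconscious calculation involves, here is the idealized physics of a fly ball with no air resistance: speed and angle off the bat fix where and when it lands. This is a sketch of the underlying problem only, not a claim about the strategy fielders actually use; proposals in the perception literature involve running heuristics rather than explicit equation solving, and the numbers below are hypothetical.

```python
# Idealized fly-ball physics (no air resistance, launched from ground
# level): speed and angle determine landing point and hang time.
# Fielders never solve this consciously, which is the point of the passage.

import math

def landing_distance(speed_mps, angle_deg, g=9.81):
    """Horizontal range of a projectile returning to launch height."""
    angle = math.radians(angle_deg)
    return speed_mps ** 2 * math.sin(2 * angle) / g

def hang_time(speed_mps, angle_deg, g=9.81):
    """Time until the projectile returns to launch height."""
    angle = math.radians(angle_deg)
    return 2 * speed_mps * math.sin(angle) / g

speed, angle = 35.0, 40.0  # hypothetical off-the-bat values
print(f"lands ~{landing_distance(speed, angle):.0f} m away "
      f"after ~{hang_time(speed, angle):.1f} s in the air")
# lands ~123 m away after ~4.6 s in the air
```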
Copyright © 2012 by Noam Chomsky and David Barsamian. Excerpted from Power Systems: Conversations on Global Democratic Uprisings and the New Challenges to U.S. Empire by Noam Chomsky, interviews with David Barsamian. Published by Metropolitan Books, an imprint of Henry Holt and Company (2013).