and subconscious knowledge. Conscious knowledge (like the rules of algebra, syntactic theory, principles of organic chemistry or how to take apart a carburetor) is learned. A lot of subconscious knowledge, like how to speak or the ability to visually identify discrete objects, is acquired. In part, this explains why classes in the formal grammar of a foreign language often fail abysmally to train people to speak those languages. By contrast, being immersed in an environment where you can subconsciously acquire a language is much more effective. In this text we’ll be primarily interested in how people acquire the rules of their language. Not all rules of grammar are acquired, however. Some facts about i-language seem to be built into our brains, or innate.

      You now have enough information to answer GPS6.

       6.2 Innateness: Parts of i-Language as Instincts

If you think about the other types of knowledge that are subconscious, you’ll see that many of them (for example, the ability to walk) are built directly into our brains – they are instincts. No one had to teach you to walk (despite what your parents might think!). Kids start walking on their own. Walking is an instinct. Probably the most controversial claim that Noam Chomsky has made is that parts of i-language are also an instinct. That is, many parts of i-language are built in, or innate. Chomsky claims that much of i-language is an ability hard-wired into our brains.

Obviously, particular languages (e-languages) are not innate. It is never the case that a child of Slovak parents growing up in North America who has never been spoken to in Slovak grows up speaking Slovak. They’ll speak English (or whatever other language is spoken around them). So, on the surface it seems crazy to claim that language is an instinct. But when we are talking about i-language, there are very good reasons to believe that a human facility for language (the Human Language Capacity) is innate. We call the innate parts of the HLC Universal Grammar (or UG).

       6.3 The Logical Problem of Language Acquisition

      What follows is a fairly technical proof of the idea that parts of our linguistic system are at least plausibly construed as an innate, in-built system. If you aren’t interested in this proof (and the problems with it), then you can reasonably skip ahead to section 6.4.

The argument in this section is that a productive system like the rules of Language probably could not be learned or acquired. Infinite systems are, in principle and given certain assumptions, both unlearnable and unacquirable. Since we’ll show that syntax is an infinite system, it follows that we shouldn’t have been able to acquire it, so it must be built in. The argument presented here is based on an unpublished paper by Alec Marantz, but the underlying argument dates back to at least Chomsky (1965).

First here’s a sketch of the proof, which takes the classical form of an argument by modus ponens:

Premise (i): Syntax is a productive, recursive, and infinite system.

Premise (ii): Rule-governed infinite systems are unacquirable.

Conclusion: Therefore syntax is an unacquirable system. Since we have such a system, it follows that at least parts of syntax are innate.
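
Schematically, letting P stand for “syntax is a rule-governed, infinite system” and Q for “syntax is unacquirable” (labels chosen here purely for exposition), the core of the argument can be written as:

```latex
% P = syntax is a rule-governed, infinite system   (Premise (i))
% Q = syntax is unacquirable                       (Premise (ii) supplies P -> Q)
\[
\begin{array}{ll}
\text{Premise (ii):} & P \rightarrow Q\\
\text{Premise (i):}  & P\\
\hline
\text{Conclusion:}   & Q
\end{array}
\]
% Since we nonetheless have syntax, Q implies that at least parts of it are built in.
```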

      Let’s start with premise (i): i-language is a productive system. That is, you can produce and understand sentences you have never heard before. For example, I can practically guarantee that you have never heard or read the following sentence:

      18) The dancing chorus-line of elephants broke my television set.

      The magic of syntax is that it can generate forms that have never been produced before. Another example of this productive quality lies in what is called recursion. It is possible to utter a sentence like (19):

      19) Rosie reads magazine articles.

      It is also possible to put this sentence inside another sentence, like (20):

      20) I think [Rosie reads magazine articles].

      Similarly, you can put this larger sentence inside of another one:

21) Drew believes [I think [Rosie reads magazine articles]].

      And, of course, you can put this bigger sentence inside of another one:

      22) Dana doubts that [Drew believes [I think [Rosie reads magazine articles]]].

      and so on, and so on ad infinitum. It is always possible to embed a sentence inside of a larger one. This means that i-language is a productive (probably infinite) system. There are no limits on what we can talk about. Other examples of the productivity of syntax can be seen in the fact that you can infinitely repeat adverbs (23) and you can infinitely add coordinated nouns to a noun phrase (24):

      23)

a) a very big peanut

b) a very very big peanut

c) a very very very big peanut

d) a very very very very big peanut etc.

      24)

a) Dave left

b) Dave and Alina left

c) Dave, Dan, and Alina left

d) Dave, Dan, Erin, and Alina left

e) Dave, Dan, Erin, Jaime, and Alina left etc.
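
To make the productivity point concrete, here is a minimal sketch in Python (not from the text; the sentence frames and function names are invented for illustration). A single finite rule, applied over and over, generates an unbounded set of distinct sentences by embedding clauses as in (19)–(22) and by stacking very as in (23):

```python
# Toy illustration of recursion/productivity: finite rules, unbounded output.

def embed(sentence, depth):
    """Embed `sentence` under attitude verbs `depth` times, as in (19)-(22)."""
    frames = ["I think that {}", "Drew believes that {}", "Dana doubts that {}"]
    for i in range(depth):
        sentence = frames[i % len(frames)].format(sentence)
    return sentence

def intensify(adjective_phrase, n):
    """Stack `very` n times in front of an adjective phrase, as in (23)."""
    return " ".join(["very"] * n + [adjective_phrase])

if __name__ == "__main__":
    base = "Rosie reads magazine articles"
    for depth in range(4):            # 0, 1, 2, 3 levels of embedding
        print(embed(base, depth))
    for n in range(1, 5):             # a very ... very big peanut
        print("a", intensify("big peanut", n))
```

Nothing in the rules themselves puts an upper bound on depth or n, which is the sense in which the system is productive and (in principle) infinite.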

      Let’s now turn to premise (ii), the idea that infinite systems are unlearnable. In order to make this more concrete, let us consider an algebraic treatment of a linguistic example. Imagine that the task of a child is to determine the rules by which her language is constructed. Further, let’s simplify the task, and say a child simply has to match up situations in the real world with utterances she hears.9 So upon hearing the utterance the cat spots the kissing fishes, she identifies it with an appropriate situation in the context around her (as represented by the picture).

      25) “the cat spots the kissing fishes”

Her job, then, is to correctly match up the sentence with the situation.10 More crucially, she has to make sure that she does not match it up with all the other possible alternatives, such as the other things going on around her (like her older brother kicking the furniture or her father making breakfast for her, etc.). This matching of situations with expressions is a kind of mathematical relation (or function) that maps sentences onto particular situations. Another way of putting it is that she has to figure out the rule(s) that decode(s) the meaning of the sentences. It turns out that this task is at least very difficult, if not impossible.
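
As a deliberately toy sketch of this matching task (the sentences, situation labels, and function name below are invented for illustration, not taken from the text), the child’s hypothesis can be thought of as a function from sentences to situations:

```python
# Toy model of the matching task: map each utterance onto the situation it
# describes, not onto the other things happening at the same time.

observed_pairs = {
    "the cat spots the kissing fishes": "cat watching the fishes",
    "her brother kicks the furniture": "brother kicking the furniture",
    "her father makes breakfast": "father making breakfast",
}

def childs_hypothesis(sentence):
    """One candidate hypothesis: a lookup table over utterances heard so far."""
    return observed_pairs.get(sentence)  # None for anything not yet encountered

print(childs_hypothesis("the cat spots the kissing fishes"))   # 'cat watching the fishes'
print(childs_hypothesis("the dancing elephants broke my TV"))  # None: unseen sentence
```

A finite lookup table like this covers only the sentences already heard; it says nothing about the infinitely many novel sentences discussed above, which is exactly why the next step of the argument abstracts the problem to a function over numbers.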

      Let’s make this even more abstract to get at the mathematics of the situation. Assign each sentence some number. This number will represent the input to the rule. Similarly, we will assign each situation a number. The function (or rule) modeling language acquisition maps from the set of sentence numbers to the set of situation numbers. Now let’s assume that the child has the following set of inputs and correctly matched situations (perhaps explicitly