Neurolinguistics of bilingualism and the teaching of languages
Michel Paradis
Department of Linguistics, McGill University, and Cognitive Neuroscience Centre, UQAM
Neurolinguists
and language pathologists have traditionally concerned themselves with the
language system, what some linguists now call implicit linguistic competence, or
the grammar, which is what is typically impaired subsequent to lesions in the
perisylvian classical language areas of the left cerebral hemisphere. This
language system is also mainly what is taught in second language classes. But
recently attention has been increasingly drawn to the fact that language, so
defined, is only one component of verbal communication. Verbal communication is
multimodal (i.e., it involves different sensory modalities) and multimodular (i.e.,
each modality comprises a number of neurofunctional modules). At least
four neurofunctionally-modular cerebral mechanisms are involved in the
acquisition and use of language, first or second, subserving respectively
implicit linguistic competence, metalinguistic knowledge, pragmatics, and
motivation.
Linguistic
competence is acquired incidentally, is stored implicitly, is used
automatically, and is subserved by procedural memory. It is acquired
incidentally in that acquirers focus their attention on something other than
what is internalized, such as focusing on acoustic properties of sounds while
internalizing the proprioception that allows them to perform articulatory
movements; or on semantic and pragmatic aspects of an utterance while
internalizing its morphosyntax. It is stored implicitly in that speakers are not
conscious of the computational procedures that generate the sentences they
produce and the underlying structure of these sentences remains forever opaque
to introspection. The grammars that linguists attempt to construct are systems
inferred from the systematic verbal behavior of speakers, but linguists have no way
to know whether these constructs remotely resemble the actual computational
procedures activated to generate sentences. Linguistic competence is used
automatically in that it is not under conscious control: speakers cannot
control something of which they are not aware. It is subserved by procedural
memory, as are all implicit skills. Procedural memory is task specific.
Procedural memory for language relies on the integrity of the cerebellum, the
striatum and other basal ganglia, and on circumscribed areas of the left
perisylvian cortical region.
Metalinguistic
knowledge is learned consciously, is stored explicitly, is used in a controlled
manner, and is subserved by declarative memory. It is learned consciously, by
noticing the items to be learned. It is stored explicitly, in that one can bring to
consciousness the items that one knows. It can be used in a controlled manner,
in that one can consciously apply a set of memorized grammatical rules, for
example. It is subserved by declarative memory. Declarative memory relies on the
integrity of the hippocampal system, the medial temporal lobes, and large areas
of tertiary cortex, bilaterally.
Empirical
evidence for these two types of memory comes from multiple double dissociations
in a number of pathological conditions. Individuals with amnesia, for instance,
lose access to declarative memory, while retaining all skills that rely on
procedural memory. Individuals with aphasia lose access to implicit linguistic
competence but not declarative memory. As a result, some aphasic patients seem
to recover their less proficient L2 better than their L1, and some amnesic
patients lose access to their L2. Individuals with Parkinson’s Disease have
their implicit linguistic competence impaired; individuals with Alzheimer’s
Disease have their explicit knowledge impaired (Sagar et al., 1988; Lieberman et al.,
1992; Ullman et al., 1997). In fact, loss of the L2 has been reported in
Alzheimer’s Disease, a condition in which declarative memory is
affected first (Hyltenstam & Stroud, 1993).
Whereas
implicit aspects of language structure, such as phonology and morphosyntax, are
subserved by procedural memory, the sounds and lexical meanings of words are
consciously known. In fact, vocabulary stands apart from the rest of language
structure in several ways: chimpanzees and gorillas are able to learn a large
number of signed words; children deprived of language input during infancy can
eventually also learn numerous words but little morphosyntax; the idiot savant
reported by Smith & Tsimpli (1995) had little implicit linguistic competence
in his native language but was able to learn large vocabularies in several
languages (as well as metalinguistic facts about the languages); and individuals
with anterograde amnesia, while capable of acquiring new motor and cognitive
skills, are incapable of learning new words.
As mentioned, Alzheimer patients retain functions subserved by (implicit)
procedural memory long after they have lost access to the functions subserved by
(explicit) declarative memory—including vocabulary.
It
is important to realize that implicit linguistic competence and metalinguistic
knowledge are of a different nature, bear on different objects, and rely on very
different cerebral structures, and that, therefore, metalinguistic knowledge
never becomes implicit competence, or the other way around. Both develop
independently. What may happen in the course of second language development is
that the initially almost exclusive use of metalinguistic knowledge may be
gradually replaced by an increasing use of linguistic competence (Paradis,
2000). But metalinguistic knowledge is not transformed or converted into
implicit linguistic competence: it remains available.
Along
with implicit linguistic competence and metalinguistic knowledge, two additional
cerebral systems are involved in processing verbal communication: those that
subserve linguistic pragmatics and motivation. Indeed, in addition to the
interpretation of the literal meaning of sentences, a discourse grammar is
required, one that includes rules of presupposition and inference and, more
generally, any extra-sentential, context-dependent phenomena. Sociolinguistic
rules, which determine the appropriate choice among the various possible
structures available in linguistic competence, are equally necessary. So is
paralinguistic competence, comprising the comprehension and use of intonation,
gestures, facial expressions, and anything that serves to specify the meaning of
the sentence—such as whether it is meant as a sarcastic remark or a
compliment, an indirect request or a factual question, whether it is to be taken
with a figurative, metaphoric, idiomatic meaning or at face value. In fact, we
may estimate that more than half of what we say is not literally what we
mean, at least not entirely. Most of the time, we mean more than what we say,
as with implicatures; or something different from what we actually say,
as in metaphors, idiomatic expressions, and indirect speech acts; or even the
opposite of what we say, as in irony and sarcasm (Paradis, 1998).
In
the literature on linguistics and the pathology of communication, there are at
least two clearly distinct domains subsumed under the term “pragmatics”:
discourse structure and nonliteral meanings. Both domains have been reported to
be vulnerable to right hemisphere damage while relatively preserved in the
context of dysphasia (Pierce & Wagner, 1985). The common denominator seems
to be the necessity to rely on context and general knowledge in order to derive
an interpretation. This context can be situational (including paralinguistic
cues), but also discursive (including structure and contents of discourse, as
well as turn-taking and the like, from which inferences and implications must be
made).
Over the past
century, damage to specific areas of the left cerebral hemisphere has been
reported to disrupt the comprehension and/or production of various aspects of
phonology, morphology, syntax, and the lexicon. Clear deficits of a different
nature, affecting the comprehension and production of humour, affect, and
various aspects of the nonliteral interpretation of utterances, have been
reported over the past 20 years or so (see Paradis, 1998 for a review). Deficits
secondary to right-hemisphere damage typically involve those aspects of language
use other than the ones involved in the literal interpretation of
(context-independent) sentences. More specifically, patients with
right-hemisphere damage have been variously shown to be insensitive to
connotative meaning; to figurative speech, even when supportive contextual cues
are available; to metaphors; to the emotive meaning of words; to emotions that
have to be inferred from context; and to indirect speech acts. Many have been reported not to
be able to use prosody to interpret (or convey) emotional content. They also
fail to understand the moral, punchline, theme, or main point of a story and
have problems in the organization of discourse. Overall, these patients seem to
have difficulty in using contextual information to interpret discourse.
Nevertheless,
the role of the right hemisphere in language processing has not been
investigated as thoroughly as that of the left hemisphere. Thus, whereas
deficits in implicit linguistic competence have long been commonly referred to
as aphasia (or, more etymologically accurately, and still current in
British English, dysphasia), there was no label to refer to impairments in
the ability to infer what is meant from the contexts in which something is said.
I have therefore proposed to use the term dyshyponoia, from the Greek ὑπονοῶ
(to grasp what is “understood”, in the sense of the French sous-entendu,
albeit unsaid in an utterance), to refer to a difficulty in drawing appropriate
inferences from extra-sentential information. This difficulty leads to problems
in the interpretation of the unspoken component of an utterance, i.e., its
illocutionary force or pragmatic component, with preserved comprehension of the
literal meaning of the sentence, i.e., its semantics, derivable solely from the
lexical meaning of its words and the morphosyntactic structure of the sentence
(Paradis, in press).
Last,
but not least (Damasio, 1994, 1999), the cerebral system underlying emotion also
plays an important, if neglected, role. The structures of the limbic system that
subserve emotions, drives and desires, are phylogenetically and ontogenetically
anterior to the development of higher cognitive systems. Twenty-five years ago,
Lamendella (1977) suggested that implicit linguistic competence was integrated
within the limbic system (involving the striatum and amygdala), whereas
metalinguistic knowledge was not. Schumann has repeatedly emphasized the
importance of motivation, and in particular the role of the amygdala and the
dopaminergic system in the acquisition of language (Schumann, 1990, 1994, 1998).
One important difference between first and second language appropriation is that
the first phase of the microgenesis of an utterance, namely the desire to
communicate a message, is mostly missing in the learning of an L2 in a school
environment (Paradis, 1992), resulting in a lack of dopamine release (Schumann,
1998). Evidence of the impact of motivation on language processing comes from
dynamic aphasia, in which patients with spared language representations nevertheless
remain speechless unless persistently prompted, and from the reverse situation, in
which global aphasics, who are unable to put two words together, manage to blurt out a
relatively complex utterance when strongly annoyed or otherwise emotionally
aroused. The impact of motivation on L2 learning has also been well documented,
showing that both instrumental and integrative motivation have a facilitating
effect on L2 appropriation, and that the impact of integrative motivation is
stronger (Lambert, 1969), probably because it also encourages more extensive
practice.
This
leads us to another component of the cerebral system underlying communicative
competence, which is not associated with a particular anatomical area or
functional system, but is associated with all higher cognitive functions,
irrespective of their domain, namely, the activation threshold. The neural
substrate of any mental representation requires a certain number of impulses to
reach activation. Each time an item (a word, a morphosyntactic construction, or
whatever) is used, its activation threshold is lowered, making it easier to
activate again; but the threshold slowly rises when the item is inactive, as
reflected in recency, frequency, and practice effects, and in attrition.
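These dynamics can be conveyed by a minimal toy sketch (in Python); the
parameter values below are arbitrary assumptions introduced purely for
illustration, not quantities proposed here. Each use lowers an item's
threshold, and prolonged disuse slowly raises it again, so that a weak impulse
suffices to activate a recently practised item but not a neglected one.

```python
# Toy numerical model of the activation-threshold dynamics sketched above.
# All parameter values (lowering step, rise rate, floor, ceiling) are
# arbitrary illustrative assumptions.

class Item:
    """A lexical or morphosyntactic item with an activation threshold."""

    def __init__(self, name, threshold=1.0):
        self.name = name
        self.threshold = threshold

    def use(self, lowering=0.2, floor=0.1):
        # Each use lowers the threshold (recency/frequency/practice effect).
        self.threshold = max(floor, self.threshold - lowering)

    def rest(self, periods=1, rise=0.05, ceiling=2.0):
        # Inactivity slowly raises the threshold again (attrition).
        self.threshold = min(ceiling, self.threshold + rise * periods)

    def activates(self, impulse):
        # The item is activated only if the incoming impulse reaches its threshold.
        return impulse >= self.threshold


word = Item("L2 word")
for _ in range(4):
    word.use()                 # practice: threshold drops from 1.0 to 0.2
print(word.activates(0.5))     # True: the item is now easily activated
word.rest(periods=20)          # prolonged disuse: threshold rises to 1.2
print(word.activates(0.5))     # False: activation now needs a stronger impulse
```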
Practice
in communicative environments is what internalizes the grammar. That is,
repeated exposure to, and use of, sentences of a certain type lead to
acquisition of the implicit computational procedures that eventually allow for
the automatic comprehension and production of sentences of this form. (These
procedures may be algorithms such as the generative grammars constructed by
linguists, or frequency-of-co-occurrence-based relational networks, as described
by connectionist psychologists, or artificial intelligence neural network
constructs with weighted connections. But to date, there is no criterion to
decide which type of grammar or parallel distributed processing is more
compatible with the way the brain actually processes information.) Strong
motivation may have the same effect as practice by lowering the activation
threshold.
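Purely by way of illustration, and without taking a stand on which format is
the right one, the following toy sketch (in Python; the weighting scheme is an
assumption made for the example) shows the kind of frequency-of-co-occurrence-based
strengthening that such a relational network embodies: links between items that
repeatedly occur together grow stronger with practice.

```python
# Schematic illustration of a frequency-of-co-occurrence relational network:
# connections between items that repeatedly occur together are strengthened.
# The increment scheme is an arbitrary assumption, used only for illustration.

from collections import defaultdict
from itertools import combinations

weights = defaultdict(float)   # connection strength between pairs of words

def expose(sentence, increment=1.0):
    """Strengthen the link between every pair of co-occurring words."""
    for a, b in combinations(sentence.lower().split(), 2):
        weights[frozenset((a, b))] += increment

# Repeated exposure to sentences of a certain type...
for _ in range(50):
    expose("the dog chased the cat")
expose("a cat chased a mouse")

# ...leaves the frequently co-occurring pattern far more strongly connected.
print(weights[frozenset(("dog", "chased"))])    # 50.0
print(weights[frozenset(("mouse", "chased"))])  # 1.0
```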
To
conclude, verbal communication involves different modalities (aural, oral,
visual, gestural), and each modality comprises a number of independent but
collaborating neurofunctional modules. At least four cerebral mechanisms are
involved in the acquisition and use of language: implicit linguistic competence,
metalinguistic knowledge, pragmatics, and motivation. Unilingual and bilingual
speakers alike rely on all four systems. However, to the extent that L2 speakers
have gaps in their implicit linguistic competence, they will compensate by
relying more extensively on metalinguistic knowledge and pragmatic aspects of
verbal communication, both in speaking and understanding. Their motivation will
modulate the efficiency of the various cerebral systems involved. Since
different cerebral structures underlie each of the four systems, pathology may
selectively affect L1 or L2 depending
on which cerebral structures are affected, so that some aphasic patients
paradoxically recover their premorbidly weaker L2 better than their L1, and
patients with Alzheimer’s Disease lose access to their L2 before their L1.
To the extent that a
language is acquired incidentally, it will be represented as automatically
usable implicit competence (and the more so, the younger the individual); to the
extent that it is learned formally, it will be represented as metalinguistic
knowledge usable in a controlled manner (and the more so, the older the
individual). Therefore, the more formal the learning method, the more the second
language will rely on declarative memory; the more communicative the method, the
more the second language will rely on procedural memory. Some second language
speakers are able to speed up their control over production (Favreau &
Segalowitz, 1983; Segalowitz, 1986; Segalowitz, 2000; Segalowitz & Segalowitz,
1993; Segalowitz, Segalowitz, & Wood, 1998) to the extent that they can fool
most of the people much of the time into believing that they are native
speakers of the languages. But
stress, fatigue, aging, clever psychological experiments, or one too many
martinis will soon expose them and show that their use of the L2 is controlled
to a much greater extent than their L1, even though it may be considerably
speeded up.
This
does not mean that metalinguistic knowledge is not useful. There are two ways of
speaking: the controlled way and the automatic way. Individuals with genetic
dysphasia, autism, and Down syndrome rely to a great extent on the controlled use
of metalinguistic knowledge, as do many incipient second language learners.
Since adults tend to use declarative memory (metalinguistic knowledge) anyway,
it may be useful to provide them with an explicit description of particular
grammatical phenomena. This may, in turn, help them not only to construct
correct sentences (in a controlled way) but also to monitor and self-correct
their production (the output of their implicit competence); and, by giving them
repeated practice with the correct forms, it may contribute towards eventually
internalizing them (in whatever form the implicit underlying computational
procedures may take).
Damasio,
A.R. (1994). Descartes’ error. New York: Avon Books.
Damasio,
A.R. (1999). The feeling of what happens. New York: Harcourt Brace.
Favreau,
M., & Segalowitz, S. (1983). Automatic and controlled processes in
reading a second language. Memory and Cognition, 11: 565-574.
Hyltenstam,
K. & Stroud, C. (1993). Second language regression in Alzheimer's
dementia. In K. Hyltenstam & Å Viberg (eds.), Progression and
regression in language (pp. 222-242). Cambridge: Cambridge University
Press.
Lambert,
W. (1969). Psychological aspects of motivation in language learning. The
Bulletin of the Illinois Foreign Language Teachers Association, May,
5-11.
Lieberman,
P., Kako, A., Friedman, J., & Jiminez, E.B. (1992). Speech production,
syntax comprehension, and cognitive deficits in Parkinson’s Disease. Brain
and Language, 43: 169-189.
Paradis,
M. (1992) Neurosciences et apprentissage des langues. In D. Girard
(ed.), Enseignement des langues dans le monde d'aujourd'hui.
Paris: Hachette, pp.21-31.
Paradis,
M. (1998). The other side of language: Pragmatic competence. Journal of
Neurolinguistics, 11: 1-10.
Paradis, M. (2000). Awareness of observable input and
output—not of linguistic competence. Paper read at the International
Symposium on Language Awareness, University of Odense, Denmark. 28 April.
Paradis, M. (in press). The division of labor in verbal
communication. In J. Verschueren, J-O. Östman, J. Blommaert, & C.
Bulcaen (eds.), Handbook of Pragmatics. Amsterdam: John Benjamins.
Sagar,
H.J., Cohen, N.J., Sullivan, E.V., Corkin, S., & Growdon, J.H. (1988).
Remote memory function in Alzheimer’s Disease and Parkinson’s Disease. Brain,
111: 185-206.
Schumann, J. (1990). The role of the amygdala as a
mediator of affect and cognition in second language acquisition. Georgetown
University Round Table on Languages and Linguistics, 169-176.
Schumann, J. (1994). Emotion and cognition in second
language acquisition. Studies in Second Language Acquisition, 16: 231-242.
Schumann,
J. (1998). The neurobiology of affect in language. (Language
Learning Monograph Series). University of Michigan: Blackwell Publishers.
Segalowitz, N. (1986). Skilled reading in the second language. In J.
Vaid (ed.), Language processing in bilinguals: Psycholinguistic and
neuropsychological perspectives (pp. 3-19). Hillsdale, NJ: Lawrence Erlbaum Associates.
Segalowitz, N. (2000). Automaticity and attentional skill in fluent
performance. In H. Riggenbach (ed.), Perspectives on fluency (pp.
200-219). Ann Arbor, MI: University of Michigan Press.
Segalowitz, N. & Segalowitz, S. (1993). Skilled performance,
practice, and the differentiation of speed-up from automatization effects:
Evidence from second language word recognition. Applied Psycholinguistics,
14: 369-385.
Segalowitz,
S., Segalowitz, N., & Wood, A. (1998). Assessing the development of
automaticity in second language word recognition. Applied
Psycholinguistics, 19: 53-67.
Smith,
N., & Tsimpli, I.-M. (1995). The mind of a savant. Language learning
and modularity. Oxford: Blackwell.
Ullman, M., Corkin, S.,
Coppola, M., Hickok, G., & Pinker, S. (1997). A neural dissociation
within language: Evidence that the mental dictionary is part of declarative
memory, and that grammatical rules are processed by the procedural system. Journal
of Cognitive Neuroscience, 9: 266-276.