
Usage-based approaches to language and language learning: an introduction to the special issue

Published online by Cambridge University Press:  18 July 2016

ANDREA TYLER (Georgetown University)
LOURDES ORTEGA (Georgetown University)
*Address for correspondence: Andrea Tyler, Department of Linguistics, Georgetown University. e-mail: [email protected], and Lourdes Ortega, Department of Linguistics, Georgetown University. e-mail: [email protected]

Type: Introduction
Copyright © UK Cognitive Linguistics Association 2016

The past several years have brought a steady stream of revelations in psychology, brain science, cognitive science, language learning, and linguistics that underscore the perspective that language, in all its complexity, variability, and systematicity, can largely be accounted for by the language to which humans are exposed. These findings also emphasize the role of humans’ rich, interactive cognitive capacities as key shapers of language, including their strong social proclivities crystallized in the desire to communicate. This research falls under the overarching umbrella of usage-based approaches to language. The central theme is that, given domain-general cognitive capacities, language is constructed from meaningful, contextualized exposure to language and attempts to use it. This special issue presents five papers that originated in the plenary addresses at the 2014 Georgetown University Round Table (GURT). They offer cutting-edge contributions from leaders in the field of usage-based language studies, each of whom represents a distinct perspective. Each author also uses a distinct methodology to explore those perspectives. In this Introduction to the special issue, we reflect on some of the key points of overlap among the contributors, as well as note their unique contributions. Together, the papers address human cognitive sensitivity to frequency and its interaction with pattern finding, form–meaning matching, maturational development, and category formation. They help paint a rich picture that allows us to begin to account for humans’ ability to construct language.

1. Frequency

It may be obvious, but when we say that language is usage-based or that language develops out of language use, one of the key notions being alluded to is the frequency of stimuli in the input. Not surprisingly, a consistent, unifying theme found in the five papers in this special issue is the discussion of humans’ general cognitive sensitivity to the language evidence they accrue as they engage in communication. Ellis reports that research in psychology and cognitive science has established that “learning, memory, and perception are all affected by frequency of usage: the more times we experience something, the stronger our memory for it, and the more fluently it is accessed” (this volume). Moreover, the authors of these five papers argue that this sensitivity to frequency offers profound explanations concerning the nature of language and language learning.

The past fifty years have provided a plethora of studies (many from the contributors here) demonstrating that humans are sensitive to the frequencies with which they encounter linguistic information. For instance, Ellis, O’Donnell, and Römer (2014) present rich documentation for the role of frequency in language, speakers’ knowledge of their language, and language processing. One straightforward example is the lengthy literature documenting word frequency effects: more frequently occurring words are recognized and processed much faster than lower-frequency words. The studies also show that the frequency of co-occurrences between a verb and its arguments affects processing time. Considering the many different frequency effects, Ellis et al. argue that mature speakers implicitly know the statistics of the language input, and that these statistics are learned from usage. Newport (this volume) notes that a number of problems in brain and cognitive sciences have been addressed through statistical approaches. She argues that “humans and animals learn … by tuning themselves to the statistics of incoming stimuli”. She further notes that earlier work has shown that “infants, young children, and adults can compute, online and with remarkable speed” many fundamental elements of language and can use this sensitivity to the statistics of the input to discover and acquire grammatical categories and even syntactic structure.
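To make this kind of sensitivity to input statistics concrete, the following minimal sketch (in Python) computes the statistic most often invoked in this literature: the transitional probability between adjacent syllables in an unsegmented stream. The toy syllable stream and the function are our own illustrative inventions, not materials or code from any of the studies discussed.

# Transitional probability P(B | A) = count(A B) / count(A) between adjacent
# syllables: the kind of statistic segmentation studies suggest learners track.
# Toy data: the 'words' pabiku and tibudo repeat in the stream.
from collections import Counter

def transitional_probabilities(syllables):
    """Return P(next | current) for every attested adjacent syllable pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

stream = "pa bi ku ti bu do pa bi ku pa bi ku ti bu do".split()
for (a, b), tp in sorted(transitional_probabilities(stream).items(),
                         key=lambda kv: -kv[1]):
    print(f"P({b} | {a}) = {tp:.2f}")

Word-internal transitions (e.g., pa → bi) come out at 1.0, while transitions spanning a word boundary (e.g., ku → ti) are markedly lower; this is precisely the contrast that learners in segmentation experiments appear to exploit when finding candidate words in a speech stream.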

Clearly, the simple tenet that language emerges from language use is too general to take us very far in describing the intricacies and regularities found in language and language learning – both within individual languages and across the world’s languages. Indeed, the tenet sounds glib as a sole explanation of language, seeming to sidestep the fundamental issues of why languages across time and geography share many near-universal properties, and how a language develops its structure in the first place. The five papers in this volume flesh out many of the missing details.

In her account of humans’ statistical learning and pattern-finding capacities, Newport adds consideration of an important maturational constraint: younger children, with their more limited cognitive capacities, tend to simplify inconsistent data, a process that she calls ‘non-veridical learning’. Older children and adults, in contrast, seem prone to engage in veridical learning of whatever input frequencies they experience, perhaps because they have developed the cognitive flexibility to incorporate the inconsistencies into their language representations. Goldberg also focuses on younger learners, but with an eye towards how they learn the gaps, or exceptions, in regularly occurring verb–argument patterns. Drawing on established concepts from the non-linguistic categorization literature, she examines category formation in terms of fine-grained patterns of semantic coverage in interaction with the process of statistical pre-emption. The notion of statistical pre-emption focuses on the user’s pattern recognition prowess, which fuels expectations about co-occurrence relations between constructions and lexical items, along with the user’s sensitivity to not hearing what they expect, and their cognitive ability to fine-tune their production to the actual use of the surrounding discourse community. In combination, these two elements help explain the overgeneralization patterns witnessed in child speech, as well as children’s ultimate ability to avoid the arbitrary gaps in otherwise productive syntactic patterns. At first glance, Goldberg’s finding that children learn the irregularities might seem at variance with Newport’s finding that her young subjects over-regularize. However, an important difference between the two investigations is that Newport’s learners are not engaged in natural language learning as members of a discourse community, and do not hear unexpected exemplars that point to only partial productivity, whereas Goldberg’s learners do. Focusing on the child’s first two years, Lieven explores a different set of maturational constraints, showing that the language learning process is shaped by the profound interaction of the child’s developing sociability and cognitive capacities. In particular, she examines the development of the child’s understanding of common ground, that is, the understanding of their own and others’ intentionality. In this work, too, a surrounding discourse community is a key explanatory element in the theory.

Ellis provides empirical evidence from a set of three experiments exploring multiple types of frequency interactions and their reflexes in language processing. A key finding is that the imageability of a verb strongly influences the speed with which the verb is recognized in isolation, but not necessarily when the verb occurs in established verb–argument patterns. The studies Ellis reports go a long way towards the goal of positing a model of language that is consistent with cognitively based, usage-based understandings of language processing.

Bybee, File-Muriel, and Napoleão de Souza home in on the roles of frequency, chunking, and function in the phonological change termed ‘special reduction’. They conclude that the phenomenon of special reduction is not so special – that is, it is not outside the range of general phonological processes found in the particular language evidencing special reduction – once the important factors of chunking and frequency are taken into consideration, making special reduction open to a usage-based analysis.

Thus, the set of contributions in this special issue shows that while frequency is a powerful shaper of language, it is also subject to important constraints. In doing so, it disabuses readers of the idea that usage-based approaches rest on naive notions of pure frequency effects. Determining frequency in the language input is not a simple, straightforward matter of counting the number of occurrences of a particular recurring string of sounds, or a recurring lexical item or construction. As Lieven points out, while a large number of studies of child language learning around the world have found that input and frequency are closely associated with the language children learn, “not all frequencies are equal” (this volume; see also Ambridge, Kidd, Rowland, & Theakston, 2015, and ensuing peer commentaries). Input frequency is a complex construct open to scholarly scrutiny and further empirical elucidation. Without a doubt, however, usage-based approaches have firmly established frequency as a factor that any theory of language will have to address.

2. Form–meaning mapping

Attention to the centrality of frequency effects is often seen as the most important contribution made by usage-based linguistics to a theory of language and language learning. Here, we foreground what we are convinced is an equally important contribution: the central role of meaning in language and language learning. Clearly, implicitly tracking the statistics of the input is a general cognitive capacity. But a key factor that distinguishes language-related pattern finding from the general statistical pattern sensitivity found in humans and other species is that language patterns involve meaning. These forms, be they morphemes or words or sentence patterns, all have meaning. The work in the five papers provides a continuum of commitment to the perspective that language crucially involves form–meaning mapping.

Newport, the most reticent in the special issue in advocating for the role of meaning and form–meaning mapping, notes that previous research demonstrates that learners are sensitive to “how frequently words occur in similar contexts … and can utilize these statistics to find candidate words in a speech stream, discover grammatical categories, and acquire simple syntactic structure” (this volume). Although she does not directly speak to form–meaning mapping, what seems important to acknowledge is that the moment a theory or a study moves from pattern recognition of strings of sounds to recognition of words or morphemes, we have added meaning. And when what is implicitly tallied is the occurrence of words and morphemes in similar contexts, meaning is by necessity catapulted to the foreground. That is, the recognition of word categories goes far beyond recognizing the statistics of recurring strings of sounds and transitional probabilities. Verbs and nouns come in many phonological forms and can only be categorized as ‘verbs’ or ‘nouns’ if the learner understands something about the meaning of the form and the role it plays in the context of the utterances in which it occurs. Recognition of grammatical categories (and of how various word categories work with morphology and in sentences) requires, although at a more abstract level than recognition of individual words, recognition of meaning, with nouns broadly representing entities or things and verbs broadly representing processes that unfold through time (Langacker, 1987; Taylor, 2002). In the studies using mini-artificial languages that Newport reports here, when nonce morphemes regularly marked agentive subjects or inanimate objects (in other words, when the potential scene being depicted reflected the semantics of a typical transitive sentence), study participants learned the morphology quickly and accurately. These learners appear to be using their knowledge of the meaning associated with particular, simple sentence structures – the types of participants and the roles they play in certain activities – in order to learn the unfamiliar morphology. In a second study, in which Newport created what she refers to as somewhat unnatural languages (i.e., languages that had variations in the animacy of their nouns in object position), learners attended more to the case markers when an unexpected animate noun occupied object position than when the object was a more expected inanimate noun. We clearly see that the semantic expectation that grammatical objects, that is, the participants being acted upon, will be inanimate affects learning and processing of these mini-artificial languages.

Bybee et al.’s analysis of special reduction in phonology might initially appear little concerned with the role of meaning. However, if special reduction crucially rests on speakers’ sensitivity to words and phrase-level chunks of speech, it follows that speakers are sensitive to chunks of speech that have form–meaning bonds. Indeed, Bybee et al. argue that their investigations of special reduction show “that phonetic change affects words and phrases at different rates, depending upon how often the word or phrase occurs in the contexts that favor change, including not just the phonetic context, but the functional, lexical and grammatical context as well” (this volume, our emphasis).

Goldberg, Lieven, and Ellis are strongly committed to the central role of meaning as a driver of language and to the position that form and meaning are inextricably connected. Goldberg argues that a language learner’s goals are to understand the language she hears, which is always packaged in particular forms (i.e., words, morphemes, and recurring sentence patterns), and to attempt to have her listener understand her intentions, which requires choosing the particular forms she believes are most likely to connect with the listener. Thus, the forms the learner is confronted with and produces are inherently tied to making meaning. As Goldberg notes, “[I]t is clear that speakers must learn the ways in which forms and functions are paired in the languages they speak” (this volume). For Goldberg, and all who follow her theory of construction grammar, recurring syntactic patterns, or Verb Argument Constructions (VACs), are inherently linked to meaning. The grammatical templates represented by VACs are argued to be meaningful, articulating basic humanly experienced scenes or activities, such as the transfer of an object from a giver to a recipient.

Lieven argues that research over the past decade, in particular, suggests that “form–meaning mappings begin to be established in infancy and become attached to emergent pattern identification” (this volume). She rejects the hypothesis that syntactic development is ever an encapsulated process, somehow separated from meaning. Rather, she presents robust evidence that, “[i]n the first year of life, there are many developments in infant speech perception, cognition and communication which come together in a range of intention-reading behaviours in the last trimester of the first year” (this volume). Indeed, intention-reading and the establishment of common ground are key to development – linguistic, social, and cognitive – and by definition meaning-driven.

Focusing on adult language, Ellis’s experiments examine the interactions among verb frequency, prototypicality of the verb meaning, and established VACs. He capitalizes on previous research which emphasizes that language is “pervaded by collocations and phraseological patterns … and that language constructions are motivated by semantics and communicative functions” and embraces the position that “[l]exis, syntax, and semantics are inseparable” (this volume). Adult speakers know a great deal about VACs, as revealed by their clear sensitivity to the constructions’ frequencies. In his current set of studies, Ellis seeks to explore whether constructions represent ad hoc categories created on the fly or whether they are entrenched in memory. In other words, he asks: are these constructions psychologically real and symbolically stable? Ellis’s results offer a compelling affirmative answer. In this way, Ellis interprets the evidence as lending strong support to the notion of grammar as a mentally represented, unified ‘constructicon’ that emerges from a learner’s lifelong statistics of usage. The constructicon is meaningful, composed of form–meaning mappings at the level of both words and VACs.

3. Context as constraint

The tenet that meaning is a central element in usage-based approaches raises the importance of context as a constraining factor on strict (or naive) frequency. Human frequency processing capacity is crucially sensitive to complex interactions between individual form–function mappings and the surrounding environment in which they occur. Because all language is meaningful, language forms do not occur in isolation but always within a context, which can be understood at various levels of granularity, from the surrounding linguistic environment to the discourse context to the wider social context.

As we noted above, humans implicitly keep track of the occurrences of lexical stems within various complex morphological environments, such as a verb stem and its various inflected forms. For instance, it is now well established that humans are sensitive to the frequency with which they encounter a particular verb stem in, say, the simple past tense versus the simple present tense or progressive aspect (Blevins, 2014; Hay, 2001). Just as word stems do not occur in isolation, stripped of their morphological environment, full lexical items do not generally occur in isolation in natural speech. In their work on special reduction in phonology, and on the role of frequency in interaction with phonetics and articulatory factors, Bybee et al. (this volume) report that speakers are sensitive to the co-occurrence patterns of certain subject pronouns in certain verb phrases, such as I don’t know; one consequence of this sensitivity is the tendency for the much more frequently occurring I don’t know to evidence greater phonological reduction than the less frequent You don’t know. They argue that cases of ‘extreme phonological reduction’, which have often been assumed to be unusual and outside more core phonological processes, in fact follow the same phonological trends and employ the same processing and cognitive mechanisms that operate throughout the phonological system (or normal phonetics). Importantly, Bybee et al. establish that extreme reduction often takes place in phrases or chunks of speech that are high in frequency. Special reduction is thus further evidence for the hypothesis that phonetic change affects words and phrases at different rates, depending upon how often the word or phrase occurs. Moreover, phrases that occur with high frequency have often taken on special functions, such as greetings and discourse markers; once the special function (a new semantic pole) has been established, the form occurs even more frequently, and with higher levels of frequency, phonetic reduction accelerates. They conclude that the context for change always potentially includes not just the phonetic context, but the functional, lexical, and grammatical context as well.
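The frequency asymmetry at the heart of this argument is, at bottom, a counting fact about chunks in a corpus. The sketch below (the toy corpus and function name are our own, purely for illustration) shows the kind of count involved in the I don’t know versus You don’t know contrast.

# Count every contiguous three-word chunk in a toy corpus; on a usage-based
# account, the chunks with the highest counts are those most prone to
# special reduction.
from collections import Counter

def chunk_frequencies(utterances, chunk_len=3):
    counts = Counter()
    for utterance in utterances:
        words = utterance.lower().split()
        for i in range(len(words) - chunk_len + 1):
            counts[" ".join(words[i:i + chunk_len])] += 1
    return counts

toy_corpus = [
    "I don't know", "well I don't know about that", "I don't know why",
    "you don't know him", "I don't think so", "I don't know",
]
counts = chunk_frequencies(toy_corpus)
print(counts["i don't know"])    # 4 -> frequent chunk, prone to reduction
print(counts["you don't know"])  # 1 -> infrequent chunk, reduces less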

A major focus of the research presented here is the frequent co-occurrence patterns between verbs and their argument structures. With respect to VACs specifically, Ellis presents compelling evidence that these co-occurrence patterns affect both recognition and semantic processing speed. Moreover, our ability to detect VACs, and the fact that VACs affect processing speed, is clear evidence of pattern-finding abilities involving the contexts in which verbs occur. We notice and implicitly keep track of the types of subjects, objects, and adjunct phrases (for instance, locative prepositional phrases) with which particular verbs are likely to co-occur. Additionally, meaning judgments are affected by verb–VAC contingency, that is, how likely it is that a particular verb will occur in a particular VAC and therefore serve as a strong cue for the meaning of the entire sentence.
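One widely used way to quantify contingency of this sort is the one-directional association statistic ΔP, of the kind Ellis and colleagues have employed in earlier corpus work. The sketch below is a hedged illustration with invented counts, not figures from the studies reported in this issue.

# deltaP = P(construction | verb) - P(construction | all other verbs):
# how much more likely the VAC is, given this verb, than given any other verb.

def delta_p(verb_in_vac, verb_total, vac_total, corpus_total):
    p_vac_given_verb = verb_in_vac / verb_total
    p_vac_given_other = (vac_total - verb_in_vac) / (corpus_total - verb_total)
    return p_vac_given_verb - p_vac_given_other

# Hypothetical counts: 'give' occurs 1,000 times, 400 of them in the
# ditransitive VAC; the VAC occurs 600 times in a 100,000-clause corpus.
print(delta_p(verb_in_vac=400, verb_total=1000,
              vac_total=600, corpus_total=100_000))  # ~0.40

A ΔP near 0.40, as in this invented case, would indicate that the verb is a strong cue to the construction (and hence to the meaning of the sentence as a whole), whereas a value near zero would indicate no contingency.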

Moving beyond the strict sentential context, Goldberg argues that the discourse-level concept of topic or focus is central to understanding the participant roles found in constructions. Furthermore, differences in how notions like focus (Goldberg, 1995) are mapped in a particular construction offer a convincing account of why speakers choose between seemingly competing constructions. Each construction has a distinct (discourse) function and appears in a distinct discourse environment. Consider the double-object construction (Subj V Obj Obj, e.g., she gave me the ball) versus the to-dative construction (Subj V Obj to Obj, e.g., she gave the ball to me). By Goldberg’s analysis, in the double-object construction, the direct object (the first noun following the verb) is the focus element; in contrast, in the to-dative construction, the object of the preposition to is in focus: “The difference between the double-object and to-dative constructions is subject to some dialect differences and gradability, yet it is possible to predict with high probability which construction will be preferred in a given context, for a given dialect” (this volume).

4. Some special constraints on categories

Psychology has provided us with overwhelming evidence that human memory is richly patterned and organized in categories which can show multiple effects, such as prototype effects and more hierarchically organized schemas (Varela, Thompson, & Rosch, 1991). Categories are constructed from exposure to individual exemplars which humans conceptualize as being similar and therefore belonging in the same grouping. Importantly, categories are dynamic and flexible, allowing new exemplars to be constantly added and connections among the exemplars to be reconfigured. Established categories also guide our interpretation of new information, which will either be added to an already established category or begin to establish a new category. Usage-based linguists argue that language is no different from other stimuli in showing category effects.

Bybee et al. provide a compelling discussion of how phonetic and phonological phenomena follow exemplar-based category formation, and how dynamic sound categories interact with frequency. Their discussion begins with the well-established fact that what we hear in spoken language and perceive as a meaningful sound (i.e., a phoneme, word, or phrase) is a cluster of exemplars with many phonetic variants: “[T]he cognitive representation of the phonetic shape of words and phrases is a cluster of phonetic exemplars, organized by their similarity to one another” (Bybee et al., this volume). Indeed, the evidence points to the conclusion that the same phoneme is never articulated exactly the same way twice. Thus, in the course of articulation, new phonetic tokens are constantly being added to the phoneme category. These new tokens, in turn, affect the cognitive representation of the form of words and phrases and essentially create new exemplars of those words and phrases. Crucially, because articulatory production is biased towards reduction and retiming, virtually every use of a word or phrase can potentially change its phonetic shape. Thus, recognizing dynamic, exemplar-based sound categories naturally explains “the empirical finding that higher frequency words and phrases change more rapidly when sound change is taking place, because the more a word or phrase is used, the more it is subject to production biases” (Bybee et al., this volume).
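The logic of this frequency-sensitive drift can be illustrated with a toy simulation; all parameters and the function below are invented for exposition and are not drawn from Bybee et al. Each production samples a stored exemplar, shortens it slightly under an articulatory bias, and stores the result back, so more frequently used items drift toward reduced forms faster.

# Exemplar-cloud dynamics: retrieve an exemplar, produce it with a small
# reduction bias plus noise, and store the new token back in the cloud.
import random

def mean_duration_after(uses, start_ms=300.0, bias_ms=1.0, noise_ms=5.0, seed=0):
    rng = random.Random(seed)
    cloud = [start_ms]
    for _ in range(uses):
        token = rng.choice(cloud)                  # retrieve a stored exemplar
        token += rng.gauss(0, noise_ms) - bias_ms  # produce it, biased shorter
        cloud.append(token)                        # store the new exemplar
    return sum(cloud) / len(cloud)

print(mean_duration_after(uses=1000))  # high-frequency item: strongly reduced
print(mean_duration_after(uses=50))    # low-frequency item: barely changed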

As Ellis reminds us in his contribution, categories also demonstrate contingency effects. A long-standing issue is how effective a given cue is for recognizing a category; a classic example is that, while eyes and wings are equally frequent when encountering birds, wings are a much more distinctive, reliable cue to class membership (Shanks, 1995). In addition, the evidence shows that categories are interactive and inter-related, so that the same exemplars might belong to multiple categories, depending on the contexts in which they are being used. Ellis, among others, argues that this inter-connectedness is the basis for the phenomenon of spreading activation, whose end result appears to boost the sensitivity of central exemplars of categories. If this is so, inter-connectedness modifies the veridical frequencies of the exemplars we encounter. Cognitive Linguistics argues for interactive schemas (e.g., Taylor, 2002), and spreading activation across inter-connected categories lends empirical support to, and helps explain, this theoretical claim.
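Returning to Shanks’s wings-versus-eyes contrast, cue reliability can be given a simple arithmetic reading as P(category | cue). The worked example below uses invented tallies purely for illustration.

# Both cues occur on every bird, but eyes also occur on every non-bird,
# so wings are the far more reliable cue to the category 'bird'.

def cue_validity(cue_and_category, cue_total):
    """P(category | cue): how reliably the cue signals the category."""
    return cue_and_category / cue_total

# Hypothetical tallies over 1,000 encountered animals, 300 of them birds.
birds, animals = 300, 1000
print(cue_validity(birds, birds + 20))  # wings: ~0.94 (a few winged non-birds)
print(cue_validity(birds, animals))     # eyes: 0.30 (every animal has eyes)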

5. Looking forward

We opened our Introduction with the promise to reflect on the key threads among the five contributions in the special issue while also highlighting what is unique in each way of thinking about the usage-based study of language. The five featured positions agree that frequency is central to language and language learning, and at the same time a deeply complex notion that forces us all to go well beyond simple explanations invoking frequent, iterative usage. Our contributors exhibit varying degrees of commitment to the tenet that human language, its patterns, and its learning are shaped at a fundamental, crucial level by meaning. Motivated by our own conviction that all language is meaningful, and that natural language forms do not occur in isolation but always within a context, we looked for ways in which each contribution accords context a constraining role in explanations of the make-up of language and the processes of language learning. The five papers clearly show that these constraints can be understood best when context is considered at various levels of granularity, from the surrounding phonetic and sentential environment, to discourse-level context, to wider social contexts entailing surrounding discourse communities. Finally, all five contributions show that humans’ ability to construct language cannot be fully understood unless we take into account constraints on the abstraction of categories, that is, on how humans handle new information by either adding it to already established categories or beginning to establish new ones.

We would like to close our Introduction with a mention of two additional research spaces for the usage-based study of language and language learning whose potential remains open to future exploration. One is what Zima and Brône (2015) have called an interactional – and, we would add, multimodal – turn in usage-based linguistic approaches. Although the papers presented in the special issue of Language and Cognition edited by Zima and Brône focused on first language issues, some researchers in second language acquisition have also begun to pursue the joint concerns of usage-based constructionists and discourse analysts, notably Cadierno and Eskildsen (2015). We agree with all these scholars that much more attention to the in-discourse, situated construction of language will be warranted in the future, given that actual, embodied, and multimodal human communication events are the backbone of what is meant by ‘usage’ in usage-based linguistics. To Zima and Brône’s desideratum for an interactional turn, we would like to add a call for future empirical investigation and theoretical elucidation of multilingual usage. Multilingualism is the natural state of human language in the world. Precise demographic evidence for this assertion is difficult to muster, but all estimates point in the same direction. For example, a conservative count in the Ethnologue lists 505 million speakers of English as an additional language on our planet, all of whom are by definition bilinguals or multilinguals (https://www.ethnologue.com/language/eng). A more liberal calculation of around 850 million is found in Wikipedia (https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population). And many of the 350 million native speakers of English that Wikipedia also lists will speak other languages and thus may well be bi/multilingual themselves. The inevitable conclusion is that much of the current world’s language usage involves speakers who know more than one language and whose languages therefore interact (always) in their minds and (very often) in their usage (Cook & Li Wei, 2016). This is true whether we look to language usage recorded in social media and big data or to interpersonal encounters fleetingly taking place all around us. We have come to accept the focus on one language at a time as normative in our usage-based studies, but a focus on these pervasive, highly frequent multilingual usage events will soon be necessary. To be sure, multilingual usage events will greatly complicate the study of key constructs like frequency, form–function pattern finding, mechanisms of categorization, abstraction, and spreading activation of interactive schemas. But the new focus will also greatly boost the validity and relevance of the evidence we are able to garner in order to continue honing our usage-based models of language and language learning.

In the meantime, and hopeful that an interactional turn and a multilingual turn are next on the horizon, we offer readers the five papers in this special issue with excitement. Taken together, they provide a nuanced, multi-faceted look at the complex interactions among frequency, meaning, and a range of cognitive capacities that are foundational to understanding how human language can emerge through usage.

References

Ambridge, B., Kidd, E., Rowland, C. F., & Theakston, A. L. (2015). The ubiquity of frequency effects in first language acquisition. Journal of Child Language, 42, 239–273.
Blevins, J. P. (2014). The morphology of words. In M. Goldrick, V. Ferreira, & M. Miozzo (Eds.), The Oxford handbook of language production (pp. 152–164). Oxford: Oxford University Press.
Cadierno, T., & Eskildsen, S. W. (Eds.) (2015). Usage-based perspectives on second language learning. Berlin: Walter de Gruyter.
Cook, V., & Li, Wei (Eds.) (2016). The Cambridge handbook of linguistic multicompetence. Cambridge: Cambridge University Press.
Ellis, N. C., O’Donnell, M. B., & Römer, U. (2014). The processing of verb-argument constructions is sensitive to form, function, frequency, contingency, and prototypicality. Cognitive Linguistics, 25, 55–98.
Goldberg, A. E. (1995). Constructions: a construction grammar approach to argument structure. Chicago, IL: University of Chicago Press.
Hay, J. (2001). Lexical frequency in morphology: Is everything relative? Linguistics, 39, 1041–1070.
Langacker, R. (1987). Foundations of cognitive grammar, Vol. 1: Theoretical prerequisites. Stanford, CA: Stanford University Press.
Shanks, D. R. (1995). The psychology of associative learning. Cambridge: Cambridge University Press.
Taylor, J. (2002). Cognitive grammar. Oxford: Oxford University Press.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: cognitive science and human experience. Cambridge, MA: MIT Press.
Zima, E., & Brône, G. (Guest Eds.) (2015). Special issue on cognitive linguistics and interactional discourse. Language and Cognition, 7, 485–590.