
L1 referential features influence pronoun reading in L2 for deaf, ASL–English bilinguals

Published online by Cambridge University Press:  10 February 2023

Katherine Sendek*
Affiliation:
University of California, Davis, USA
David P. Corina
Affiliation:
University of California, Davis, USA
Deborah Cates
Affiliation:
Iowa School for the Deaf, Council Bluffs, USA
Matthew J. Traxler
Affiliation:
University of California, Davis, USA
Tamara Y. Swaab
Affiliation:
University of California, Davis, USA
*
Author for correspondence: Katherine Sendek, E-mail: [email protected]

Abstract

Referential processing relies on similar cognitive functions across languages – in particular, working memory. However, this has only been investigated in spoken languages with highly similar referential systems. In contrast to spoken languages, American Sign Language (ASL) uses a spatial referential system. It is unknown whether the referential system of ASL (L1) impacts referential processing in English (L2). This cross-language impact may be of particular importance for deaf, bimodal bilinguals who sign in ASL and read in English. Self-paced reading times of pronouns in English texts were compared between ASL–English bimodal bilinguals and Chinese–English unimodal bilinguals. The results showed that L1 referential characteristics influenced pronoun reading time in L2. Furthermore, in contrast to Chinese–English bilinguals, ASL–English bilinguals’ referential processing during reading of English texts relied on vocabulary knowledge – not working memory. These findings emphasize the need to expand current theories of referential processing to include more diverse types of language transfer.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Introduction

Languages are interdependent in the bilingual mind. The concept of language transfer, or cross-linguistic influence, has been well studied in multilinguals (see Jarvis & Pavlenko, 2008 for review). This cross-linguistic connection shows that first language (L1) knowledge can influence processing of a second language (L2). This influence extends to reading in L2 (Karimi, 2015; Upton & Lee-Thompson, 2001), such that characteristics of L1 affect processing of L2 texts. Influence of L1 characteristics has been shown across language modalities as well. Signers of American Sign Language (ASL) transfer L1 features to English (L2) writing (Wolbers, Bowers, Dostal & Graham, 2014). It has further been shown that ASL signers coactivate sign equivalents during English word presentation (Lee, Meade, Midgley, Holcomb & Emmorey, 2019; Morford, Wilkinson, Villwock, Piñar & Kroll, 2011). The coactivation of signs and spoken equivalents has been replicated across multiple languages (Kubus, Villwock, Morford & Rathmann, 2015; Villameriel, Dias, Costello & Carreiras, 2016). Additionally, hearing bimodal bilinguals' capacity for simultaneous production of both signed and spoken language shows that these languages can coactivate successfully (Emmorey, Borinstein, Thompson & Gollan, 2008; Emmorey, Li, Petrich & Gollan, 2020; Emmorey, Petrich & Gollan, 2012). However, we do not yet know whether cross-modal language transfer can influence specific aspects of referential processing during reading of texts in L2. The present study examined this question by comparing referential processing in unimodal hearing Chinese–English bilinguals and bimodal deaf ASL–English bilinguals.

Referential processing is well studied as an element of discourse (Almor & Nair, 2007; Gordon, Grosz & Gilliom, 1993; see Arnold, 2010 for review). Components of interest within discourse include the referent – used to indicate or establish people or objects – and the anaphor – used to reestablish the referent. Referential processing links the anaphor to the associated referent, allowing comprehenders to track entities as they progress through the discourse. Anaphora and the reestablishment of the referent (anaphor resolution) are of interest in the current study. Reestablishment may take two different forms. Noun phrases, in the form of names and descriptors, can add to the understanding of an antecedent by contributing previously unknown elements. Pronouns, in contrast, reestablish an antecedent but do not contribute any new information about the referent (Almor, 1999). Example (1) below shows how pronouns (anaphors), such as he and his, can establish action and possession by Karim (the referent) without restating the name Karim.

  (1) Karim stopped. He put his notebook away.

  (2) Karim stopped, reluctantly heaving a heavy sigh of regret. He put his notebook away.

  (3) Karim stopped, reluctantly heaving a heavy sigh of regret. The world continued to pass by the table as the coffee continued to evaporate. Not a car nor person slowed. Time ticked on, as it always did. The warmth of the sun caressed the top of the striped umbrella as hazel eyes squinted to get a look at the flavorless cotton candy that danced in the jetstreams. So much to note, with so little motivation. The whistles of the wind howled for all to hear, but when a car horn blew, the hair on pale arms rose. Now annoyed, hazel rolled. Time to go. He put his notebook away.

  (4) Karim stopped, reluctantly heaving a heavy sigh of regret. The world continued to pass by the table as the coffee continued to evaporate. Not a car nor person slowed. Time ticked on, as it always did. The warmth of the sun caressed the top of the striped umbrella as hazel eyes squinted to get a look at the flavorless cotton candy that danced in the jetstreams. So much to note, with so little motivation. The whistles of the wind howled for all to hear, but when a car horn blew, the hair on pale arms rose. Now annoyed, hazel rolled. Time to go. Karim put his notebook away.

While pronouns are useful, their comprehension and appropriate attribution can be complicated by a number of factors within the discourse. One of these factors is semantic distance, or the number of intervening words and/or concepts between a pronoun and its referent. Semantic distance is a key component of the Information Load Hypothesis (ILH; Almor, 1999, 2000; Almor & Nair, 2007). To accomplish anaphor resolution, anaphors function as retrieval cues for their referents, allowing the entity to be recalled without explicit restatement (Corbett & Chang, 1983; Dell, McKoon & Ratcliff, 1983). The ILH attributes referential processing difficulty with pronouns to an effect of working memory: the more distance there is between the pronoun and referent, the more difficult it is to maintain the referent in working memory and link it to the pronoun once encountered. Linking the pronouns he and his to the referent Karim would thus be easier in (1) and (2) than in either (3) or, especially, (4). According to the ILH, this increased difficulty is due to the increased number of words and concepts that intervene between the pronoun and referent. As in (4), the semantic distance may become great enough that the antecedent referent is no longer active in the discourse representation and must be explicitly reestablished for the pronoun and referent to be appropriately linked.
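The surface notion of semantic distance can be made concrete with a small sketch (illustrative only: counting intervening words is just one way the ILH literature operationalizes distance, and the regex tokenizer here is a simplification):

```python
import re

def surface_distance(text, referent, pronoun):
    """Count the words intervening between the last mention of the
    referent and the (first occurrence of the) pronoun."""
    toks = re.findall(r"[a-z']+", text.lower())
    p = toks.index(pronoun.lower())                      # position of the pronoun
    r = max(i for i in range(p) if toks[i] == referent.lower())  # last prior mention
    return p - r - 1

ex1 = "Karim stopped. He put his notebook away."
ex2 = ("Karim stopped, reluctantly heaving a heavy sigh of regret. "
       "He put his notebook away.")
print(surface_distance(ex1, "Karim", "He"))  # 1
print(surface_distance(ex2, "Karim", "He"))  # 8
```

On examples (1) and (2) the distance grows from one to eight intervening words; on passages like (3) it would run into the dozens, which is where the ILH predicts working memory maintenance costs.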

The ILH was originally developed for English and has been studied in a number of spoken languages (Carminati, 2005; de Carvalho Maia, Vernice, Gelormini-Lezama, Lima & Almor, 2017; Gelormini-Lezama & Almor, 2011, 2014), including Chinese, where pronouns can be dropped (pro-drop; Yang, Gordon, Hendrick & Wu, 1999; Yang, Gordon, Hendrick & Hue, 2003). These studies have shown that manipulating the number of words between anaphor and antecedent disrupts processing in a variety of typologically different spoken languages. However, it is unknown whether this ILH account of referential processing applies to bilinguals with experience in signed languages, such as ASL. The referential system of ASL differs greatly from those of spoken languages; even the presence of pronouns in the language is a matter of debate (Friedman, 1975; Liddell, 2013; Liddell & Metzger, 1998). Here we test the hypothesis that the system of pronominal reference in ASL may influence referential processing during reading of English (L2) texts in deaf bimodal bilinguals. Specifically, we predict that characteristics of the ASL referential system will reduce the need to engage verbal working memory to maintain or reactivate antecedent referents when readers encounter anaphors.

Signers can establish referents using specific locations within the immediate physical space surrounding them (referential loci; Pfau, Salzmann & Steinbach, 2018), illustrated in Figure 1. Reestablishment of the antecedent with anaphors occurs when the signer points to, i.e., grammatically utilizes, one of the previously established loci. Certain spaces, such as those used to indicate what would be the first and second person equivalents in spoken language, are fixed when interlocutors are physically present and therefore do not need to be re-established: the signer points to themselves to indicate first person (Figure 1.1) and refers to the physical space of the interlocutor to indicate second person (Figure 1.2). To establish additional referents, the signer can utilize locations to their left or right (Figure 1.3), by pointing to the space and specifying the entity being referred to (or by signing the referent or object at that spatial location). For example, a signer might point to their immediate left (Figure 1.3b) and sign Karim. This space could then be used throughout the discourse to refer back to the referent, Karim. Unlike in many spoken languages (where reestablishment occurs, e.g., with a noun phrase or name), this space often remains associated with the previously introduced referent throughout the discourse without needing explicit reestablishment, unless otherwise specified (Frederiksen & Mayberry, 2016). Bimodal bilinguals may be able to transfer these characteristics from their signed language to referential processing when reading English texts, which could reduce the need to engage verbal working memory.

Fig. 1. Example of referential loci used for referencing in American Sign Language

ASL also differs from most spoken languages in using first person indexical shifts, which may influence the processing of pronouns. Indexical reference allows the speaker to embody a third person referent while using the first person, a phenomenon rarely seen in spoken languages. However, Amharic (a language spoken in Ethiopia) has been shown to have this type of shift and can illustrate it in a spoken language (Schlenker, 1999, 2003): when I is embedded in the appropriate clause, it refers to the reported speaker rather than the actual speaker. Signers most often use this form of reference, rather than pronouns or noun phrases, for re-establishment (Frederiksen & Mayberry, 2016). However, research on pronominal reference in ASL production is highly limited, and these results may not be universal.

Indexical reference also allows for conflation of second and third person pronouns in ASL through role shift (Lillo-Martin, 2013). While discourse often takes place in the neutral space directly in front of the signer, this may change to indicate actions by other entities within the discourse (Figure 2). Role shift allows signers to track referents within the discourse without other forms of re-establishment. The combination of role shift and the first-person form of the verb can create visual agreement in the grammatical structure (but see Lillo-Martin & Meier, 2011). As with spoken Amharic, role shift in sign language can be considered similar to direct quotation in English texts (Quer, 2005) and can be used as such, though it is not always equivalent (Zucchi, 2004). Role shift and visual agreement are present and well established in a variety of sign languages (Zucchi, 2004). However, this use of role shift can conflate the second and third person, making them less distinguishable and more ambiguous than their spoken counterparts (Meier, 1990; Quinto-Pozos, Muroski & Saunders, 2019).

Fig. 2. Example of indexical reference, or role shift, in American Sign Language. This is a common form of reference used within natural ASL discourse.

It is currently unknown whether the use of indexical space and other referential devices of ASL influences referential processing during reading of English texts. While we indicated earlier that the form and frequency of referential processing in ASL may reduce the need to engage verbal working memory, other differences between ASL and English may not transfer well when ASL–English bilinguals read English texts. As reviewed above, factors such as translational equivalence of the pronoun and ambiguity of the referent may be relevant. Previous work has shown that, particularly when reading multiple texts in L2, readers rely on linguistic knowledge from their L1 to assist with comprehension (Karimi, 2015). The potential conflation of second and third person in ASL may make processing of distinct grammatical persons in English difficult for ASL–English readers due to language interference (Kroll, Dussias, Bice & Perrotti, 2015). Finally, ASL reference is not gender marked. Work with unimodal hearing bilinguals has shown that processing of gender in L2 pronouns is less automatic when grammatical gender is not present in L1 (Dussias, Valdés Kroff, Guzzardo Tamargo & Gerfen, 2013; Lew-Williams & Fernald, 2007). Similarly, learners of a language with case marking requirements that differ from their dominant language also show interference (Austin, 2007; Montrul, Bhatt & Bhatia, 2012). But it is currently not known whether interference effects occur as a function of the presence or absence of grammatical person.

As discussed previously, explicit reestablishment of referents is less frequent in ASL than in English (Frederiksen & Mayberry, 2016). This may reduce the need for verbal working memory maintenance or reactivation of referents in signers. This preference for zero anaphora is similar to that of pro-drop languages. Pro-drop languages, such as Spanish and Chinese, allow speakers to omit overt subjects and invert subject-verb order; they often use conjugation or discourse topic to specify the subject. In contrast, languages like English require overt subjects and subject-verb ordering. Bilinguals whose L1 is pro-drop often carry these pro-drop patterns into their non-pro-drop L2, producing errors such as verb-subject ordering or subject omission (White, 1985). The opposite has been observed as well for non-pro-drop L1 and pro-drop L2 (Montrul & Rodriguez-Louro, 2006; Serratrice, Sorace & Paoli, 2004). These pro-drop characteristics may give L1 signers an advantage when reading texts by reducing the need to engage verbal working memory to maintain or reactivate the referent. In English, the discourse explicitly reestablishes the referent, through pronouns or noun phrases, when needed for the comprehender to correctly attribute the anaphor (Gordon et al., 1993; Gordon & Hendrick, 1998). According to the ILH, reestablishment comes when the distance is too great for working memory to maintain the referent (Almor, 1999, 2000). However, given that signers utilize explicit reestablishment less frequently (Frederiksen & Mayberry, 2016), they may be less affected by distance during anaphor resolution.
In contrast, pronouns are used more frequently in English than in ASL (Frederiksen & Mayberry, 2016). Using pronouns at a frequency felicitous in English would be a pragmatic violation in ASL (similar to what is found in de Carvalho Maia et al., 2017; Gelormini-Lezama & Almor, 2014). This could result in increased reading times for ASL–English bilinguals.

More general factors also influence reading fluency in deaf readers. Historically, literacy outcomes have often been poor among deaf individuals (Traxler, 2000). Learning to read, particularly for alphabetic languages like English, often relies on letter-phoneme mapping of speech sounds onto graphemes (Ehri, 2005), which is inaccessible to deaf readers. However, creating this letter-phoneme mapping may not be the only route to reading alphabetic languages (Cates, Traxler & Corina, 2022). Specifically, highly skilled deaf readers can see a word and access its meaning directly, resulting in shorter reading times (Traxler, Banh, Craft, Winsler, Brothers, Hoversten, Piñar & Corina, 2021), by bypassing the phonological activation that occurs in hearing readers (Coltheart, Rastle, Perry, Langdon & Ziegler, 2001; Villameriel, Costello, Giezen & Carreiras, 2022). This may make deaf readers more efficient than hearing readers, consistent with Bélanger and Rayner's (2015) Word Processing Efficiency Hypothesis.

Previous studies of reading skill that have compared deaf readers to hearing controls may face additional methodological complications. Deaf readers are, largely, bilingual: they often complete reading tasks in English but may have ASL as their first or dominant language. Comparing them to hearing monolinguals operating in their first language may introduce a number of confounds before differences in hearing status can be assessed, such as bilingual processing effects (Desmet & Duyck, 2007), language dominance (Heredia & Altarriba, 2001), and writing system. Previous research has shown that hearing Chinese–English bilinguals may be a better control group for assessing the reading ability of deaf ASL–English bilinguals due to their similarities in letter-phoneme mapping for English (Cates et al., 2022). Chinese languages utilize a logographic system which, in contrast to the English alphabetic system, does not have a tight grapheme-phoneme relationship, making it better suited for comparisons between deaf and hearing readers (Yan, Pan, Bélanger & Shu, 2015). Indeed, other studies comparing these two groups have shown similar levels of reading comprehension in English, while ASL bilinguals maintained many of the characteristic reading patterns seen in deaf readers (Traxler et al., 2021). Comparing deaf ASL–English bilinguals with hearing Chinese–English bilinguals eliminates the bilingual processing, language dominance, and letter-phoneme mapping differences that may have influenced previous studies, and isolates the effects of ASL as a first language in the present study.

In addition to the cognitive and perceptual characteristics shared by ASL–English and Chinese–English bilinguals, Chinese as a language has a number of important features that may affect the comparison to English and ASL. There are few form differences between the pronouns of English and modern Chinese. Chinese has distinct first, second, and third person pronouns, differing from English only in gender marking. In the spoken form, tā is traditionally he but is used for all third person singular referents. When written, gendered radicals are used within the character depending on the gender of the referent. However, because gender marking of pronouns is mixed in Chinese and non-existent in ASL, this characteristic was not investigated in the present study.

The present study investigates the influence of L1 referential systems on the processing of pronouns in L2 text. In particular, we are interested in the L1 system of reference used in ASL, given the differences in grammatical person as well as in the form and frequency of reference. In our initial confirmatory analysis, we investigate how grammatical person in L1 pronouns influences anaphor resolution of pronouns in L2. We predict that if anaphor resolution in L2 is influenced by the degree of similarity to grammatical person in L1, then deaf readers with L1 ASL will show differences in reading time for English (L2) second and third person pronouns while hearing Chinese–English bilingual readers will not. In our additional exploratory analysis, we investigate whether the system and frequency of reference in ASL may influence anaphor resolution in English. In particular, we investigate whether L1 experience with extended referent retention may produce differences in anaphor resolution not fully accounted for by the ILH. We predict that if the effect of semantic distance on anaphor resolution in L2 is influenced by L1 processing strategies, then deaf readers with L1 ASL will show different patterns of influence for factors associated with reading time for English (L2) pronouns as compared to hearing Chinese–English bilingual readers.

Methods

Participants

The study included 93 deaf participants (mean age = 23.71, range 18–43; gender: 62W/30M/1NA) and 49 hearing participants (mean age = 22.12, range 18–35; gender: 30W/18M/1NA). Deaf participants were ASL–English bilinguals recruited via advertisement and word of mouth at three universities in the United States: California State University, Northridge; Gallaudet University; and the Rochester Institute of Technology. All of the deaf participants met the Americans with Disabilities Act definition of deafness and were diagnosed as deaf before the age of three. All but two had self-reported hearing loss of 75 dB or greater in their better ear. Of the deaf participants, 13 had cochlear implants and 47 had hearing aids. Deaf participants had an average age of first exposure to American Sign Language of 6.81 years, with a very wide range (0–23 years). Age of acquisition of ASL and L1 language deprivation have previously been shown to influence reading in deaf readers (Cates et al., 2022; Traxler et al., 2021), and the order in which languages are learned also has an effect on deaf readers (Cormier, Schembri, Vinson & Orfanidou, 2012). However, our analyses did not show an effect of age of acquisition or language preference (ASL versus English). For this reason only, the deaf signers were treated as one large, heterogeneous group, with ASL referred to as their L1. Age of English exposure was not recorded for deaf participants, but it often occurs during the first year of formal education (~6 years). Self-report for race identified 13 participants as Asian and 53 participants as White. Deaf participants had an average of 15.52 (SD = 2.60) years of education.

Hearing participants were Chinese–English bilinguals recruited from the University of California, Davis undergraduate student body, with an average age of first language exposure of 1.02 years (SD = 1.28, range 0–6 years) for Chinese. Average age of English exposure for hearing participants was 7.81 years (SD = 3.77, range 1–16). Self-report for race identified 47 participants as Asian, with the remaining two not responding. Hearing participants had an average of 15.47 (SD = 2.30) years of education. All subjects provided informed consent before participating.

Groups did not statistically differ in socio-economic status (deaf: M = $23,478, SD = $10,823; hearing: M = $21,395, SD = $6755; p = .21), reading comprehension (deaf: M = 7.71, SD = 2.47; hearing: M = 7.54, SD = 1.71; p = .43), or phonological awareness (deaf: M = 52.83, SD = 12.89; hearing: M = 50.81, SD = 13.90; p = .92). However, groups did significantly differ in English word fluency (p < .001), with ASL–English deaf readers (M = 2.35, SD = 1.28) scoring significantly lower than Chinese–English hearing readers (M = 5.40, SD = 1.11). This is in line with previous findings for verbal fluency in deaf readers and is likely due to the nature of the task (Witkin, 2014). See Table 1 for full details.

Table 1. Participant characteristics and comparison*

* Values are means (standard deviations); italicized values are counts

** Score out of 10

*** Out of 80; monolingual mean 59.08 (13.97)

**** Out of 75; monolingual mean 51.60 (16.75)

***** Compares ASL and English fluency

Stimuli

The following five stories were used in the experiment: "I am Bigfoot" by Ron Carlson, "Cell Phones or Pheromones? New Props for the Mating Game" by Natalie Angier, "Four Score and Seven Lattes Ago: How Coffee Shortage Killed the Confederacy" by David A. Norris, "The Secret Life of Walter Mitty" by James Thurber, and "The Oval Portrait" by Edgar Allan Poe. Stories varied in length (M = 1204 words, range 694–2017 words), type (3 fiction, 2 nonfiction), and narrator perspective (1 first person, 2 second person, 1 third person limited, 1 third person omniscient). The pronouns present within the five stories varied in grammatical person, grammatical type, number, and gender (Table 2). Surface distance was calculated as the number of intervening words between the pronoun and the last mention of the referent (noun phrase or pronoun; M = 13.98, SD = 45.80). Conceptual distance was calculated as the number of non-repeating intervening noun phrases between the pronoun and the last mention of the referent (noun phrase or pronoun; M = 3.77, SD = 13.44).
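The difference between the two distance metrics can be sketched as follows (an illustration only; it assumes hand-annotated token indices for the referent, the pronoun, and the intervening noun phrases, since the study's annotation procedure is not detailed here):

```python
def distance_metrics(referent_idx, pronoun_idx, np_by_index):
    """Surface distance: intervening word count between the last
    mention of the referent and the pronoun.
    Conceptual distance: distinct (non-repeating) noun phrases
    among those intervening words."""
    surface = pronoun_idx - referent_idx - 1
    intervening_nps = {np.lower() for i, np in np_by_index.items()
                       if referent_idx < i < pronoun_idx}
    return surface, len(intervening_nps)

# "Karim(0) stopped(1) the(2) coffee(3) continued(4) to(5)
#  evaporate(6) near(7) the(8) table(9) he(10) ..."
surface, conceptual = distance_metrics(
    referent_idx=0, pronoun_idx=10,
    np_by_index={3: "the coffee", 9: "the table"})
print(surface, conceptual)  # 9 2
```

Note that a passage can have a large surface distance but a small conceptual distance when the intervening material repeats the same few noun phrases.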

Table 2. Frequency of pronouns within stimuli

Procedure

Participants attended two 2.5-hour sessions no more than 1 week apart. All stories were presented in the second session, in the same order. Following informed consent and completion of the background survey, participants took the tests described below. Testing sessions included as many as four participants at a time and began with the timed measures, ending with the self-paced measures. Experimenters explained task instructions in English or ASL as appropriate to each participant's communication needs. If more than one language modality was needed at a time, instructions were issued in each modality in turn. During the self-paced portion of the testing, experimenters were able to address each individual in whichever language modality was required. At least one researcher fluent in ASL was present for the duration of testing with deaf individuals.

Participants read an average of 4.39 stories (range 1–5, due to time constraints in testing). Stories were presented using E-Prime software on a laptop computer via a moving-window paradigm. At the start of each story, a screen appeared showing only the line layout and punctuation. Participants read each story one word at a time by pressing the space bar; with each press, the current word disappeared and the next word appeared. Following each story, participants answered a series of questions indicating how much of the story they recalled and then answered a set of ten multiple-choice comprehension questions per story (Freed, Hamilton & Long, 2017). The correct answers were totaled and the scores for each participant were averaged to yield one reading comprehension score.

Participants were given a large battery of tests, including measures of vocabulary comprehension (Nelson-Denny; Brown, 1960), verbal working memory (RSpan; Just & Carpenter, 1980; Unsworth, Heitz, Schrock & Engle, 2005), and verbal fluency (Ekstrom & Harman, 1976). See Cates and colleagues (2022) for an exposition of how these factors affected reading comprehension across participants.

The Nelson-Denny reading test (Brown, 1960) consists of two parts: vocabulary and reading comprehension. For vocabulary, participants complete multiple-choice questions in which they select the meaning of a target word. In the reading comprehension portion, they read five passages and again answer multiple-choice questions about the passages. In the RSpan test of verbal working memory (Just & Carpenter, 1980; Unsworth et al., 2005), participants read sentences while trying to retain a set of unrelated letters. Participants were shown sensible or nonsensical sentences, which they had to judge as such, before being shown a letter. They were asked to say the letter out loud before progressing to the next sentence-letter combination. Sets contained 3–7 sentence-letter combinations, for a total of 75. At the end of each set, participants were asked to recall the letters in order. The total number of letters correctly recalled was used as the verbal working memory score. Fluency scores were based on the number of words the participant produced in one minute (either through speech or fingerspelling) beginning with a specified letter or handshape.
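The RSpan scoring described above can be sketched as follows (the exact credit rule is not specified in the text; this sketch assumes a letter counts as correct when recalled in its original serial position):

```python
def rspan_score(trials):
    """Sum, across sets, the letters recalled in the correct
    serial position. Each trial is (presented, recalled)."""
    return sum(sum(p == r for p, r in zip(presented, recalled))
               for presented, recalled in trials)

trials = [
    (["F", "K", "B"], ["F", "K", "B"]),            # all 3 in position
    (["J", "T", "R", "L"], ["J", "R", "T", "L"]),  # only J and L in position
]
print(rspan_score(trials))  # 5
```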

Visual inspection of the data showed a positive skew in the reading time values. To obtain a more normal distribution for the outcome variable, reading times were reciprocal transformed (1/RT; Manikandan, 2010).
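The transform itself is simple (a sketch; the paper reports 1/RT, though some labs use variants such as −1000/RT to preserve direction and scale):

```python
def reciprocal_transform(rts_ms):
    """1/RT transform, which compresses the long right tail of a
    positively skewed reading-time distribution."""
    return [1.0 / rt for rt in rts_ms]

rts = [250, 300, 350, 2000]          # one slow outlier
print(reciprocal_transform(rts)[0])  # 0.004
```

After the transform, the 2000 ms outlier (0.0005) sits much closer to the bulk of the distribution than it did on the raw millisecond scale.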

Results

Confirmatory analysis

A 2 × 3 repeated measures ANOVA (aov in R) compared reciprocal-transformed reading times between deaf and hearing readers for first, second, and third person pronouns. Both the main effect of group (F(1, 57279) = 2550.72, p < .001) and of grammatical person (F(2, 57279) = 160.94, p < .001) were significant, with deaf readers (M = 354.81, SD = 447.03) showing significantly shorter reading times for pronouns than their hearing counterparts (M = 408.12, SD = 671.11; Figure 3). However, there was also a significant interaction (F(2, 57279) = 46.49, p < .001). Pairwise t-tests (Bonferroni corrected) showed that deaf readers were slower when reading first person pronouns (M = 373.23, SD = 313.67) as compared to second (M = 356.50, SD = 292.36; p < .001, d = 0.06) and third person (M = 344.26, SD = 524.82; p < .001, d = 0.07; Figure 4). Second and third person pronouns also differed significantly from one another (p < .001), though with a smaller effect size (d = 0.03) than the contrasts with first person. Hearing readers showed no significant differences in reading time across grammatical person (p > .05, d < .03). Deaf readers were significantly faster in reading all pronouns as compared to hearing readers (p < .001). An additional analysis for deaf readers using age of acquisition of ASL as a predictor showed no significant effect (p > .05).
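The effect sizes above are Cohen's d values for the pairwise contrasts, each evaluated at a Bonferroni-adjusted threshold. A minimal sketch of that computation (in Python, with hypothetical per-trial RTs, rather than the R pipeline actually used):

```python
import math

def cohens_d(x, y):
    """Cohen's d with pooled sample SD, as used for pairwise contrasts."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled

# Bonferroni correction across the three grammatical-person contrasts
# (first-second, first-third, second-third):
alpha_adjusted = 0.05 / 3  # each pairwise test evaluated at ~.0167

# Hypothetical per-trial RTs (ms) for two pronoun types:
first_person = [380.0, 371.0, 369.0, 377.0, 368.0]
third_person = [345.0, 343.0, 346.0, 342.0, 344.0]
print(round(cohens_d(first_person, third_person), 2))
```

With the very large trial counts here, even the small d values reported (0.03–0.07) reach significance; the toy data above are only meant to show the arithmetic.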

Fig. 3. Comparison of reading times for pronouns in deaf and hearing readers during self-paced reading. Deaf readers (red) had significantly shorter overall reading times for pronouns (shown on the y-axis) as compared to hearing readers (blue).

Fig. 4. Comparison of reading times for pronouns with differing grammatical person during self-paced reading. Deaf readers (red) had significantly longer reading times (shown on the y-axis) for first person as compared to second and third person (shown on the x-axis). Deaf readers had significantly shorter reading times than hearing readers (blue) for all pronoun types.

Additional analysis comparing reading times for words overall showed the same pattern, with deaf readers having shorter RTs (M = 375.92, SD = 1536.57, p < .001) than their hearing counterparts (M = 446.69, SD = 514.82). All outliers were retained in this analysis.

Exploratory analysis

Additional analyses used mixed effects models, fit with maximum likelihood estimation via the lmer function of the lme4 package in R, to predict pronoun reading times as a function of surface distance, conceptual distance, group (deaf vs hearing), R Span score, and Nelson-Denny score. Additional models including age of acquisition of ASL and language preference as predictors were also constructed; however, neither predictor was significant in any of these models, and they were therefore excluded. The data had a complex nested structure: the unit of analysis was the item response, with items nested within pronouns, pronouns nested within paragraphs, and paragraphs nested within texts. At the text level, responses were cross-classified by participant. To test the need for this nested data structure, the fit of this model (M1) was compared to a simplified model (M0) that included only a random subject effect to account for the nesting of scores within subject. A likelihood ratio test (LRT) supported the more complex nested data structure (χ2(3) = 9253.9, p < .001). This was also supported by reductions in the AIC (M1 = -655954, M0 = -646706) and BIC (M1 = -655900, M0 = -646789) values.
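The model-comparison machinery used throughout (LRT plus AIC/BIC) reduces to a few formulas. The sketch below is in Python with hypothetical log-likelihoods (chosen here so that the statistic reproduces the reported χ²(3) = 9253.9); it is not the lme4 output itself.

```python
import math

def lr_statistic(loglik_simple, loglik_complex):
    """Likelihood ratio test statistic for nested models; compared
    against a chi-square distribution with df equal to the difference
    in number of parameters."""
    return 2.0 * (loglik_complex - loglik_simple)

def aic(loglik, k):
    """Akaike information criterion; lower (more negative) is better."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion; penalizes each of the k
    parameters by log(n) rather than a constant 2."""
    return k * math.log(n) - 2 * loglik

# Hypothetical log-likelihoods for M0 (subject-only random effect) and
# M1 (full nesting), chosen to reproduce the reported chi2(3) = 9253.9:
ll_m0, ll_m1 = 323353.0, 327979.95
print(round(lr_statistic(ll_m0, ll_m1), 1))  # 9253.9
```

Because BIC's penalty grows with log(n), it favors sparser models than AIC at this sample size, which is why Model 2 and Model 3 can rank differently under the two criteria (see below).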

Next, five different models were constructed with increasing complexity of interactions between predictors (Table 3). All predictors at the item level were grand-mean centered. In all models, subscript i indicates the item, subscript j the participant, and subscript k the item nesting structure (additional nesting is omitted for clarity). The first, Model 1, was a base model that included all predictors without any interactions between them:

(1)$$\eqalign{& \frac{1}{reading\,time_{ijk}} = \beta_{0jk} + \beta_1\,group_j + \beta_2\,R\,Span_j + \beta_3\,Nelson\,Denny_j \cr & \quad + \beta_4\,surface\,distance_k + \beta_5\,conceptual\,distance_k + e_{ijk}}$$

Table 3. Model comparisons

*Values for Models 4 and 5 were excluded due to non-significance in the LRT

Model 2 included group as a predictor as well as interactions between surface distance and group and conceptual distance and group:

(2)$$\eqalign{& \frac{1}{reading\,time_{ijk}} = \beta_{0jk} + \beta_1\,group_j + \beta_2\,R\,Span_j + \beta_3\,Nelson\,Denny_j \cr & \quad + \beta_4\,surface\,distance_k + \beta_5\,conceptual\,distance_k \cr & \quad + \beta_6\,(surface\,distance_k \times group_j) + \beta_7\,(conceptual\,distance_k \times group_j) + e_{ijk}}$$

An LRT showed Model 2 to be a significant improvement on the base model (χ2(2) = 48.43, p < .001). Model 3 included the same parameters as Model 2, with the addition of interactions between surface distance and R Span score, conceptual distance and R Span score, surface distance and Nelson-Denny score, and conceptual distance and Nelson-Denny score. R Span score and Nelson-Denny score were not included as independent predictors in Model 3:

(3)$$\eqalign{& \frac{1}{reading\,time_{ijk}} = \beta_{0jk} + \beta_1\,group_j + \beta_2\,surface\,distance_k + \beta_3\,conceptual\,distance_k \cr & \quad + \beta_4\,(surface\,distance_k \times group_j) + \beta_5\,(conceptual\,distance_k \times group_j) \cr & \quad + \beta_6\,(surface\,distance_k \times R\,Span_j) + \beta_7\,(conceptual\,distance_k \times R\,Span_j) \cr & \quad + \beta_8\,(surface\,distance_k \times Nelson\,Denny_j) + \beta_9\,(conceptual\,distance_k \times Nelson\,Denny_j) + e_{ijk}}$$

An LRT showed that Model 3 was a significant improvement on Model 2 (χ2(2) = 13.73, p = .001).

The final two models were Model 4 and Model 5. Model 4 included all of the predictors in Model 3, with the addition of R Span score and Nelson-Denny score as independent predictors. Model 5 included all of the predictors of the previous models (surface distance, conceptual distance, group, R Span score, Nelson-Denny score) and allowed all of the subject-level variables to interact with each other and with each measure of distance. The measures of distance were not allowed to interact. LRTs for Model 4 (χ2(2) = 2.39, p = .30) and Model 5 (χ2(14) = 17.116, p = .25) showed no significant improvement over Model 3. All LRT results were supported by AIC and BIC comparisons: Model 3 showed lower AIC and BIC than all other models except Model 2. BIC penalized the additional complexity of Model 3 relative to Model 2; however, because that complexity aligned more closely with the concepts of interest, Model 3 was retained.

Follow-up analyses of the interactions between predictors and group were conducted by applying Model 3 to the data from deaf and hearing readers separately, with the grouping predictor excluded. For deaf readers, the only significant predictors in the model were the interactions of Nelson-Denny with surface distance and of Nelson-Denny with conceptual distance (Table 4). Hearing readers showed significance for the opposite set of predictors (Table 5).

Table 4. Predictor significance for model in deaf readers

Table 5. Predictor significance for model in hearing readers

Discussion

Confirmatory analysis

Our confirmatory results showed that deaf participants had shorter reading times for pronouns than their hearing counterparts (Figure 3). According to the word processing efficiency hypothesis (Bélanger & Rayner, 2015), skilled deaf readers do not have to activate phonological representations of written words to access semantic information, as hearing readers would. By bypassing phonological representations, deaf readers are able to process written words more efficiently. This increased efficiency leads to shorter RTs (Bélanger, Schotter & Rayner, 2014), as was seen in our own results. Our findings showed that this efficiency extends to reading pronouns, in addition to content words. Our results also support previous studies comparing deaf readers (Chinese Sign Language–Chinese bilinguals) to hearing readers with a Chinese L1 (Yan et al., 2015), where deaf readers were shown to have shorter RTs.

Additionally, our results showed that reading times for second and third person pronouns differed from first person pronouns for deaf readers, but not for hearing readers (Figure 4). These findings supported our hypothesis that deaf readers would process English pronouns differently depending on their similarity to ASL. L1 influences reading in L2, as readers depend on language similarities for processing texts (Karimi, 2015; Upton & Lee-Thompson, 2001; Wolbers et al., 2014). In ASL, there can be a conflation of second and third person pronouns (Meier, 1990; Quinto-Pozos et al., 2019); in English and Chinese, second and third person pronouns are distinct. For deaf readers, first person pronouns were likely processed differently than second and third person due to the similarity between first person reference in both ASL (L1) and English (L2; Jarvis & Pavlenko, 2008), a similarity which is not present for second and third person.

Our results showed that deaf readers had shorter RTs for second and third person pronouns as compared to first person. This result contrasts with previous literature, which finds facilitation for items with more closely shared meaning (Costa, Caramazza & Sebastian-Galles, 2000) or grammatical structure (Runnqvist, Gollan, Costa & Ferreira, 2013). However, longer RTs for the more closely shared first person pronouns may be due to interference from L1 and the associated increase in processing difficulty (Kroll, Bobb, Misra & Guo, 2008; Kroll et al., 2015).

In contrast to deaf readers, hearing Chinese–English bilingual readers did not show significant differences in reading time for grammatical person. Unlike ASL, Chinese does have distinct first, second, and third person pronouns. Due to the commonalities with their L1, Chinese–English bilinguals processed all three forms of English pronouns in a similar manner. However, no previous work on monolinguals or Chinese–English bilinguals has shown that pronouns are processed equally quickly regardless of grammatical person. With that in mind, our results still show group-wise differences between Chinese–English and ASL–English bilinguals in the reading times of first- versus second- and third-person pronouns that have not previously been reported.

Exploratory analysis

Our analysis showed that both surface distance and conceptual distance were significant predictors for hearing readers (Table 5). ILH posits that working memory constraints increase processing effort for words with a larger semantic distance (Almor, 1999, 2000; Almor & Nair, 2007), as increased distance requires readers to maintain representations in working memory for longer. Previous findings from Chinese readers support ILH-based referential processing (Yang et al., 1999) – however, those readers were not tested for the effects of semantic distance. Our results for hearing readers aligned with ILH, showing that the interaction between working memory (R Span score) and distance was significant for typically hearing Chinese–English bilingual readers. The interaction of distance with English vocabulary knowledge was not significant.

In contrast to hearing readers, our results showed that referential processing in deaf readers was influenced by factors not fully accounted for in previous studies. While distance and its interaction with working memory were significant predictors of pronoun RT for hearing readers, these factors were not shown to influence RT for deaf readers. Instead, our results showed that the interaction of distance with English vocabulary knowledge (Nelson-Denny score) was the only significant predictor of pronoun RT for deaf readers (Table 4). These results do not align with ILH, which attributes distance effects on referential processing to working memory (Almor, 1999, 2000; Almor & Nair, 2007). We do need to take into account here that Chinese is a pro-drop language with some characteristics similar to ASL (Koulidobrova, 2009; Wulf, Dudis, Bayley & Lucas, 2002), though speakers of Chinese produce a much lower proportion of null anaphora in speech than is seen in written language (Wang, Lillo-Martin, Best & Levitt, 1992). Given this similarity in L1, the effects we see in ASL–English bilinguals may be due to a utilization of spatial, rather than verbal, working memory (Hirshorn, Fernandez & Bavelier, 2012). Regardless, ASL–English bilinguals show more divergent patterns than were previously observed in referential processing research on spoken languages: their pronoun processing is influenced by vocabulary knowledge and not by verbal working memory. Additionally, these effects were not influenced by age of acquisition of ASL (Cates et al., 2022; Traxler et al., 2021) or language preference (Cormier et al., 2012).

Pronoun reading speed in deaf readers was predicted by interactions with English vocabulary knowledge, rather than working memory. Previous studies of English monolingual readers have shown that working memory predicts RT for pronouns, but vocabulary knowledge predicts reading comprehension more broadly (Freed et al., 2017) as well as complex syntactic integration (Kukona, Gaziano, Bisson & Jordan, 2021). For deaf readers, vocabulary knowledge has also previously been shown to influence reading comprehension (Sehyr & Emmorey, 2022; Cates et al., 2022). At increased semantic distances, deaf readers with a larger English vocabulary – and by extension better reading comprehension – may more efficiently process the lexical representation of the referent (Taylor & Perfetti, 2017), and subsequently link the pronoun to the referent. This finding relates to the Lexical Quality Hypothesis (Perfetti, 2017), wherein those with higher reading ability are able to use textual information more efficiently. In contrast, hearing readers rely more on working memory – rather than reading comprehension or textual information – to link pronouns to referents during story comprehension.

For both deaf ASL–English and hearing Chinese–English bilinguals, surface distance and conceptual distance showed opposite effects. Whereas surface distance is a mere reflection of the number of words between the antecedent and pronoun, conceptual distance is indicative of representational richness, as it is counted as the number of novel intervening noun phrases between the pronoun and the last mention of the referent. Our results showed that increased surface distance increased reading time, while increased conceptual distance decreased reading time for both groups (Tables 3 and 5). Increased surface distance made anaphor resolution more difficult, as is expected by ILH (Almor, 1999, 2000; Almor & Nair, 2007). But because conceptual distance is indicative of representational richness, it likely facilitated pronoun processing. Evidence for this idea comes from a study that compared representationally rich versus poor contexts and showed that pronouns with equally probable potential referents are easier to process in representationally rich contexts (Karimi, Swaab & Ferreira, 2018).

The unexpected directionality of the effects of conceptual distance may also be due to collinearity of the two distance predictors. Overall, collinearity does not affect the goodness of fit of the models (Neter, Kutner, Nachtsheim & Wasserman, 1996). However, high collinearity between two factors can reduce the reliability of the estimated coefficients (Midi, Sarkar & Rana, 2010), which may have resulted in the unexpected effect of conceptual distance seen in our models. Additionally, high collinearity can influence p-values (Midi et al., 2010) but, importantly, only for the factors which share collinearity. Crucially, the groupwise comparisons shown in this analysis were therefore unaffected (see Footnote 2).

Overall, the results show that there may be additional influence of L1 on referential processing that is not fully accounted for by ILH, particularly for readers whose L1 utilizes a very distinct system of reference. Though ILH in bilinguals has been supported by studies of a number of spoken languages (Carminati, 2005; de Carvalho Maia et al., 2017; Gelormini-Lezama & Almor, 2011, 2014), as well as by our own results for hearing readers (Table 5), these languages all have very similar systems of reference. In contrast, ASL uses referential loci to establish and reestablish referents, and while this type of reference is similar to pronouns in English, it is not fully equivalent (Pfau et al., 2018). Additionally, spoken language often relies on pronouns to reestablish referents, while ASL rarely utilizes this form in discourse (Frederiksen & Mayberry, 2016). Finding no effect of verbal working memory in ASL–English bilinguals is particularly compelling when comparing them to Chinese–English bilinguals, as Chinese is a pro-drop language (Huang, 1984). Our results show that these differences in referential systems influence L2 readers' processing of pronouns in a way that is not fully accounted for by ILH. As the current study only included one predictor for working memory – namely, verbal working memory – future studies may consider spatial working memory, episodic memory, and long-term memory as factors for predicting reading time of pronouns in deaf readers (Corina et al., 2020; Hirshorn et al., 2012).

Limitations

The present study was conducted solely in the L2 of both groups of participants. Studies of referential processing have been conducted in a number of languages, but these studies are often limited to spoken or Latin-based languages. Those conducted in non-Latin languages did not investigate effects of semantic distance on referential processing (Yang et al., 1999, 2003). In the present study, we only examined L2 processing for ASL–English and Chinese–English bilinguals and did not investigate referential processing in the L1 of either group. While these individuals may show similarities or differences in L2 processing, we cannot extend our findings to L1. However, that was not the focus of this study, which concerned the effects of language transfer from L1 to L2 and whether such transfer may cause divergence from a current theoretical framework of referential processing. Therefore, analysis of participants' referential processing was conducted in their English L2 rather than in their L1.

Another potential limitation is that our participants included almost twice as many deaf readers as hearing readers, which may have biased our models towards the structure best fitting the deaf readers. This may have prevented the more complex models from fitting, as a narrow set of predictors – the interactions of distance and vocabulary score – was significant only for deaf readers. However, even with models biased toward deaf readers, we were able to observe a number of effects within the hearing reader sample. The unbalanced number of participants was originally intended to allow for manipulation of age of acquisition of L1 for deaf readers, which has previously been shown to influence L2 processing (Cates et al., 2022). An additional analysis was conducted with age of acquisition as a predictor, but it was not significant.

Given that the two groups in this study differed in hearing status as well as language, it is possible that these effects are due to deprivation of auditory information. However, given that the two groups have previously been shown to have similar phonological awareness (Cates et al., 2022), this seems unlikely. Additionally, the primary analysis was limited to pronouns and, as a result, phonology should play very little role in processing outside of general reading speed (Bélanger & Rayner, 2015). Future studies may benefit from including highly proficient hearing signers to determine whether these effects are based purely on experience with ASL.

Conclusion

As a whole, the results of the present study show the need to expand current theories of referential processing, specifically to include possible influences of more diverse types of language transfer. Language transfer influenced processing of first versus second and third person pronouns in L2, depending on their similarity to concepts in L1. While previous work has successfully applied ILH to a number of languages in monolinguals and bilinguals, the languages tested were primarily Latin-based. Our findings extend ILH working memory effects of semantic distance to referential processing in Chinese–English bilinguals; they do not extend to deaf, ASL–English bilinguals.

These findings also illustrate the value of studying diverse languages, not just for their ability to contrast with more well-studied languages, but for the diversity itself. We approached this study with a benefit – rather than deficit – mindset. Deaf readers have long been studied for their deficits, with efforts primarily focusing on "fixing" them – be that the readers or the deficits. While some have explored the advantages of deaf readers (Bélanger & Rayner, 2015; Bélanger et al., 2012; Yan et al., 2015), these efforts focus largely on how deafness influences the processing of spoken language. They do not center on the Deaf community or the languages which are integral to that community. By centering ASL, we are able to analyze the qualities of signed language itself and begin to properly value it within psycholinguistic research.

Acknowledgements

We would like to acknowledge the participants who worked with us in this study for their contribution, particularly those within the Deaf Community. Additionally, we acknowledge the National Science Foundation Center Funding: SBE 0541953, which provided the funding for this project.

Competing Interests

The authors declare none.

Data Availability Statement

Data and materials that support these findings are available upon request submitted to the authors.

Footnotes

1 Conceptual distance is defined here as the number of non-repeating noun phrases between the pronoun and referent.

2 In an attempt to eliminate collinearity as a factor, we modeled the two distance predictors separately. The models failed to converge however, and we instead retained the combined models.

References

Almor, A (1999) Noun-phrase anaphora and focus: The informational load hypothesis. Psychological Review, 106, 748–765.
Almor, A (2000) Constraints and mechanisms in theories of anaphor processing. In Pickering, M, Clifton, C and Crocker, M (eds), Architectures and Mechanisms for Language Processing. Cambridge, England: Cambridge University Press.
Almor, A and Nair, VA (2007) The form of referential expressions in discourse. Language and Linguistics Compass, 1, 84–99.
Arnold, JE (2010) How speakers refer: The role of accessibility. Language and Linguistics Compass, 4, 187–203.
Austin, J (2007) Grammatical interference and the acquisition of ergative case in bilingual children learning Basque and Spanish. Bilingualism: Language and Cognition, 10, 315–331.
Bélanger, NN, Baum, SR and Mayberry, RI (2012) Reading difficulties in adult deaf readers of French: Phonological codes, not guilty! Scientific Studies of Reading, 16, 263–285.
Bélanger, NN and Rayner, K (2015) What eye movements reveal about deaf readers. Current Directions in Psychological Science, 24, 220–226.
Bélanger, NN, Schotter, E and Rayner, K (2014) Young deaf readers' word processing efficiency. In 55th Meeting of the Psychonomic Society, Long Beach, CA.
Brown, JI (1960) The Nelson-Denny reading test.
Carminati, MN (2005) Processing reflexes of the Feature Hierarchy (Person > Number > Gender) and implications for linguistic theory. Lingua, 115, 259–285.
Cates, DM, Traxler, MJ and Corina, DP (2022) Predictors of reading comprehension in deaf and hearing bilinguals. Applied Psycholinguistics, 1–43.
Coltheart, M, Rastle, K, Perry, C, Langdon, R and Ziegler, J (2001) DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204.
Corbett, AT and Chang, FR (1983) Pronoun disambiguation: Accessing potential antecedents. Memory & Cognition, 11, 283–294.
Corina, DP, Farnady, L, LaMarr, T, Pedersen, S, Lawyer, L, Winsler, K, … & Bellugi, U (2020) Effects of age on American Sign Language sentence repetition. Psychology and Aging, 35, 529.
Cormier, K, Schembri, A, Vinson, D and Orfanidou, E (2012) First language acquisition differs from second language acquisition in prelingually deaf signers: Evidence from sensitivity to grammaticality judgement in British Sign Language. Cognition, 124, 50–65.
Costa, A, Caramazza, A and Sebastian-Galles, N (2000) The cognate facilitation effect: Implications for models of lexical access. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 1283.
de Carvalho Maia, J, Vernice, M, Gelormini-Lezama, C, Lima, MLC and Almor, A (2017) Co-referential processing of pronouns and repeated names in Italian. Journal of Psycholinguistic Research, 46, 497–506.
Dell, GS, McKoon, G and Ratcliff, R (1983) The activation of antecedent information during the processing of anaphoric reference in reading. Journal of Verbal Learning and Verbal Behavior, 22, 121–132.
Desmet, T and Duyck, W (2007) Bilingual language processing. Language and Linguistics Compass, 1, 168–194.
Dussias, PE, Valdés Kroff, JR, Guzzardo Tamargo, RE and Gerfen, C (2013) When gender and looking go hand in hand: Grammatical gender processing in L2 Spanish. Studies in Second Language Acquisition, 35, 353–387. https://doi.org/10.1017/S0272263112000915
Ehri, LC (2005) Learning to read words: Theory, findings, and issues. Scientific Studies of Reading, 9, 167–188.
Ekstrom, RB and Harman, HH (1976) Manual for kit of factor-referenced cognitive tests, 1976. Educational Testing Service.
Emmorey, K, Borinstein, HB, Thompson, R and Gollan, TH (2008) Bimodal bilingualism. Bilingualism: Language and Cognition, 11, 43–61.
Emmorey, K, Li, C, Petrich, J and Gollan, TH (2020) Turning languages on and off: Switching into and out of code-blends reveals the nature of bilingual language control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46, 443.
Emmorey, K, Petrich, JA and Gollan, TH (2012) Bilingual processing of ASL–English code-blends: The consequences of accessing two lexical representations simultaneously. Journal of Memory and Language, 67, 199–210.
Frederiksen, AT and Mayberry, RI (2016) Who's on First? Investigating the referential hierarchy in simple native ASL narratives. Lingua, 180, 49–68.
Freed, EM, Hamilton, ST and Long, DL (2017) Comprehension in proficient readers: The nature of individual variation. Journal of Memory and Language, 97, 135–153.
Friedman, LA (1975) Space, time, and person reference in American Sign Language. Language, 940–961.
Gelormini-Lezama, C and Almor, A (2011) Repeated names, overt pronouns, and null pronouns in Spanish. Language and Cognitive Processes, 26, 437–454.
Gelormini-Lezama, C and Almor, A (2014) Singular and plural pronominal reference in Spanish. Journal of Psycholinguistic Research, 43, 299–313.
Gordon, PC, Grosz, BJ and Gilliom, LA (1993) Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17, 311–347.
Gordon, PC and Hendrick, R (1998) The representation and processing of coreference in discourse. Cognitive Science, 22, 389–424.
Heredia, RR and Altarriba, J (2001) Bilingual language mixing: Why do bilinguals code-switch? Current Directions in Psychological Science, 10, 164–168.
Hirshorn, EA, Fernandez, NM and Bavelier, D (2012) Routes to short-term memory indexing: Lessons from deaf native users of American Sign Language. Cognitive Neuropsychology, 29, 85–103.
Huang, CTJ (1984) On the distribution and reference of empty pronouns. Linguistic Inquiry, 531–574.
Jarvis, S and Pavlenko, A (2008) Crosslinguistic influence in language and cognition. Routledge.
Just, MA and Carpenter, PA (1980) A theory of reading: From eye fixations to comprehension. Psychological Review, 87, 329.
Karimi, H, Swaab, TY and Ferreira, F (2018) Electrophysiological evidence for an independent effect of memory retrieval on referential processing. Journal of Memory and Language, 102, 68–82.
Karimi, MN (2015) L2 multiple-documents comprehension: Exploring the contributions of L1 reading ability and strategic processing. System, 52, 14–25.
Koulidobrova, E (2009) SELF: Intensifier and 'long distance' effects in ASL. In Proceedings of ESSLLI.
Kroll, JF, Bobb, SC, Misra, M and Guo, T (2008) Language selection in bilingual speech: Evidence for inhibitory processes. Acta Psychologica, 128, 416–430.
Kroll, JF, Dussias, PE, Bice, K and Perrotti, L (2015) Bilingualism, mind, and brain. Annual Review of Linguistics, 1, 377.
Kubus, O, Villwock, A, Morford, JP and Rathmann, C (2015) Word recognition in deaf readers: Cross-language activation of German Sign Language and German. Applied Psycholinguistics, 36, 831–854.
Kukona, A, Gaziano, O, Bisson, MJ and Jordan, A (2021) Vocabulary knowledge predicts individual differences in the integration of visual and linguistic constraints. Language, Cognition and Neuroscience, 1–16.
Lee, B, Meade, G, Midgley, KJ, Holcomb, PJ and Emmorey, K (2019) ERP evidence for co-activation of English words during recognition of American Sign Language signs. Brain Sciences, 9, 148.
Lew-Williams, C and Fernald, A (2007) Young children learning Spanish make rapid use of grammatical gender in spoken word recognition. Psychological Science, 18, 193–198.
Liddell, SK (2013) Indicating verbs and pronouns: Pointing away from agreement. In The signs of language revisited (pp. 268–282). Psychology Press.
Liddell, SK and Metzger, M (1998) Gesture in sign language discourse. Journal of Pragmatics, 30, 657–697.
Lillo-Martin, D (2013) The point of view predicate in American Sign Language. In Language, gesture, and space (pp. 165–180). Psychology Press.
Lillo-Martin, D and Meier, RP (2011) On the linguistic status of 'agreement' in sign languages. Theoretical Linguistics, 37. https://doi.org/10.1515/thli.2011.009
Manikandan, S (2010) Data transformation. Journal of Pharmacology and Pharmacotherapeutics, 1, 126.Google ScholarPubMed
Meier, RP (1990) Person deixis in American sign language. Theoretical issues in sign language research, 1, 175190.Google Scholar
Midi, H, Sarkar, SK and Rana, S (2010) Collinearity diagnostics of binary logistic regression model. Journal of interdisciplinary mathematics, 13, 253267.CrossRefGoogle Scholar
Montrul, SA, Bhatt, RM and Bhatia, A (2012) Erosion of case and agreement in Hindi heritage speakers. Linguistic Approaches to Bilingualism, 2, 141176.CrossRefGoogle Scholar
Montrul, S and Rodriguez-Louro, C (2006) Beyond the syntax of the null subject parameter. The acquisition of syntax in Romance languages, 401418.CrossRefGoogle Scholar
Morford, JP, Wilkinson, E, Villwock, A, Piñar, P and Kroll, JF (2011) When deaf signers read English: Do written words activate their sign translations?. Cognition, 118, 286292.CrossRefGoogle ScholarPubMed
Neter, J, Kutner, MH, Nachtsheim, CJ and Wasserman, W (1996) Applied linear statistical models.Google Scholar
Perfetti, CA (2017) Lexical quality revisited. Developmental perspectives in written language and literacy: In honor of Ludo Verhoeven, 5167.Google Scholar
Pfau, R, Salzmann, M and Steinbach, M (2018) The syntax of sign language agreement: Common ingredients, but unusual recipe. Glossa: a journal of general linguistics, 3.Google Scholar
Quer, J (2005, May) Context shift and indexical variables in sign languages. In Semantics and linguistic theory (Vol. 15, pp. 152168).Google Scholar
Quinto-Pozos, D, Muroski, K and Saunders, E (2019) Pronouns in ASL–English simultaneous interpretation. Journal of Interpretation, 27, 3.Google Scholar
Runnqvist, E, Gollan, TH, Costa, A and Ferreira, VS (2013) A disadvantage in bilingual sentence production modulated by syntactic frequency and similarity across languages. Cognition, 129, 256263.CrossRefGoogle ScholarPubMed
Schlenker, P (1999) Propositional attitudes and indexicality: a cross categorial approach (Doctoral dissertation, Massachusetts Institute of Technology).Google Scholar
Schlenker, P (2003) A plea for monsters. Linguistics and philosophy, 26, 29120.CrossRefGoogle Scholar
Sehyr, ZS and Emmorey, K (2022) Contribution of lexical quality and sign language variables to reading comprehension. The Journal of Deaf Studies and Deaf Education, 27, 355-372.CrossRefGoogle ScholarPubMed
Serratrice, L, Sorace, A and Paoli, S (2004) Crosslinguistic influence at the syntax-pragmatics interface: Subjects and objects in English-Italian bilingual and monolingual acquisition. Bilingualism: Language and cognition, 7(3), 183205.CrossRefGoogle Scholar
Traxler, CB (2000) The Stanford Achievement Test: National norming and performance standards for deaf and hard-of-hearing students. Journal of deaf studies and deaf education, 5, 337348.CrossRefGoogle ScholarPubMed
Traxler, MJ, Banh, T, Craft, MM, Winsler, K, Brothers, TA, Hoversten, LJ, Piñar, P and Corina, DP (2021) Word skipping in deaf and hearing bilinguals: Cognitive control over eye movements remains with increased perceptual span. Applied Psycholinguistics, 42, 601630.CrossRefGoogle Scholar
Unsworth, N, Heitz, RP, Schrock, JC and Engle, RW (2005) An automated version of the operation span task. Behavior research methods, 37, 498505.CrossRefGoogle ScholarPubMed
Upton, TA and Lee-Thompson, LC (2001) The role of the first language in second language reading. Studies in second language acquisition, 23, 469495.CrossRefGoogle Scholar
Villameriel, S, Costello, B, Giezen, M and Carreiras, M (2022) Cross-modal and cross-language activation in bilinguals reveals lexical competition even when words or signs are unheard or unseen. Proceedings of the National Academy of Sciences, 119, e2203906119.CrossRefGoogle ScholarPubMed
Villameriel, S, Dias, P, Costello, B and Carreiras, M (2016) Cross-language and cross-modal activation in hearing bimodal bilinguals. Journal of Memory and Language, 87, 5970.CrossRefGoogle Scholar
Wang, Q, Lillo-Martin, D, Best, CT and Levitt, A (1992) Null subject versus null object: Some evidence from the acquisition of Chinese and English. Language acquisition, 2, 221254.CrossRefGoogle Scholar
White, L (1985) The “pro-drop” parameter in adult second language acquisition. Language learning, 35, 4761.CrossRefGoogle Scholar
Witkin, GA (2014) Clustering in lexical fluency tasks among deaf adults (Doctoral dissertation, Gallaudet University).Google Scholar
Wolbers, KA, Bowers, LM, Dostal, HM and Graham, SC (2014) Deaf writers' application of American Sign Language knowledge to English. International Journal of Bilingual Education and Bilingualism, 17, 410428.CrossRefGoogle Scholar
Wulf, A, Dudis, P, Bayley, R and Lucas, C (2002) Variable subject presence in ASL narratives. Sign Language Studies, 5476.CrossRefGoogle Scholar
Yan, M, Pan, J, Bélanger, NN and Shu, H (2015) Chinese deaf readers have early access to parafoveal semantics. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41, 254.Google ScholarPubMed
Yang, CL, Gordon, PC, Hendrick, R and Wu, JT (1999) Comprehension of referring expressions in Chinese. Language and Cognitive Processes, 14, 715743.CrossRefGoogle Scholar
Yang, CL, Gordon, PC, Hendrick, R and Hue, CW (2003) Constraining the comprehension of pronominal expressions in Chinese. Cognition, 86, 283315.CrossRefGoogle ScholarPubMed
Zucchi, A (2004) Monsters in the visual mode. Unpublished Manuscript, Università degli Studi di Milano.Google Scholar
Fig. 1. Example of referential loci used for referencing in American Sign Language.

Fig. 2. Example of indexical reference, or role shift, in American Sign Language. This is a common form of reference used within natural ASL discourse.

Table 1. Participant characteristics and comparison*

Table 2. Frequency of pronouns within stimuli

Fig. 3. Comparison of reading times for pronouns in deaf and hearing readers during self-paced reading. Deaf readers (red) had significantly shorter overall reading times for pronouns (shown on the y-axis) as compared to hearing readers (blue).

Fig. 4. Comparison of reading times for pronouns with differing grammatical person during self-paced reading. Deaf readers (red) had significantly longer reading times (shown on the y-axis) for first person as compared to second and third person (shown on the x-axis). Deaf readers had significantly shorter reading times than hearing readers (blue) for all pronoun types.

Table 3. Model comparisons

Table 4. Predictor significance for model in deaf readers

Table 5. Predictor significance for model in hearing readers