
Voice processing ability predicts second-language phoneme learning in early bilingual adults

Published online by Cambridge University Press:  17 January 2025

Gaël Cordero*
Affiliation:
Department of Psychology, Faculty of Medicine and Health Sciences, Universitat Internacional de Catalunya, Barcelona, Spain
Jazmin R. Paredes-Paredes
Affiliation:
Department of Psychology, Faculty of Medicine and Health Sciences, Universitat Internacional de Catalunya, Barcelona, Spain
Manuel Perea
Affiliation:
Department of Methodology and ERI-Lectura, Universitat de València, Valencia, Spain Nebrija Research Center in Cognition, Universidad Antonio de Nebrija, Madrid, Spain
Nuria Sebastian-Galles
Affiliation:
Department of Information and Communication Technologies, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
Begoña Díaz
Affiliation:
Department of Psychology, Faculty of Medicine and Health Sciences, Universitat Internacional de Catalunya, Barcelona, Spain
Corresponding author: Gaël Cordero; Email: [email protected]

Abstract

Individuals differ greatly in their ability to learn the sounds of second languages, even when learning starts early in life. Recent research has suggested that the ability to identify the idiosyncratic acoustic variations introduced into the speech stream by the speaker might be relevant for second-language (L2) phoneme learning. However, only a positive correlation between voice recognition and phoneme learning has been shown. In the present study, we investigated whether voice processing ability predicts L2 phoneme learning. We employed a battery of behavioral cognitive ability measures to assess voice processing ability and L2 phoneme learning in 57 early bilingual adults. Confirmatory factor analyses (CFAs) and structural equation modeling (SEM) revealed that voice processing ability predicts L2 phoneme learning. Our findings align with theories of speech perception that attribute a fundamental role to the analysis of voice cues and suggest that the accurate identification of speaker-specific variation is also relevant for phoneme learning.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Highlights

  • Large individual differences in voice processing and L2 phoneme learning

  • CFAs support voice processing and L2 phoneme learning being distinct abilities

  • SEMs of accuracy and reaction time data show that voice ability predicts L2 phoneme learning

1. Introduction

Anyone who has taken a second-language (L2) course will have noticed that we display considerable individual differences in language learning. Some people struggle with the most basic abilities, while others seem to absorb linguistic knowledge effortlessly. One of the most challenging aspects of learning an L2 is the acquisition of its speech sounds (i.e., phonemes), an ability subject to great individual differences, with only a minority of learners achieving high proficiency (Schmitz et al., 2018; Sebastian-Galles & Baus, 2005; Sebastian-Galles & Díaz, 2012). Studies have found that individual differences in L2 phoneme command persist even after accounting for comparable experiences and opportunities to learn the L2 (Archila-Suerte et al., 2016; Díaz et al., 2012; Sebastian-Galles & Baus, 2005; Sebastian-Galles & Díaz, 2012). Yet, the learner-related factors that impact L2 phoneme command are poorly understood. A recent study (Díaz et al., 2022) showed that individual differences in L2 phoneme proficiency were related to the ability to recognize trained (i.e., learned) voices. Here, we tested whether voice processing abilities (operationalized as the ability to recognize and discriminate voices) can predict attained L2 phoneme learning in a sample of early bilingual adults using a battery of behavioral tests and structural equation models (SEMs).

Speech is a highly variable and complex signal. It contains both linguistic information, which reflects the message the speaker intends to transmit, and voice information, which provides cues about various characteristics of the speaker. Listeners use linguistic information to understand what is being said, while voice information is exploited for successful social interactions (Nygaard & Tzeng, 2021). The complexity and variability of the speech signal are largely due to these two types of information not being discretely encoded; there is no one-to-one mapping between the percepts of phonemes and their acoustic correlates across speakers (Peterson & Barney, 1952). The anatomy of the vocal tract, which is responsible for speech production, is unique to each speaker. Consequently, the acoustic characteristics of each speaker’s voice are also unique. The main acoustic features that characterize voices are the average fundamental frequency, which is perceived as voice pitch, and the frequency values of formants (i.e., resonances of the vocal tract), which give rise to the percept of vocal timbre (Baumann & Belin, 2010; Ghazanfar & Rendall, 2008; Latinus & Belin, 2011). While the first and second formants (F1 and F2) are claimed to be the primary cues to vowel identity (Fox et al., 1995; Yang & Fox, 2014), higher formants have been proposed to carry most of the vocal timbre information, as they exhibit minimal within-speaker variation across vocalizations (Kitamura & Akagi, 1995). However, as stated, the spectral values of all formants are determined by the anatomy of the speaker’s vocal tract. Therefore, theories of speech perception must address how the perceptual system resolves the lack of invariance between speech-sound (i.e., phoneme) percepts and their acoustic correlates across speakers.

Many of the solutions proposed by speech perception theories to this lack-of-invariance problem require the accurate identification of the speaker-specific spectro-temporal changes embedded in the speech signal. Speaker normalization theories argue that speech perception is accomplished by initially identifying the acoustic idiosyncrasies introduced into the speech stream by the speaker and discarding them from further processing. Thus, only the phoneme cues that enable the recognition of the corresponding phoneme representations are retained (Choi et al., 2018; Johnson & Sjerps, 2021; Nusbaum & Magnuson, 1997; Zhang & Chen, 2016). However, the specific acoustic cues onto which normalization is applied, such as the ratio between the F1 and F2 of vowels or the absolute fundamental frequency, vary across theoretical proposals and remain a matter of debate (for a review, see Persson & Jaeger, 2023). Conversely, distributional (Kleinschmidt & Jaeger, 2015; McMurray & Jongman, 2011) and exemplar-based (Goldinger, 1998; Klatt, 1979; Sumner et al., 2014) models of speech perception do not consider voice information as noise to be discarded, but rather as fundamental for speech perception. These models propose that the speech perceptual system resolves the lack of invariance between phoneme percepts and their acoustic correlates by representing voice-dependent variations of speech. While distributional models claim that listeners retain statistical distributions of the range of variability of phoneme cues across speakers, exemplar-based models propose that listeners store memory traces of actual speech segments that contain both linguistic and voice details. Thus, according to these two theoretical proposals, speaker variations of phoneme productions are accounted for by either inferring the most probable outcome or by a similarity-matching process, respectively. Both distributional and exemplar-based models of speech perception share the underlying assumption that exposure to speaker variability provides the speech perceptual system with the capability to accurately perceive speech. Numerous studies have shown that voice and linguistic information interact during speech processing, as posited by all of the theoretical proposals enumerated above. The perception of synthesized ambiguous vowels is strongly dependent on the spectro-temporal characteristics of a speaker’s voice in a preceding sentence (Darwin et al., 1989; Krumbiegel et al., 2022; Ladefoged & Broadbent, 1957; Miller et al., 1984; Nearey, 1989; Newman & Sawusch, 2009; Reinisch & Sjerps, 2013; Sjerps et al., 2013, 2019) regardless of language familiarity (Sjerps & Smiljanić, 2013).
Familiarity with a speaker is beneficial for speech comprehension in acoustically challenging scenarios, such as noisy environments or multi-talker situations (Drozdova et al., 2019; Johnsrude et al., 2013; Magnuson et al., 2021; Nygaard et al., 1994; Nygaard & Pisoni, 1998; Souza et al., 2013; Yonan & Sommers, 2000).

A growing body of evidence suggests that voice processing ability, the capacity of a listener to identify the speaker-specific acoustic variations introduced into the speech stream, is not only relevant for speech and speaker recognition (Johnson & Sjerps, 2021; Nygaard & Tzeng, 2021) but might also influence phoneme learning. The acquisition of non-native phonetic contrasts is enhanced when learning occurs with multiple speakers rather than a single speaker (Bradlow et al., 1997; Bradlow & Pisoni, 1999; Deng et al., 2018; Iverson et al., 2005; Lively et al., 1993, 1994; Logan et al., 1991; Wong, 2014; Ylinen et al., 2010; Zhang et al., 2021; but see Brekelmans et al., 2022). This benefit in L2 phoneme learning is assumed to arise from the greater acoustic–phonetic variability that exposure to multiple speakers entails. This variability would allow L2 learners to identify the acoustic properties that convey linguistic information across speakers and facilitate accurate speech perception when new speakers are encountered (Deng et al., 2018; Iverson et al., 2005; Ylinen et al., 2010). The relevance of voice processing ability for language learning processes was also reported by Houston and Jusczyk (2000), who found that familiarity with the characteristics of a speaker contributes to speech segmentation during early language learning. Infants were familiarized with isolated words spoken by one speaker and then presented with passages enunciated by a different speaker that occasionally contained the familiarized words. Seven-and-a-half-month-old infants recognized the trained words when familiarized and tested with speakers of the same sex but were unable to generalize across sexes. Houston and Jusczyk (2000) proposed that the ability to accurately disentangle voice information from linguistic information develops in parallel with language acquisition.

Additional evidence advocating for the importance of voice processing ability for language learning is provided by research on dyslexia, a developmental disorder characterized by difficulties in reading and spelling despite normal intelligence, neurological integrity, and educational opportunities. Current conceptualizations attribute dyslexia to an underlying phonological deficit that impedes the optimal association between phonemes and their respective characters (Ramus, 2003). Behavioral studies have established an association between dyslexia and difficulties in voice recognition (Perea et al., 2014; Perrachione et al., 2011). Perea et al. (2014) found that children and adults with dyslexia exhibited an impairment in recognizing speakers both in the language for which they had prior phoneme representations, i.e., their native language (L1), and in an unfamiliar language, leading them to suggest that poor voice recognition skill is a trait of dyslexia (Perea et al., 2014; but see Perrachione et al., 2011). This interpretation is in line with electrophysiological work showing that children with dyslexia exhibit reduced encoding of pitch-related features compared to typically developing children (Chandrasekaran et al., 2009) and suggests that deficient voice processing ability might underlie the phonological deficit that characterizes dyslexia.

Further evidence that phoneme learning and voice processing are related abilities is provided by the advantage in voice recognition that bilinguals exhibit over monolinguals when discriminating speakers in an unfamiliar language (Fecher & Johnson, 2019, 2022; Levi, 2019). Fecher and Johnson (2019, 2022) proposed that a richer phonetic upbringing gives bilingual infants higher sensitivity to phonetic cues, thus facilitating speaker recognition despite the absence of reliable phoneme representations. While a richer phonetic upbringing may underlie bilinguals’ advantage in voice recognition over monolinguals, bilingualism cannot account for the positive correlation between individual differences in voice recognition and L2 phoneme learning observed in a recent study, since that sample was entirely composed of early bilingual adults with similar opportunities to learn the L2 (Díaz et al., 2022). This study took advantage of the considerable variance displayed by Spanish (L1)–Catalan (L2) early bilinguals in their capacity to discriminate the Catalan-specific vowel contrast /e/ - /ε/, since native Spanish speakers perceive both phonemes as the Spanish vowel /e/ (Bosch et al., 2000; Pallier et al., 1997, 2001; Sebastian-Galles et al., 2006; Sebastian-Galles & Soto-Faraco, 1999). This phenomenon, where two L2 speech sounds are perceived as a single phoneme from the native language, is known as perceptual assimilation and constitutes one of the most challenging scenarios L2 speakers face (Best & Tyler, 2007; Flege, 1995). The bilinguals studied by Díaz et al. (2022) were selected from a previous study (Schmitz et al., 2018) according to whether they had exhibited either native-like or below-native performance in three behavioral tasks that evaluated their ability to perceive the L2-specific vowel contrast /e/ - /ε/. The bilinguals were administered a voice recognition task (adapted from Perea et al., 2014; Perrachione et al., 2011), which required them to learn associations between voices speaking the participants’ first language and cartoon avatars while their behavioral and electroencephalographic responses were recorded.

In addition to the voice recognition task, Díaz et al. (2022) administered a non-word association task (NWAT), which required participants to learn associations between auditory non-words enunciated by a single speaker and cartoon avatars. The task served to obtain a behavioral measure of the participants’ general capacity to learn audiovisual associations, an ability that might have influenced participants’ performance in the voice recognition task. The behavioral data showed that voice recognition ability positively correlated with attained L2 phoneme discrimination, while neither of these two measures correlated with the NWAT. Analysis of the electroencephalographic data revealed a positive correlation between brain activity during voice recognition and behavioral L2 phoneme discrimination ability at two time windows: 300–340 and 880–1140 ms. These findings were in line with previous studies, which had reported voice recognition eliciting positive brain electrophysiological responses 300 ms after stimulus onset (Humble et al., 2019; Schweinberger, 2001; Zäske et al., 2014, 2018). The positive relation between voice recognition (at the behavioral and electroencephalographic levels) and L2 phoneme discrimination ability evidenced shared individual variance between L2 phoneme and voice recognition processes. This newfound relation between two seemingly independent processes opened up the possibility that voice processing abilities impact the final attainment of L2 phonemes. Díaz et al. (2022) suggested that the correlation between voice recognition ability and L2 phoneme learning might stem from L2 learners with proficient voice processing skills being better equipped to disentangle voice and linguistic information during learning, resulting in finer-tuned L2 phoneme representations and thus greater accuracy when detecting L2 phonemes. However, this proposal was limited by the correlational nature of the evidence.

In the present study, we examined whether the ability to accurately identify the acoustic idiosyncrasies introduced into the speech stream by a speaker (i.e., voice processing ability) predicts L2 phoneme learning using structural equation modeling (SEM; for a list of all acronyms used in this article, see Appendix 1). We employed a battery of behavioral tests to assess voice processing ability and attained L2 phoneme learning in a sample of 57 early Spanish (L1)–Catalan (L2) bilingual adults with similar characteristics to the participants in Díaz et al. (2022). Voice processing ability was operationalized as the ability to recognize and discriminate speakers. We assessed participants’ voice recognition skills using three different tasks. The first was a voice recognition task in the native language (L1) of the participants, Spanish, which was identical to the task employed in Díaz et al. (2022). This L1 voice recognition task consisted of training participants to recognize five voices and subsequently testing voice recognition accuracy. Recognizing voices in one’s L1 is facilitated by prior phonological and semantic knowledge of the spoken language (Yu et al., 2023) and results in greater accuracy than recognizing voices in an unknown language (Lx) (Perea et al., 2014; Perrachione et al., 2011). To obtain a richer characterization of participants’ voice processing ability than in Díaz et al. (2022), we also administered an Lx voice recognition task, similar to the one employed by Perea et al. (2014), in which the voices spoke Chinese. By using both L1 and Lx voice recognition tasks, we aimed to capture participants’ ability to identify voice cues that are intertwined with linguistic information in two different situations: when prior linguistic knowledge facilitated the identification of voice cues (L1 voice recognition task) and when it did not (Lx voice recognition task). Lastly, to deepen our understanding of voice processing abilities, we assessed participants’ ability to identify speaker-specific cues embedded in the speech signal in the absence of linguistic-dependent acoustic variations. For this purpose, we designed a novel voice discrimination task (VDT), which required participants to evaluate whether two emotional interjections (Belin et al., 2008) had been produced by the same or different unfamiliar speakers. We employed affect bursts as stimuli because emotional tone is a within-person source of non-linguistic variation that drastically modulates the spectro-temporal characteristics of the speech signal (Lavan et al., 2019a). These three voice tasks therefore evaluated participants’ voice processing abilities in three situations that varied in their engagement of speech processes: linguistic information present and familiar (L1 voice recognition task), linguistic information present but unfamiliar (Lx voice recognition task), and linguistic information absent (voice discrimination task).
The participants’ L2 phoneme learning ability was quantified using two tasks that evaluated L2 phoneme knowledge at the sub-lexical and lexical levels, respectively: a categorization task (CT) of synthetic vowels (Pallier et al., 1997) and an auditory lexical decision task (Schmitz et al., 2018; Sebastian-Galles et al., 2005; Sebastian-Galles & Baus, 2005). All tasks measured accuracy and reaction time (RT). While both accuracy scores and RT capture effective cognitive processing, they are qualitatively different measures. Accuracy scores capture how similar the decision alternatives are to each other and how effectively the correct option can be identified, whereas RT measures the speed with which a participant identifies the correct option. Perceptual decision-making models have highlighted the need to study both measures when investigating individual differences since, surprisingly, they tend to exhibit low correlation at the individual level (Ratcliff et al., 2010, 2015a, 2015b). Drawing firm conclusions in behavioral studies therefore necessitates interpreting both measures (Ratcliff et al., 2015a).

We conducted confirmatory factor analysis (CFA) to investigate whether the accuracy and RT data, modeled separately, were represented more adequately by two related latent variables (i.e., voice processing ability and L2 phoneme learning), as hypothesized, or rather by a single latent variable (i.e., general speech ability). After confirming that the model with two latent variables provided an overall better fit to the data, we proceeded to investigate our main hypothesis that voice processing ability predicts L2 phoneme learning with SEM. A positive result would provide insight into the high variability early bilingual adults display in their command of L2 phonemes and suggest that voice processing influences L2 learning.

2. Methods

2.1. Sample size estimation

The minimum sample size required for this study was estimated using an a priori power analysis (Hancock & Mueller, 2013). Using a tool designed for SEM studies (Soper, 2023), we calculated the minimum sample size as a function of the number of observed and latent variables (5 and 2, respectively), anticipated effect size (β = .61, based on Díaz et al., 2022), desired probability (p = .05), and statistical power (π = .80). This analysis determined that a minimum of 12 participants was necessary to detect an effect. However, to ensure the convergence of the CFAs and SEMs, we aimed to collect data from a minimum of 50 participants, following the recommendation of Bentler and Chou (1987) of having a minimum of 10 participants per indicator.

2.2. Participants

The sample of this study was composed of 57 Spanish–Catalan bilingual adults (40 female; mean age 21 years; age range 18–26) born and raised in the metropolitan area of Barcelona in Catalunya, an autonomous community of Spain where Spanish and Catalan are co-official languages. The L1 of all participants was Spanish; they had been raised in monolingual Spanish families and had not been systematically exposed to Catalan until the age of 4 years, when mandatory bilingual schooling begins. All participants were highly fluent speakers of Catalan; from kindergarten on, they had received mandatory bilingual education. At the time of testing, all participants were pursuing or had obtained a university degree in Catalonia, indicating that they had completed mandatory bilingual schooling, a requirement to access higher education.

Participants were selected using an online survey in Google Forms that collected information concerning the personal history (place of birth, place/s of residence, etc.) and language profile (L1, L2, age of acquisition of each spoken language, current use of each spoken language, etc.) of the respondent and their extended family. This was done to ensure that the participants had no substantial experience with any language other than Spanish during their initial years of life (0–4 years of age) and that systematic exposure to Catalan only began upon commencing mandatory bilingual schooling. Participants answered free-response questions inquiring about the language(s) employed to communicate with each family member in their early childhood environment. All participants reported exclusively communicating in Spanish with both of their parents and other regular caretakers. None of the participants had extended family members or caretakers from the eastern region of Andalusia or the Region of Murcia, two regions in the south of Spain. These regions were excluded because their Spanish dialects employ both the phoneme /ε/ and the standard Spanish /e/ (Sanders, 1994; Soriano, 2012). Participants exposed to one of these Spanish dialects during their early infancy would have had an advantage in distinguishing the two phonemes we exploited to evaluate L2 phoneme learning.

None of the participants possessed substantial musical training, as defined by a previous study (Kaganovich et al., 2013). Substantial musical training consisted of meeting at least two of the following three criteria: (1) onset of musical training before the age of 12 years; (2) a minimum of 5 years of musical training; and (3) current or past membership in a musical group or ensemble. None of the participants had received a clinical diagnosis of a hearing problem, learning disability, or neurological impairment. Of the 1123 respondents who completed the online questionnaire, only 68 were eligible for inclusion in the final sample, of whom 57 agreed to participate in this study. Participants provided their written informed consent and were monetarily compensated for their time (10 €). The Medical Faculty and Health Sciences Ethics Committee of the Universitat Internacional de Catalunya approved the procedures (Protocol no. PSI-2020-05).

2.3. Materials

A battery composed of six behavioral tasks was employed to evaluate the participants’ voice processing ability, L2 phoneme learning, and general audiovisual learning capacities. Voice processing ability was assessed with the L1 voice recognition task (L1 VRT), the Lx voice recognition task (Lx VRT), and the VDT. The indicators of L2 phoneme learning were a CT and a lexical decision task (LDT). General audiovisual learning was evaluated with the NWAT. All tasks registered both accuracy and RT data and were programmed and executed in MATLAB (Version R2021a, MathWorks, Inc., Natick, MA, USA) using the Psychophysics Toolbox extensions (3.0.18; Brainard, 1997; Pelli, 1997). Here, we present a summarized description of the tasks. A detailed description can be found in the Supplementary Materials.

2.3.1. Voice recognition tasks (VRTs)

The L1 VRT and the Lx VRT, adapted from Perea et al. (2014), followed an identical procedure and differed solely in the stimuli they employed. In the L1 VRT, the auditory stimuli consisted of 10 Spanish sentences recorded by 5 native Spanish speakers, while the Lx VRT employed 10 Chinese sentences recorded by 5 native Chinese speakers. Ten female avatars were created, of which five were employed in each VRT. The VRTs trained participants to associate voices with avatars and then tested the learning that the participants had attained. Participants were taught the associations between voices and avatars in two phases, each composed of 25 trials: the training and the short test. The trials of the training followed an ABX structure: two voice–avatar pairings were sequentially presented, and one of the two voices was then repeated while the five avatars were displayed. Participants had to indicate as fast as possible, by means of a button press, which of the five avatars the repeated voice corresponded to. Feedback was provided concerning the participants’ response accuracy, and the correct avatar was displayed on the screen. The trials of the short test consisted of the presentation of an auditory stimulus accompanied by the five avatars. Participants indicated as fast as they could which of the five avatars was associated with the presented voice. As in the training, feedback was provided after each response. The test phase, composed of 50 trials, followed the same structure as the short test, but no feedback was provided. Participants were trained and tested on different sentences.

2.3.2. Voice discrimination task (VDT)

The Montreal Affective Voices set (Belin et al., 2008) provided the stimuli for the VDT. This set is composed of 10 different speakers enunciating nine affective interjections using the vowel /ɑ/. The VDT followed an AX discrimination design: two auditory stimuli were sequentially presented, and participants indicated via button press, as fast as they could, whether the same or different speakers had enunciated the two vocalizations. In half of the trials, both stimuli had been enunciated by the same speaker, while in the other half, they had been enunciated by different speakers. The VDT comprised 52 trials.

2.3.3. Categorization task (CT)

The CT followed the design presented by Pallier and collaborators (1997). The stimuli consisted of a continuum of seven synthesized vowel stimuli between the Catalan vowels /e/ and /ε/. In 63 trials (nine trials per stimulus), participants had to respond as fast as they could via button press whether they perceived the vowel as the first vowel in the Catalan word Pere (/perə/, the name Peter) or as the first vowel in pera (/pεrə/, which means pear).

2.3.4. Lexical decision task (LDT)

The LDT employed in this study was taken from Sebastian-Galles et al. (2005). The stimuli consisted of 344 auditory stimuli (experimental and control) enunciated by a native Catalan speaker. The experimental stimuli included 132 words containing one of the two phonemes from the targeted Catalan contrast (i.e., /e/ or /ε/) and 132 non-words designed by substituting the /e/ or /ε/ present in the real words with the other member of the phoneme pair. Eighty control stimuli (40 Catalan words and 40 non-words) were also employed. The control non-words were derived from a set of Catalan words different from the control and experimental words, by replacing a vowel phoneme with a phoneme employed in both Spanish and Catalan. In each of the 212 trials, participants were presented with an auditory stimulus and had to indicate via button press whether the stimulus was part of the Catalan lexicon. The experimental stimuli were distributed between two lists to ensure that participants only heard one member of the same word pair. Both lists included all control stimuli, and their use was counterbalanced across participants.

2.3.5. Non-word association task (NWAT)

The NWAT was initially introduced in Díaz et al. (2022). Six non-words enunciated by a single native Spanish speaker constituted the auditory stimuli for this task, while six avatars constituted the visual stimuli. The NWAT sought to train and test participants’ ability to learn audiovisual associations. It was composed of two phases: a training and a test. Each of the 12 trials of the training phase consisted of the simultaneous presentation of a non-word–avatar pairing. The test trials, 48 in total, consisted of the presentation of a non-word while the six avatars were displayed. Participants indicated as fast as possible, via button press, which avatar was associated with the presented non-word.

2.4. Procedure

The six tasks were administered in a single one-hour experimental session. The tasks were presented to all participants in the following order: Lx VRT, LDT, VDT, L1 VRT, NWAT, and, lastly, the CT. The order of task presentation was arbitrary; however, it was kept constant across participants to prevent task-order effects from playing a role in individual task performance. Instructions for each task were displayed via text. Any doubts the participants had were resolved by the experimenters before commencing each task. Instructions were delivered in Catalan for the LDT and the CT and in Spanish for the other four tasks. Participants were instructed to provide their responses with their dominant hand and to keep their response fingers over the response buttons. For all participants, the six tasks were presented on an HP EliteBook 840 G7 Notebook PC with Audio-Technica ATH-PRO7x headphones, ensuring a consistent and comfortable audio level. Participants were tested individually in sound-attenuated rooms at the Psychology and Psychiatry University Clinic and Digital Media Studios of the Universitat Internacional de Catalunya and at the laboratories of the Center for Brain and Cognition of the Pompeu Fabra University.

2.5. Data analysis

We investigated whether voice processing ability predicted L2 phoneme learning using SEM, a statistical methodology that systematically analyzes the relationships among several variables. Following Brown (2015), CFAs were conducted prior to the SEMs. CFA assesses the relationships between observed measures and latent variables and allows for validating that the hypothesized latent constructs are manifested through the employed indicators. As in a previous study (Díaz et al., 2022), we tested whether general audiovisual learning abilities influenced the participants’ performance in the VRTs by computing Pearson’s correlations between the accuracy scores and RT of the NWAT and the VRTs. Mplus Version 8.8 Demo (Mplus: Statistical Analysis with Latent Variables, 2017) was used to estimate the CFAs and SEMs. All other analyses were conducted with R 4.2.2 (R Core Team, 2019) and RStudio 2022.12.0 (RStudio Team, 2020).

Each task’s accuracy and RT scores were computed from trials in which participants delivered their responses within a specific time window. These time windows were designed to exclude responses provided before perceptual processing while including responses delivered up to three and a half seconds after the mean stimulus duration, similar to one of our previous studies (Sebastian-Galles et al., 2005). The time windows for each task were as follows: L1 VRT: 250–7500 ms; Lx VRT: 250–8500 ms; VDT: 250–5000 ms; CT: 250–4000 ms; LDT: 250–4000 ms; and NWAT: 250–4000 ms. Following these criteria, the following percentages of data were discarded: L1 VRT: 0.70%; Lx VRT: 3.12%; VDT: 0.67%; CT: 3.07%; LDT: 2.53%; and NWAT: 5.52%. Due to technical malfunctions, the LDT data of two participants were not registered. Under the assumption of data missing at random, multiple imputation by chained equations was performed with the R package mice. Subsequently, multivariate normality was assessed using the Mahalanobis distance (D²M), computed with the R stats package. D²M was calculated for each participant’s responses to the five experimental tasks, and its statistical significance was tested with χ² at a significance level of .001 (Kline, 2015).
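For illustration, these screening steps can be sketched in R, the language used for these analyses. The snippet below is a hedged sketch rather than the authors’ code: the data frame scores and its column names (l1_vrt, lx_vrt, vdt, ct, ldt) are placeholders.

```r
# Sketch of the missing-data and outlier screening described above.
# Assumes a data frame `scores` with one row per participant and one
# column per indicator (placeholder names: l1_vrt, lx_vrt, vdt, ct, ldt).
library(mice)

# Multiple imputation by chained equations under the missing-at-random assumption
imputed <- mice(scores, m = 5, seed = 1)
scores_complete <- complete(imputed)  # first completed dataset

# Mahalanobis distance of each participant across the five indicators
d2m <- mahalanobis(scores_complete,
                   center = colMeans(scores_complete),
                   cov = cov(scores_complete))

# Flag multivariate outliers against chi-square with df = 5 at alpha = .001
outliers <- which(d2m > qchisq(1 - .001, df = 5))
```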

Accuracy scores were computed for each participant and each task. For the VRTs, we computed the proportion of accurate responses, following previous studies that employed voice recognition tasks (Díaz et al., 2022; Perea et al., 2014; Perrachione et al., 2011). For the VDT (see Table A1 in Appendix 2 for descriptive statistics of the proportion of correct responses), since it aimed to evaluate participants’ ability to discriminate between pairs of stimuli, we computed d’, an index of discriminability derived from signal detection theory (see Table A2 in Appendix 3 for the mean proportions of hits and false alarms) (McNicol, 2005; Snodgrass & Corwin, 1988; Stanislaw & Todorov, 1999). Accuracy scores for the CT were computed as in previous studies (Schmitz et al., 2018; Sebastian-Galles et al., 2005; Sebastian-Galles & Baus, 2005). We sought to obtain a measure that reflected whether participants could perceive the difference between the /e/ stimuli (steps 1 and 2) and the /ε/ stimuli (steps 6 and 7). For this, the average /e/ responses to steps 6 and 7 were subtracted from the average /e/ responses to steps 1 and 2. Thus, high positive scores reflect a good separation of /e/ and /ε/, scores close to zero reflect that participants did not respond differently to steps 1 and 2 than to steps 6 and 7, and negative scores indicate that participants’ responses showed a reversed pattern. Negative CT scores were assumed to originate from responses systematically delivered in reverse, which still requires the capacity to perceive the difference between phoneme categories. Thus, the CT scores were transformed into absolute values. For the LDT, the mean accuracy for the experimental words was computed (see Table A1 in Appendix 2). Previous studies that used the same LDT with the same population computed the A’ score, a non-parametric unbiased index of sensitivity (McNicol, 2005; Snodgrass & Corwin, 1988; Stanislaw & Todorov, 1999), due to participants’ strong bias to consider most experimental non-words as real words (Schmitz et al., 2018; Sebastian-Galles et al., 2005; Sebastian-Galles & Baus, 2005). After confirming that our participants showed a high rate of false alarms for the experimental stimuli of the LDT (see Table A2 in Appendix 3), consistent with previous studies (Schmitz et al., 2018; Sebastian-Galles et al., 2005; Sebastian-Galles & Baus, 2005), we computed A’ scores for the LDT. A’ ranges between 0.5 (random responding) and 1.0 (perfect discrimination). To ensure high L2 lexical knowledge, we excluded participants with an A’ < 0.8 in the control trials of the LDT.
Lastly, for the NWAT, we employed the proportion of accurate responses as the accuracy score, following the study in which this task was introduced (Díaz et al., 2022). RT scores for all tasks and participants were computed as the mean RT of trials in which the correct response was delivered.
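The accuracy indices described above follow standard signal detection formulas (Stanislaw & Todorov, 1999). As a hedged illustration, with placeholder variable names, they could be computed as follows:

```r
# d' for the VDT: distance between the z-transformed hit and false-alarm rates.
# Rates of exactly 0 or 1 should be corrected (e.g., Snodgrass & Corwin, 1988)
# before applying qnorm, which would otherwise return -Inf/Inf.
d_prime <- function(hit_rate, fa_rate) qnorm(hit_rate) - qnorm(fa_rate)

# Non-parametric A' for the LDT (formula for hit_rate >= fa_rate)
a_prime <- function(hit_rate, fa_rate) {
  0.5 + ((hit_rate - fa_rate) * (1 + hit_rate - fa_rate)) /
    (4 * hit_rate * (1 - fa_rate))
}

# CT score: mean /e/ responses to steps 1-2 minus steps 6-7, in absolute value
# (e_resp is a placeholder vector of mean /e/ responses per continuum step)
ct_score <- abs(mean(e_resp[1:2]) - mean(e_resp[6:7]))
```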

Separate CFAs and SEMs were constructed for the accuracy scores and the RT data. The model parameters of the SEMs and CFAs were estimated using the robust maximum likelihood estimator, which does not rely on the assumption of a normal distribution (Kline, 2015). While theoretically the tasks we employed are indicators of two different, yet related, constructs (i.e., voice processing ability and L2 phoneme learning), we also tested whether a model with a single latent variable provided an adequate fit to the data, to rule out a competing explanatory model that might be supported statistically. We employed the Akaike information criterion (AIC) to compare the models with two latent variables (i.e., voice processing ability and L2 phoneme learning) and the models with a single latent variable (i.e., general speech ability) (Akaike, 1998). The chi-square test of model fit (χ²) was considered significant at p < .05. A significant result on this statistic would indicate model misfit, reflecting a deviation between the population covariance structure and the model-implied covariance structure (Kline, 2015). Goodness of fit was also assessed via two indices that are robust in models with relatively small degrees of freedom, as in the present study (Shi et al., 2022): the comparative fit index (CFI) and the standardized root mean square residual (SRMR). The CFI compares the fit of the specified model to a baseline null model in which the latent variables are unrelated, obtained by constraining the covariance between the latent variables to zero. The SRMR represents the average squared deviation between the observed and reproduced covariances. Following the recommendations of Hu and Bentler (1999), the following values were interpreted as indicating a good fit: CFI ≥ .90 and SRMR ≤ .08. For completeness, we report the root mean square error of approximation (RMSEA), a measure of model misfit due to model misspecification commonly employed in models with large degrees of freedom, though not recommended for models with small degrees of freedom such as those presented here (Kenny et al., 2015).
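To make the model comparison concrete, the sketch below shows how it might look in lavaan, an open-source R alternative to the Mplus software actually used. Indicator names are placeholders, and the code is illustrative only, not the authors’ analysis script.

```r
library(lavaan)

# Two related latent variables: voice processing ability and L2 phoneme learning
two_factor <- '
  voice   =~ l1_vrt + lx_vrt + vdt
  phoneme =~ ct + ldt
'

# Competing model: a single general speech ability factor
one_factor <- '
  speech =~ l1_vrt + lx_vrt + vdt + ct + ldt
'

# MLR = maximum likelihood with robust standard errors and a scaled chi-square
fit_two <- cfa(two_factor, data = scores_complete, estimator = "MLR")
fit_one <- cfa(one_factor, data = scores_complete, estimator = "MLR")

# Chi-square, CFI, SRMR, RMSEA, and AIC for model comparison
fitMeasures(fit_two, c("chisq", "pvalue", "cfi", "srmr", "rmsea", "aic"))
fitMeasures(fit_one, c("chisq", "pvalue", "cfi", "srmr", "rmsea", "aic"))
```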

3. Results

3.1. Tasks’ results

All indicators exhibited considerable variability, suggesting that the tasks we employed successfully captured individual differences (Figures 1 and 2). The skewness and kurtosis values were within the thresholds suggested by Hancock and Mueller (2013) (i.e., absolute values of 2 and 7, respectively) for conducting CFAs and SEMs (see Table 1). Covariance matrices were generated (see Tables 2 and 3) as part of the standard procedure for conducting CFAs and SEMs (Kline, 2015). All participants attained high accuracy scores in the control trials of the LDT (M = 0.95; SD = 0.04; range = 0.83–0.99); therefore, no participant was excluded from the analysis for low L2 lexical knowledge. Multivariate normality was assessed using D²M to rule out disturbances caused by potential multivariate outliers. No multivariate outliers were identified in the accuracy scores, and all participants were included in the accuracy CFAs and SEMs. For the RT data, a single case was identified as a multivariate outlier following the D²M criterion and was excluded from the RT models.

Figure 1. Accuracy scores of the indicators of voice processing ability (Spanish voice recognition, Chinese voice recognition, and voice discrimination) and L2 phoneme learning (categorization and lexical decision). Note that different transformed accuracy scores are depicted, so direct visual comparison between the tasks is discouraged.

Figure 2. RT for the indicators for voice processing ability (Spanish voice recognition, Chinese voice recognition, and voice discrimination) and L2 phoneme learning (categorization and lexical decision).

Table 1. Descriptive statistics of the accuracy scores and reaction times of the indicators

Indicators of voice processing ability are Spanish voice recognition, Chinese voice recognition, and voice discrimination. Indicators of L2 phoneme learning are categorization and lexical decision. ms = milliseconds.

Table 2. Covariance matrix of the accuracy score data

Table 3. Covariance matrix of the reaction time data (all indicators presented in ms).

We did not expect performance in the NWAT (accuracy mean = 0.7; accuracy SD = 0.26; RT mean = 1589.15 ms; RT SD = 252.89 ms) to correlate significantly with performance in the VRTs, since these tasks were designed to capture individual differences in different abilities (i.e., general audiovisual learning and voice recognition, respectively), and no correlation between these tasks was observed in a previous study (Díaz et al., 2022). We verified this by computing Pearson’s correlation coefficients between the scores of the VRTs and those of the NWAT. Performance in the NWAT did not correlate with the L1 VRT measures (accuracy: r = .16, p = .221; RT: r = .19, p = .151) nor with the Lx VRT measures (accuracy: r = .14, p = .298; RT: r = .13, p = .336), indicating that individual differences in general audiovisual learning abilities were not related to performance in the VRTs. As a result, the NWAT data were not included in subsequent analyses. Given the predominance of female participants in our sample, we checked that gender did not influence performance on the indicators of voice processing ability and L2 phoneme learning using a series of Welch’s t-tests for unequal sample sizes. No comparison between genders approached statistical significance (all ps > .1; see Table A3 in Appendix 4).
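These control analyses rely on standard R functions; a minimal sketch, with placeholder column names (including a hypothetical gender column), is:

```r
# Pearson correlations between NWAT and VRT scores (accuracy shown; RT analogous)
cor.test(scores$nwat_acc, scores$l1_vrt_acc)
cor.test(scores$nwat_acc, scores$lx_vrt_acc)

# Welch's t-test: t.test() defaults to var.equal = FALSE, which accommodates
# unequal group sizes and variances
t.test(l1_vrt_acc ~ gender, data = scores)
```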

3.2. Confirmatory factor analyses (CFAs)

CFAs were computed to evaluate whether the accuracy scores and RT data captured the latent constructs as intended. We tested whether voice processing ability and L2 phoneme learning could be modeled as distinct but related constructs. Additionally, we modeled the data with a single-latent-variable structure to test this competing model. The CFAs with two related latent variables (see Figure 3) showed that the accuracy scores in the L1 VRT and Lx VRT were valid indicators of voice processing ability (both p < .001). While VDT accuracy did not significantly represent voice processing ability (p = .191), RT in this same task did (p < .001). Furthermore, RT in the L1 VRT and Lx VRT represented voice processing ability (both p < .001). Concerning L2 phoneme learning, both the accuracy scores and RT in the CT and the LDT represented this latent construct (all p < .001). Voice processing ability and L2 phoneme learning were correlated in both the accuracy and the RT model. The chi-square test of model fit (χ²) was not significant for either CFA, indicating that the models with two related latent variables provided an adequate fit to the data. The CFI and SRMR indicated that both the accuracy CFA and the RT CFA met the established criteria for goodness of fit (Hu & Bentler, 1999) (see Table 4).

Figure 3. Accuracy (3A) and RT (3B) CFAs with two correlated latent variables. Paths connecting the latent variables (circles) are the correlations between these constructs. The values between the latent variables and the manifest variables (squares) represent the standardized loadings of each task onto the latent variable. All loadings were significant at p < .001, except for the loading of voice discrimination (VDT) onto voice processing ability (p = .191) in the accuracy CFA. **p < .01, ***p < .001.

Table 4. Goodness-of-fit indices’ results of the CFAs

Figure 4. Accuracy (4A) and RT (4B) CFAs with a single latent variable. We present 4B for informational purposes only, since the RT data are misrepresented by this model (see Table 4). The values between the latent variable (circle) and the manifest variables (squares) represent the standardized loadings of each task onto the latent variable. All loadings were significant except for voice discrimination (VDT) in the accuracy CFA (p = .136). ** p < .01, *** p < .001.

The χ² test indicated that the single-latent-variable CFA modeled with the accuracy scores (see Figure 4) fitted the data adequately (p > .05). However, the single-latent-variable CFA modeled with the RT data exhibited significant misfit (χ²(5) = 13.368; p < .05), suggesting that the model could not adequately represent the data (see Table 4). The accuracy scores of all tasks significantly represented general speech ability (p < .005), with the sole exception of the VDT (p = .139). The fit indices for this single-latent-variable CFA met the criteria suggested by Hu and Bentler (1999). Comparison of the AIC for the models based on accuracy scores suggested that the CFA with a single latent variable provided a more adequate representation of the accuracy scores than the CFA with two latent variables (see Table 4). However, the single-latent-variable CFA did not adequately fit the RT data, while the CFAs with two latent variables showed adequate fit for both the accuracy scores and the RT data. Hence, modeling voice processing ability and L2 phoneme learning as distinct but related constructs provided an overall better characterization of the complete dataset.

3.3. Structural equation models (SEMs)

We investigated whether voice processing ability predicted L2 phoneme learning with SEMs. The similarity between the procedures of the two VRTs motivated us to free the covariance parameter between them when estimating the models. The results of the SEM analyses were in line with the CFA findings. For both the accuracy and RT models, all measures of voice processing ability loaded onto that factor, with the sole exception of VDT accuracy, whose loading approached significance (p = .066). Both accuracy and RT measures of L2 phoneme learning loaded onto the L2 phoneme learning factor. Voice processing ability predicted L2 phoneme learning in both the model of the accuracy scores (p < .005) and the model of the RT data (p < .001) (see Figure 5). The goodness-of-fit indices met the criteria proposed by Hu and Bentler (1999), indicating that the data were well represented by the models (see Table 5).
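As with the CFAs, the structural model can be sketched in lavaan for illustration (the analyses were run in Mplus; indicator names are placeholders). The ~~ line frees the residual covariance between the two voice recognition tasks, mirroring the decision described above:

```r
sem_model <- '
  # Measurement part
  voice   =~ l1_vrt + lx_vrt + vdt
  phoneme =~ ct + ldt

  # Structural part: voice processing ability predicts L2 phoneme learning
  phoneme ~ voice

  # Residual covariance between the two procedurally similar VRTs
  l1_vrt ~~ lx_vrt
'
fit_sem <- sem(sem_model, data = scores_complete, estimator = "MLR")
summary(fit_sem, standardized = TRUE, fit.measures = TRUE)
```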

Figure 5. Accuracy (5A) and RT (5B) SEMs showing the effect of the latent variable voice processing ability on L2 phoneme learning. The values between the latent variables (circles) and their respective manifest variables (squares) represent the standardized loadings of each task onto the corresponding latent variable. All loadings were significant, except for the loading of the VDT onto voice processing ability in 5A, which approached significance (p = .066). Dashed lines represent non-significant results. *p < .05, ***p < .001.

Table 5. Goodness-of-fit indices’ results of the accuracy and RT SEMs

4. Discussion

We investigated whether individual differences in voice processing ability predict L2 phoneme learning proficiency. To test this hypothesis, we exploited the variance Spanish (L1)–Catalan (L2) early bilinguals display in their capacity to discriminate the Catalan-specific vowel contrast /e/ - /ε/. We employed a battery of behavioral tests to assess voice processing ability and L2 phoneme learning in a sample of 57 early bilingual adults. Performance on all indicators exhibited considerable variability, suggesting that the tasks we employed successfully captured individual differences. We employed CFA to evaluate whether the accuracy scores and RT data captured two distinct latent constructs, as hypothesized, or a single latent variable. The model with two related latent variables showed a good fit to both the accuracy and RT data, while the model with a single latent variable only fitted the accuracy data. Subsequent SEMs incorporating two latent variables for both accuracy scores and RT data confirmed that voice processing ability is a reliable predictor of L2 phoneme learning in early bilingual adults. Drawing on various theories of speech perception, in the following paragraphs, we discuss the nature of the relationship between voice processing and L2 phoneme learning. We also consider how voice processing abilities may relate to language learning at different stages of life, such as learning an L2 as an adult and acquiring a native language. Furthermore, we offer some considerations for future studies that seek to further investigate the influence of voice processing abilities on language learning.

Our findings suggest that the ability of a listener to identify the idiosyncratic acoustic variations introduced into the speech stream by the speaker’s voice, an ability that theoretical proposals of native speech perception consider indispensable (Johnson & Sjerps, 2021; Nygaard & Tzeng, 2021), relates to L2 phoneme learning ability. It should be noted that theoretical models of non-native speech perception do not address how non-native listeners cope with the lack of invariance of speech sounds across speakers (Best, 1994; Best & Tyler, 2007; Escudero, 2009; Flege, 1995). Models of non-native speech perception assume that, to create representations of non-native phonemes, the speech perception system needs to identify the invariant phonemic cues that differentiate these from native phonemes. These models implicitly assume that the mechanisms that enable L1 perception are the same ones that support the identification of the phonemic cues that distinguish native from non-native phonemes. We therefore build on native speech perception models to examine the potential mechanisms that drive the present relation between voice and L2 phoneme abilities.

Being models of native speech perception, speaker normalization theories do not address non-native phoneme learning. However, we offer a tentative explanation of how this theoretical proposal might accommodate the association between voice processing and L2 phoneme learning abilities. Speaker normalization theories propose that, to cope with the high variability of the speech signal, the perceptual system initially identifies and discards the voice information embedded in the speech signal. This computation ensures that the remaining acoustic information cannot be attributed to speaker idiosyncrasies but rather corresponds to linguistic information (Choi et al., 2018; Johnson & Sjerps, 2021; Nusbaum & Magnuson, 1997; Zhang & Chen, 2016). Viewed through the theoretical frame of speaker normalization, individual differences in voice processing abilities might be relevant during L2 phoneme learning because they would determine the listener's accuracy in identifying the spectro-temporal correlates of voices in the speech signal, such as variations in the fundamental frequency or in the frequencies of the formants (Baumann & Belin, 2010; Ghazanfar & Rendall, 2008; Latinus & Belin, 2011). Inadequate identification of speaker-specific acoustic variation could lead to two scenarios: the speech system would either flag phoneme-relevant cues as voice-dependent and discard them from speech analyses, or treat voice cues as phoneme-relevant features and include them in further speech processing. In both cases, inaccurate identification of voice features would hamper the discovery of the invariant cues of non-native phonemes and their subsequent learning. A caveat to this interpretation lies in the nature of the computations assessed in our voice processing ability tasks. The voice tasks focus on explicit recognition and discrimination and might involve high-level processes, such as accessing identity representations or making similarity judgments. These high-level processes may differ from those that underpin speaker normalization, which is typically conceptualized as an automatic process that mostly relies on low-level acoustic contrasts (Sjerps et al., 2013; Sjerps & Smiljanić, 2013).
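
To make the normalization computation concrete, the toy example below applies one classic extrinsic scheme, Lobanov-style z-scoring of formant frequencies within each speaker. The scheme, the two hypothetical speakers, and the made-up formant values are illustrative assumptions; they are not the computation tested in the present study.

```r
# Toy illustration of speaker normalization: z-scoring formant values
# within speaker discards speaker-specific location and scale while
# preserving the relative structure that distinguishes /e/ from /E/.
vowels <- data.frame(
  speaker = rep(c("s1", "s2"), each = 4),
  vowel   = rep(c("e", "e", "E", "E"), times = 2),
  f1      = c(390, 410, 580, 600, 460, 480, 660, 680),        # Hz, made up
  f2      = c(2300, 2280, 1900, 1880, 2100, 2080, 1750, 1730)
)

normalize <- function(x) (x - mean(x)) / sd(x)
vowels$f1_z <- ave(vowels$f1, vowels$speaker, FUN = normalize)
vowels$f2_z <- ave(vowels$f2, vowels$speaker, FUN = normalize)

# After normalization, tokens of each vowel from the two speakers
# occupy comparable regions of the normalized formant space
aggregate(cbind(f1_z, f2_z) ~ vowel, data = vowels, FUN = mean)
```

On this view, a listener who mis-estimates a speaker's formant range would z-score against the wrong baseline, blurring precisely the kind of /e/–/ε/ distinction under study here.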

An alternative theoretical proposal that also accommodates interactions between voice and linguistic information is provided by distributional and exemplar-based models of speech perception. These models suggest that, rather than a normalization process occurring, the speech perceptual system tracks and retains the speaker-specific acoustic variations introduced into the speech signal (Goldinger, 1998; Klatt, 1979; Kleinschmidt & Jaeger, 2015; McMurray & Jongman, 2011; Sumner et al., 2014). The flexibility that these models attribute to speech perception, conceptualizing it as a dynamic ability capable of incorporating novel information to adapt to new scenarios (e.g., learning dialectal variations), can arguably accommodate the learning of non-native phoneme contrasts. Based on phonetic training paradigms that show greater generalization when learning occurs in multispeaker as compared to single-speaker conditions, it has been proposed that the speech system dynamically learns and extrapolates the features that characterize phonemes across speakers (Weatherholtz & Jaeger, 2016). Building on distributional and exemplar-based models, the association between voice processing ability and L2 phoneme learning might originate from the listener's ability to properly identify the speaker-specific variations introduced into the speech signal, which would directly impact the listener's ability to discover the acoustic correlates of phonemic regularities. The tasks employed in the present study to measure voice processing ability are designed to capture both low-level and high-level acoustic processes, similar to the processes conceptualized by distributional and exemplar-based models. It should be noted that these models propose that learning the regular variations of voices is an implicit process, whereas the behavioral tasks employed in this study evaluated explicit learning and discrimination. However, recent research has shown that voice recognition accuracy is similar regardless of whether attention is directed to the voice or to the linguistic content of speech, suggesting that both implicit and explicit processes support the learning of the relevant cues that characterize voices (Lee & Perrachione, 2022). While the assumptions of these models fit well with the reported findings, the validity of these theories of speech perception remains a subject of ongoing debate. Therefore, we must be cautious in drawing a causal interpretation from the observed predictive value of voice processing ability for L2 phoneme learning. Investigating the neural underpinnings that support the interaction between voice processing ability and L2 phoneme learning may further our understanding of the relation between these two processes, especially considering that speaker recognition and speech perception engage partially distinct brain regions (Bonte et al., 2014; Formisano et al., 2008; Schall et al., 2015).
Previous studies have proposed two neurofunctional mechanisms that might support interactions between voice and speech processes: (i) interhemispheric functional connectivity between right-lateralized voice-sensitive regions and left-lateralized speech-sensitive regions (Deng et al., 2018; Kreitewolf et al., 2014; von Kriegstein et al., 2010) and (ii) the functional overlap exhibited by regions along the temporal cortices and the right temporoparietal junction, which are sensitive to both voice and phonetic information (Chandrasekaran et al., 2011; Formisano et al., 2008; Holmes & Johnsrude, 2021; Luthra et al., 2023; Myers & Theodore, 2017; von Kriegstein et al., 2010). If these mechanisms are also engaged during L2 phoneme learning, they would provide a neural basis for the interaction between voice processing ability and L2 phoneme learning that aligns with the proposals of the models of speech perception discussed here.
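
Returning to the behavioral level, the core computation of the exemplar-based account can be illustrated with a toy model in which speaker-specific tokens are stored and a new token is categorized by its summed similarity to each category's exemplars. The stored values, the exponential similarity kernel, and the sensitivity parameter below are illustrative assumptions, not materials from this study.

```r
# Toy exemplar model: stored tokens retain speaker-specific detail, and a
# new token is classified by similarity-weighted votes (illustrative only).
exemplars <- data.frame(
  vowel = c("e", "e", "E", "E", "e", "E"),       # /e/ vs /E/ categories
  f1    = c(390, 410, 580, 600, 470, 660),       # Hz; made up, two "speakers"
  f2    = c(2300, 2280, 1900, 1880, 2100, 1750)
)

classify <- function(f1, f2, store, sens = 0.5) {
  # Similarity decays with distance in a crudely scaled formant space
  d   <- sqrt(((store$f1 - f1) / 100)^2 + ((store$f2 - f2) / 100)^2)
  sim <- exp(-sens * d)
  # Category probability = summed similarity to that category's exemplars
  tapply(sim, store$vowel, sum) / sum(sim)
}

classify(f1 = 500, f2 = 2000, store = exemplars)  # an ambiguous token
```

Under this scheme, a listener who encodes speaker-specific variation poorly stores distorted exemplars, and the category probabilities assigned to new tokens degrade accordingly.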

Despite the present findings fitting well with theoretical proposals, it remains unknown whether the predictive value of voice processing abilities for L2 phoneme learning can be extrapolated to learning during other stages of life. The participants in this study were early bilingual adults who learnt the L2 upon commencing mandatory bilingual schooling at the age of 4 years. While children predominantly rely on implicit domain-specific mechanisms in language learning, adult L2 learners can no longer rely on these implicit mechanisms. Instead, they must reflect on the grammatical structure of the novel language and exploit general cognitive strategies (DeKeyser, 2000). Furthermore, recent studies support the long-standing proposal of a sensitive period for language learning (Hartshorne et al., 2018; Werker & Hensch, 2015). Sensitive periods are developmental stages during which the central nervous system exhibits greater experience-induced plasticity, enabling the acquisition of sensory and cognitive abilities. Once a sensitive period has ended, only poorer learning is possible in that domain. Crucially, the bilinguals tested in the present study learnt the L2 after the sensitive period for phoneme learning had concluded, which has been proposed to end during the second year of life (for a review, see Werker & Hensch, 2015). Indeed, several studies show that systematic exposure to an L2 at the age at which our sample of participants began learning does not consistently result in native-like proficiency in L2 phoneme contrast discrimination, as would be expected if the L2 had been acquired during the sensitive period (Caramazza et al., 1973; Díaz et al., 2012; Schmitz et al., 2018; Sebastian-Galles & Díaz, 2012). Therefore, the observed association between voice processing and L2 phoneme learning may generalize to the learning of non-native phoneme contrasts occurring after the sensitive period for phoneme acquisition concludes. Supporting this claim, previous research has shown that voice processes are relevant for language learning during adulthood. For instance, numerous studies have demonstrated significant gains in the perception of L2 phoneme contrasts when learners are exposed to these contrasts from multiple speakers, as compared to learning from a single speaker (Bradlow et al., 1997; Bradlow & Pisoni, 1999; Deng et al., 2018; Iverson et al., 2005; Lively et al., 1993, 1994; Logan et al., 1991; Wong, 2014; Ylinen et al., 2010; for a review, see Zhang et al., 2021).
This benefit in L2 phoneme learning in multispeaker contexts is believed to reflect the enhanced identification of the invariant cues that characterize phonemes when the learner has access to more diverse speech input (Deng et al., 2018; Iverson et al., 2005; Ylinen et al., 2010). However, it remains to be investigated whether adult learners display variability in their ability to extract the features that characterize phonemes across speakers and whether this variability is related to individual differences in voice processing ability.

The assessment of voice abilities may be relevant to predict not only phoneme learning in the L2 but also the acquisition of the L1. Previous studies (Perea et al., 2014; Perrachione et al., 2011) revealed an association between difficulties in voice recognition and dyslexia, a difficulty in learning to read whose origins are claimed to be rooted in a phonological deficit (Ramus, 2003). Impaired voice recognition has been proposed as a marker of developmental dyslexia and a valuable measure for predicting the disability (Perea et al., 2014). Moreover, an electrophysiological study reported reduced encoding of pitch-related features in children with dyslexia compared to typically developing children (Chandrasekaran et al., 2009). Chandrasekaran et al. (2009) suggested that individuals with dyslexia may experience challenges adapting speech processes to accommodate the characteristics of different voices. Considering voice processing as a general mechanism that enables the learning of speech sound invariants would provide an explanatory mechanism for the co-occurrence of voice and phoneme deficits in dyslexia. However, extrapolating an effect that influences L2 phoneme learning to the acquisition of the L1 would require further testing. The neural processes that enable language learning during the first years of life are different from those that enable learning after the sensitive period has concluded (Hartshorne et al., 2018; Werker & Hensch, 2015). Furthermore, theoretical models of non-native speech perception conceptualize the acquisition of an L2 as qualitatively different from the learning of an L1, since L2 learners must identify the cues that differentiate non-native from native phonemes (Best, 1994; Best & Tyler, 2007; Escudero, 2009; Flege, 1995). Thus, investigating whether voice processing ability influences L1 phoneme learning would also shed light on the similarities and differences between learning an L1 and an L2.

The discussed implications of the present findings for language learning call for further research to better understand the nature of the relationship between voice processing abilities and L2 phoneme learning. Future studies that investigate how voice processing ability influences language learning should note that our battery of behavioral tests captured large individual differences in L2 phoneme proficiency in both sub-lexical and lexical contexts, as reported in previous studies that investigated similar populations with the same L2 phoneme tasks (Díaz et al., 2012; Schmitz et al., 2018; Sebastian-Galles et al., 2005; Sebastian-Galles & Díaz, 2012). We also replicated the high inter-individual variability in the ability to recognize and discriminate speakers that previous studies observed in healthy populations (Aglieri et al., 2017; Lavan et al., 2019a; Mühl et al., 2018). While previous studies evaluated voice processing with speech samples containing phonetic information from the participants' native language, we employed a diverse set of experimental procedures to evaluate voice abilities in the participants' native language, in an unfamiliar language, and from sub-lexical affect bursts. We observed variability in all indicators of voice processing ability, regardless of the participants' familiarity with the language employed during the voice tasks, of whether the task trained participants to recognize the speakers, and of the linguistic content (sub-lexical or lexical) of the task. This suggests that, while they likely influence task performance, neither language familiarity, voice familiarity, nor linguistic content is a critical factor when evaluating voice processing ability in healthy populations. However, we acknowledge that the accuracy data of the VDT did not relate to voice ability in the CFA. While all voice processing ability indicators captured individual differences, the VDT differed considerably from the other two voice tasks: it did not involve the processing of linguistic information or training, and it employed affective interjections, which primarily modulate the fundamental frequency of the speech signal (Bachorowski et al., 2001; Bachorowski & Owren, 2001; Lavan et al., 2016, 2019b), unlike phoneme changes, which are primarily encoded as changes in F1 and F2 (Fox et al., 1995; Yang & Fox, 2014). These three differences could explain why the VDT did not relate to voice ability in the CFA of the accuracy data. If future research supports the idea that linguistic content is not a crucial factor for capturing individual differences in voice processing ability, it could lead to the development of a voice-processing evaluative tool applicable to any population, regardless of linguistic background.

The combined use of CFAs and SEM revealed that the proficiency early L2 learners achieve in mastering L2 phoneme contrasts, an ability known to vary considerably among individuals (Archila-Suerte et al., 2016; Díaz et al., 2012; Schmitz et al., 2018; Sebastian-Galles & Baus, 2005; Sebastian-Galles & Díaz, 2012), can be predicted from an individual's ability to recognize and discriminate voices. Our models showed this effect despite the tasks employed as indicators of voice processing ability involving learning and memory components not present in the indicators of L2 phoneme learning. In other words, as noted by a reviewer, had the tasks employed as indicators of each construct been more similar in their domain-general cognitive requirements, the predictive capacity of voice processing ability for L2 phoneme learning would likely have been greater than that reported here. Furthermore, voice and phoneme processing differ in the relative importance of various acoustic features of the speech signal. Research suggests that voice processing primarily depends on changes at high spectral modulations (i.e., >1.1 cycles per octave at center frequencies of up to 0.8 kHz), while phoneme category is mostly determined by changes at lower spectral modulations (i.e., broad spectral modulations for center frequencies above 0.6 kHz) and by fast temporal changes (i.e., >7.8 Hz) (Rutten et al., 2019). Therefore, the predictive capacity of voice processing ability for L2 phoneme learning is not due to both processes relying on the same acoustic features. However, this study does not establish a definitive causal relation between voice processing ability and L2 phoneme learning. While theoretical accounts of speech perception could support a causal relation, it remains feasible that the association between voice processing ability and L2 phoneme learning stems from a common origin: the listener's sensitivity to detect phoneme changes in any given language. This interpretation was also presented in the study that inspired the current investigation (Díaz et al., 2022) and is based on two sets of findings: speaker recognition accuracy is influenced by the phoneme knowledge of the listener (Fecher & Johnson, 2019, 2022; Perrachione et al., 2011), and the mastery of L2 phoneme contrasts is related to the ability to discriminate both native and unfamiliar phoneme contrasts (Díaz et al., 2008, 2016). However, the alternative interpretation that voice processing ability and L2 phoneme learning emerge from a common underlying process lacked conclusive support from the single-latent CFAs. The analysis yielded a good fit for the accuracy data but failed to adequately fit the RT data. Advocating for the validity of the single-latent model would entail disregarding the RT data, a measure of cognitive processing that is as valid as, and complementary to, accuracy data (Ratcliff et al., 2015a).
Another potential limitation of the current study is the relatively small sample size. Some recommendations suggest employing sample sizes of up to several thousand individuals when conducting SEM (Kline, 2015; Schumacker & Lomax, 2010). A sample of such proportions was unfeasible due to the strict inclusion criteria participants had to meet. Nonetheless, an a priori power analysis confirmed that our analyses were sufficiently powered, and, indeed, both the CFAs and SEMs exhibited good fit when including two latent variables. A second potential limitation concerns the greater number of women than men in the sample. However, we observed no significant performance differences between men and women on the indicators of either latent variable, suggesting that the higher proportion of female participants did not influence our primary findings.
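
For readers who wish to run this kind of a priori power check themselves, the sketch below implements the widely used RMSEA-based procedure of MacCallum, Browne, and Sugawara (1996) in base R. This is one common approach rather than necessarily the one used here (the article cites Soper's a-priori calculator), and the degrees of freedom, sample size, and RMSEA values are generic illustrations, not the study's own numbers.

```r
# RMSEA-based power for a chi-square test of model fit (MacCallum et al.,
# 1996). All numeric values below are generic illustrations.
rmsea0 <- 0.05   # H0: close fit
rmseaA <- 0.10   # H1: mediocre fit
df     <- 20     # model degrees of freedom (illustrative)
n      <- 100    # planned sample size (illustrative)
alpha  <- 0.05

ncp0  <- (n - 1) * df * rmsea0^2            # noncentrality under H0
ncpA  <- (n - 1) * df * rmseaA^2            # noncentrality under H1
crit  <- qchisq(1 - alpha, df, ncp = ncp0)  # rejection threshold
power <- 1 - pchisq(crit, df, ncp = ncpA)   # probability of rejecting H0
power
```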

In conclusion, our findings contribute to the understanding of the processes involved in speech perception and language learning: individual differences in voice processing ability among early bilingual adults predict the proficiency they achieve in L2 phoneme learning. By recognizing voice processing as a predictive factor in language learning, we deepen our understanding of the variability in L2 proficiency observed among early bilingual adults. This perspective opens new avenues for research, ranging from the acquisition of the native language to educational applications.

Supplementary material

To view supplementary material for this article, please visit http://doi.org/10.1017/S136672892400110X.

Data availability statement

The data collected and the analysis code are accessible at https://osf.io/symg2/?view_only=9f331bf8fc2146099874e81de4e908ae.

Acknowledgements

This work was supported by grants from the Ministry of Science and Innovation of the Spanish Government, State Research Agency and European Regional Development Fund (PID2019-106924GA-I00, PID2022-137368NB-I00 and PID2021‐123416NB-I00 funded by MICIN/AEI/10.13039/501100011033/FEDER UE) awarded to BD and NSG. MP was awarded a grant from the Valencian Government (CIAICO/2021/172). NSG was awarded the ICREA Academia Prize by the Catalan Government. GC was supported by a doctoral fellowship from the Universitat Internacional de Catalunya. Two grants financed by the Catalan Generalitat AGAUR (2021 SGR 00911 and 2021 SGR 00625) also supported this work.

Competing interests

None declared.

Appendices

Appendix 1

All abbreviations employed in the present study in order of appearance.

L2: Second language
F1: First formant
F2: Second formant
L1: Native language
SEM: Structural equation model
Lx: Unfamiliar language
RT: Reaction time
CFA: Confirmatory factor analysis
VRT: Voice recognition task
VDT: Voice discrimination task
CT: Categorization task
LDT: Lexical decision task
AIC: Akaike's information criterion
χ²: Chi-square test of model fit
CFI: Comparative fit index
SRMR: Standardized root mean square residual
RMSEA: Root mean square error of approximation
D²M: Mahalanobis distance

Appendix 2

Table A1. Descriptive statistics of the proportion of accurate responses in all experimental tasks

n = 57 except for the lexical decision task in which n = 55.

Appendix 3

Table A2. Proportion of hits and false alarms for the VDT and LDT

For the LDT, hits and false alarms have been calculated separately for experimental and control trials. Standard deviations are presented in brackets.
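
As an aside for readers computing sensitivity from such tables, the sketch below converts hit and false-alarm counts into d′ scores using the correction of Snodgrass and Corwin (1988), which the article cites alongside Stanislaw and Todorov (1999). The correction choice and the example counts are illustrative assumptions, not the study's exact computation.

```r
# d' from hit and false-alarm counts, with the Snodgrass-Corwin
# adjustment (add 0.5 to each count, 1 to each trial total) so that
# proportions of 0 or 1 do not produce infinite z-scores.
dprime <- function(hits, fas, n_signal, n_noise) {
  h  <- (hits + 0.5) / (n_signal + 1)
  fa <- (fas + 0.5) / (n_noise + 1)
  qnorm(h) - qnorm(fa)
}

dprime(hits = 28, fas = 6, n_signal = 32, n_noise = 32)  # made-up counts
```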

Appendix 4

Table A3. Descriptive statistics and between-group comparisons as a function of sex for the indicators for voice processing ability and L2 phoneme learning

ms = milliseconds.

Footnotes

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

References

Aglieri, V., Watson, R., Pernet, C., Latinus, M., Garrido, L., & Belin, P. (2017). The Glasgow voice memory test: Assessing the ability to memorize and recognize unfamiliar voices. Behavior Research Methods, 49, 97–110. https://doi.org/10.3758/s13428-015-0689-6
Akaike, H. (1998). Information theory and an extension of the maximum likelihood principle. In Parzen, E., Tanabe, K., & Kitagawa, G. (Eds.), Selected papers of Hirotugu Akaike (pp. 199–213). Springer New York. https://doi.org/10.1007/978-1-4612-1694-0_15
Archila-Suerte, P., Bunta, F., & Hernandez, A. E. (2016). Speech sound learning depends on individuals' ability, not just experience. International Journal of Bilingualism, 20, 231–253. https://doi.org/10.1177/1367006914552206
Bachorowski, J.-A., & Owren, M. J. (2001). Not all laughs are alike: Voiced but not unvoiced laughter readily elicits positive affect. Psychological Science, 12, 252–257. https://doi.org/10.1111/1467-9280.00346
Bachorowski, J.-A., Smoski, M. J., & Owren, M. J. (2001). The acoustic features of human laughter. The Journal of the Acoustical Society of America, 110, 1581–1597. https://doi.org/10.1121/1.1391244
Baumann, O., & Belin, P. (2010). Perceptual scaling of voice identity: Common dimensions for different vowels and speakers. Psychological Research, 74, 110–120. https://doi.org/10.1007/s00426-008-0185-z
Belin, P., Fillion-Bilodeau, S., & Gosselin, F. (2008). The Montreal affective voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behavior Research Methods, 40, 531–539. https://doi.org/10.3758/BRM.40.2.531
Bentler, P. M., & Chou, C.-P. (1987). Practical issues in structural modeling. Sociological Methods & Research, 16, 78–117. https://doi.org/10.1177/0049124187016001004
Best, C. T. (1994). The emergence of native-language phonological influences in infants: A perceptual assimilation model. In Goodman, J. C., & Nusbaum, H. C. (Eds.), The development of speech perception: The transition from speech sounds to spoken words (pp. 167–224). Cambridge: MIT Press.
Best, C. T., & Tyler, M. D. (2007). Nonnative and second-language speech perception: Commonalities and complementarities. In Bohn, O.-S., & Munro, M. J. (Eds.), Language learning & language teaching (Vol. 17, pp. 13–34). John Benjamins Publishing Company. https://doi.org/10.1075/lllt.17.07bes
Bonte, M., Hausfeld, L., Scharke, W., Valente, G., & Formisano, E. (2014). Task-dependent decoding of speaker and vowel identity from auditory cortical response patterns. The Journal of Neuroscience, 34, 4548–4557. https://doi.org/10.1523/JNEUROSCI.4339-13.2014
Bosch, L., Costa, A., & Sebastian-Galles, N. (2000). First and second language vowel perception in early bilinguals. European Journal of Cognitive Psychology, 12, 189–221. https://doi.org/10.1080/09541446.2000.10590222
Bradlow, A. R., & Pisoni, D. B. (1999). Recognition of spoken words by native and non-native listeners: Talker-, listener-, and item-related factors. The Journal of the Acoustical Society of America, 106, 2074–2085. https://doi.org/10.1121/1.427952
Bradlow, A. R., Pisoni, D. B., Akahane-Yamada, R., & Tohkura, Y. (1997). Training Japanese listeners to identify English /r/ and /l/: IV. Some effects of perceptual learning on speech production. The Journal of the Acoustical Society of America, 101, 2299–2310. https://doi.org/10.1121/1.418276
Brainard, D. H. (1997). The Psychophysics Toolbox. Spatial Vision, 10, 433–436. PMID: 9176952.
Brekelmans, G., Lavan, N., Saito, H., Clayards, M., & Wonnacott, E. (2022). Does high variability training improve the learning of non-native phoneme contrasts over low variability training? A replication. Journal of Memory and Language, 126, 104352. https://doi.org/10.1016/j.jml.2022.104352
Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). Guilford Publications.
Caramazza, A., Yeni-Komshian, G. H., Zurif, E. B., & Carbone, E. (1973). The acquisition of a new phonological contrast: The case of stop consonants in French-English bilinguals. The Journal of the Acoustical Society of America, 54, 421–428. https://doi.org/10.1121/1.1913594
Chandrasekaran, B., Chan, A. H. D., & Wong, P. C. M. (2011). Neural processing of what and who information in speech. Journal of Cognitive Neuroscience, 23, 2690–2700. https://doi.org/10.1162/jocn.2011.21631
Chandrasekaran, B., Hornickel, J., Skoe, E., Nicol, T., & Kraus, N. (2009). Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: Implications for developmental dyslexia. Neuron, 64, 311–319. https://doi.org/10.1016/j.neuron.2009.10.006
Choi, J. Y., Hu, E. R., & Perrachione, T. K. (2018). Varying acoustic-phonemic ambiguity reveals that talker normalization is obligatory in speech processing. Attention, Perception, & Psychophysics, 80, 784–797. https://doi.org/10.3758/s13414-017-1395-5
Darwin, C. J., Denis McKeown, J., & Kirby, D. (1989). Perceptual compensation for transmission channel and speaker effects on vowel quality. Speech Communication, 8, 221–234. https://doi.org/10.1016/0167-6393(89)90003-4
DeKeyser, R. M. (2000). The robustness of critical period effects in second language acquisition. Studies in Second Language Acquisition, 22, 499–533. https://doi.org/10.1017/S0272263100004022
Deng, Z., Chandrasekaran, B., Wang, S., & Wong, P. C. M. (2018). Training-induced brain activation and functional connectivity differentiate multi-talker and single-talker speech training. Neurobiology of Learning and Memory, 151, 1–9. https://doi.org/10.1016/j.nlm.2018.03.009
Díaz, B., Baus, C., Escera, C., Costa, A., & Sebastian-Galles, N. (2008). Brain potentials to native phoneme discrimination reveal the origin of individual differences in learning the sounds of a second language. Proceedings of the National Academy of Sciences, 105, 16083–16088. https://doi.org/10.1073/pnas.0805022105
Díaz, B., Cordero, G., Hoogendoorn, J., & Sebastian-Galles, N. (2022). Second-language phoneme learning positively relates to voice recognition abilities in the native language: Evidence from behavior and brain potentials. Frontiers in Psychology, 13, 1008963. https://doi.org/10.3389/fpsyg.2022.1008963
Díaz, B., Mitterer, H., Broersma, M., Escera, C., & Sebastian-Galles, N. (2016). Variability in L2 phonemic learning originates from speech-specific capabilities: An MMN study on late bilinguals. Bilingualism: Language and Cognition, 19, 955–970. https://doi.org/10.1017/S1366728915000450
Díaz, B., Mitterer, H., Broersma, M., & Sebastian-Galles, N. (2012). Individual differences in late bilinguals' L2 phonological processes: From acoustic-phonetic analysis to lexical access. Learning and Individual Differences, 22, 680–689. https://doi.org/10.1016/j.lindif.2012.05.005
Drozdova, P., van Hout, R., & Scharenborg, O. (2019). Talker-familiarity benefit in non-native recognition memory and word identification: The role of listening conditions and proficiency. Attention, Perception, & Psychophysics, 81, 1675–1697. https://doi.org/10.3758/s13414-018-01657-5
Escudero, P. (2009). The linguistic perception of SIMILAR L2 sounds. In Boersma, P., & Hamann, S. (Eds.), Phonology in perception (pp. 151–190). De Gruyter Mouton. https://doi.org/10.1515/9783110219234.151
Fecher, N., & Johnson, E. K. (2019). Bilingual infants excel at foreign-language talker recognition. Developmental Science, 22, e12778. https://doi.org/10.1111/desc.12778
Fecher, N., & Johnson, E. K. (2022). Revisiting the talker recognition advantage in bilingual infants. Journal of Experimental Child Psychology, 214, 105276. https://doi.org/10.1016/j.jecp.2021.105276
Flege, J. E. (1995). Second language speech learning: Theory, findings and problems. In Strange, W. (Ed.), Speech perception and linguistic experience: Issues in cross-language research (pp. 233–277). Baltimore: York Press.
Formisano, E., De Martino, F., Bonte, M., & Goebel, R. (2008). "Who" is saying "what"? Brain-based decoding of human voice and speech. Science, 322, 970–973. https://doi.org/10.1126/science.1164318
Fox, R. A., Flege, J. E., & Munro, M. J. (1995). The perception of English and Spanish vowels by native English and Spanish listeners: A multidimensional scaling analysis. The Journal of the Acoustical Society of America, 97, 2540–2551. https://doi.org/10.1121/1.411974
Ghazanfar, A. A., & Rendall, D. (2008). Evolution of human vocal production. Current Biology, 18, R457–R460. https://doi.org/10.1016/j.cub.2008.03.030
Goldinger, S. D. (1998). Echoes of echoes? An episodic theory of lexical access. Psychological Review, 105, 251–279. https://doi.org/10.1037/0033-295X.105.2.251
Hancock, G. R., & Mueller, R. O. (Eds.). (2013). Structural equation modeling: A second course (2nd ed.). Information Age Publishing, Inc.
Hartshorne, J. K., Tenenbaum, J. B., & Pinker, S. (2018). A critical period for second language acquisition: Evidence from 2/3 million English speakers. Cognition, 177, 263–277. https://doi.org/10.1016/j.cognition.2018.04.007
Holmes, E., & Johnsrude, I. S. (2021). Speech-evoked brain activity is more robust to competing speech when it is spoken by someone familiar. NeuroImage, 237, 118107. https://doi.org/10.1016/j.neuroimage.2021.118107
Houston, D. M., & Jusczyk, P. W. (2000). The role of talker-specific information in word segmentation by infants. Journal of Experimental Psychology: Human Perception and Performance, 26, 1570–1582. https://doi.org/10.1037/0096-1523.26.5.1570
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6, 1–55. https://doi.org/10.1080/10705519909540118
Humble, D., Schweinberger, S. R., Dobel, C., & Zäske, R. (2019). Voices to remember: Comparing neural signatures of intentional and non-intentional voice learning and recognition. Brain Research, 1711, 214–225. https://doi.org/10.1016/j.brainres.2019.01.028
Iverson, P., Hazan, V., & Bannister, K. (2005). Phonetic training with acoustic cue manipulations: A comparison of methods for teaching English /r/–/l/ to Japanese adults. The Journal of the Acoustical Society of America, 118, 3267–3278. https://doi.org/10.1121/1.2062307
Johnson, K., & Sjerps, M. J. (2021). Speaker normalization in speech perception. In Pardo, J. S., Nygaard, L. C., Remez, R. E., & Pisoni, D. B. (Eds.), The handbook of speech perception (pp. 145–176). Wiley. https://doi.org/10.1002/9781119184096.ch6
Johnsrude, I. S., Mackey, A., Hakyemez, H., Alexander, E., Trang, H. P., & Carlyon, R. P. (2013). Swinging at a cocktail party. Psychological Science, 24, 1995–2004. https://doi.org/10.1177/0956797613482467
Kaganovich, N., Kim, J., Herring, C., Schumaker, J., MacPherson, M., & Weber-Fox, C. (2013). Musicians show general enhancement of complex sound encoding and better inhibition of irrelevant auditory change in music: An ERP study. European Journal of Neuroscience, 37, 1295–1307. https://doi.org/10.1111/ejn.12110
Kenny, D. A., Kaniskan, B., & McCoach, D. B. (2015). The performance of RMSEA in models with small degrees of freedom. Sociological Methods & Research, 44, 486–507. https://doi.org/10.1177/0049124114543236
Kitamura, T., & Akagi, M. (1995). Speaker individualities in speech spectral envelopes. Journal of the Acoustical Society of Japan (E), 16, 283–289. https://doi.org/10.1250/ast.16.283
Klatt, D. H. (1979). Speech perception: A model of acoustic-phonetic analysis and lexical access. Journal of Phonetics, 7, 279–312. https://doi.org/10.1016/S0095-4470(19)31059-9
Kleinschmidt, D. F., & Jaeger, T. F. (2015). Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel. Psychological Review, 122, 148–203. https://doi.org/10.1037/a0038695
Kline, R. B. (2015). Principles and practice of structural equation modeling (4th ed.). Guilford Publications.
Kreitewolf, J., Friederici, A. D., & von Kriegstein, K. (2014). Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition. NeuroImage, 102, 332–344. https://doi.org/10.1016/j.neuroimage.2014.07.038
Krumbiegel, J., Ufer, C., & Blank, H. (2022). Influence of voice properties on vowel perception depends on speaker context. The Journal of the Acoustical Society of America, 152, 820–834. https://doi.org/10.1121/10.0013363
Ladefoged, P., & Broadbent, D. E. (1957). Information conveyed by vowels. The Journal of the Acoustical Society of America, 29, 98–104. https://doi.org/10.1121/1.1908694
Latinus, M., & Belin, P. (2011). Human voice perception. Current Biology, 21, R143–R145. https://doi.org/10.1016/j.cub.2010.12.033
Lavan, N., Burston, L. F. K., & Garrido, L. (2019a). How many voices did you hear? Natural variability disrupts identity perception from unfamiliar voices. British Journal of Psychology, 110, 576–593. https://doi.org/10.1111/bjop.12348
Lavan, N., Burton, A. M., Scott, S. K., & McGettigan, C. (2019b). Flexible voices: Identity perception from variable vocal signals. Psychonomic Bulletin & Review, 26, 90–102. https://doi.org/10.3758/s13423-018-1497-7
Lavan, N., Scott, S. K., & McGettigan, C. (2016). Laugh like you mean it: Authenticity modulates acoustic, physiological and perceptual properties of laughter. Journal of Nonverbal Behavior, 40, 133–149. https://doi.org/10.1007/s10919-015-0222-8
Lee, J. J., & Perrachione, T. K. (2022). Implicit and explicit learning in talker identification. Attention, Perception, & Psychophysics, 84, 2002–2015. https://doi.org/10.3758/s13414-022-02500-8
Levi, S. V. (2019). Methodological considerations for interpreting the language familiarity effect in talker processing. Wiley Interdisciplinary Reviews: Cognitive Science, 10, 1–15. https://doi.org/10.1002/wcs.1483
Lively, S. E., Logan, J. S., & Pisoni, D. B. (1993). Training Japanese listeners to identify English /r/ and /l/. II: The role of phonetic environment and talker variability in learning new perceptual categories. The Journal of the Acoustical Society of America, 94, 1242–1255. https://doi.org/10.1121/1.408177
Lively, S. E., Pisoni, D. B., Yamada, R. A., Tohkura, Y., & Yamada, T. (1994). Training Japanese listeners to identify English /r/ and /l/. III: Long-term retention of new phonetic categories. The Journal of the Acoustical Society of America, 96, 2076–2087. https://doi.org/10.1121/1.410149
Logan, J. S., Lively, S. E., & Pisoni, D. B. (1991). Training Japanese listeners to identify English /r/ and /l/: A first report. The Journal of the Acoustical Society of America, 89, 874–886. https://doi.org/10.1121/1.1894649
Luthra, S., Magnuson, J. S., & Myers, E. B. (2023). Right posterior temporal cortex supports integration of phonetic and talker information. Neurobiology of Language, 4, 1–33. https://doi.org/10.1162/nol_a_00091
Magnuson, J. S., Nusbaum, H. C., Akahane-Yamada, R., & Saltzman, D. (2021). Talker familiarity and the accommodation of talker variability. Attention, Perception, & Psychophysics, 83, 1842–1860. https://doi.org/10.3758/s13414-020-02203-y
McMurray, B., & Jongman, A. (2011). What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations. Psychological Review, 118, 219–246. https://doi.org/10.1037/a0022325
McNicol, D. (2005). A primer of signal detection theory. Psychology Press.
Miller, J. L., Aibel, I. L., & Green, K. (1984). On the nature of rate-dependent processing during phonetic perception. Perception & Psychophysics, 35, 5–15. https://doi.org/10.3758/BF03205919
Mühl, C., Sheil, O., Jarutytė, L., & Bestelmeyer, P. E. G. (2018). The Bangor voice matching test: A standardized test for the assessment of voice perception ability. Behavior Research Methods, 50, 2184–2192. https://doi.org/10.3758/s13428-017-0985-4
Myers, E. B., & Theodore, R. M. (2017). Voice-sensitive brain networks encode talker-specific phonetic detail. Brain and Language, 165, 33–44. https://doi.org/10.1016/j.bandl.2016.11.001
Nearey, T. M. (1989). Static, dynamic, and relational properties in vowel perception. The Journal of the Acoustical Society of America, 85, 2088–2113. https://doi.org/10.1121/1.397861
Newman, R. S., & Sawusch, J. R. (2009). Perceptual normalization for speaking rate III: Effects of the rate of one voice on perception of another. Journal of Phonetics, 37, 46–65. https://doi.org/10.1016/j.wocn.2008.09.001
Nusbaum, H. C., & Magnuson, J. S. (1997). Talker normalization: Phonetic constancy as a cognitive process. In Johnson, K., & Mullennix, J. W. (Eds.), Talker variability and speech processing (pp. 109–132). Academic Press/Elsevier.
Nygaard, L. C., & Pisoni, D. B. (1998). Talker-specific learning in speech perception. Perception & Psychophysics, 60, 355–376. https://doi.org/10.3758/BF03206860
Nygaard, L. C., Sommers, M. S., & Pisoni, D. B. (1994). Speech perception as a talker-contingent process. Psychological Science, 5, 42–46. https://doi.org/10.1111/j.1467-9280.1994.tb00612.x
Nygaard, L. C., & Tzeng, C. Y. (2021). Perceptual integration of linguistic and non-linguistic properties of speech. In Pardo, J. S., Nygaard, L. C., Remez, R. E., & Pisoni, D. B. (Eds.), The handbook of speech perception (pp. 398–427). Wiley. https://doi.org/10.1002/9781119184096.ch15
Pallier, C., Bosch, L., & Sebastian-Galles, N. (1997). A limit on behavioral plasticity in speech perception. Cognition, 64, B9–B17. https://doi.org/10.1016/S0010-0277(97)00030-9
Pallier, C., Colomé, A., & Sebastian-Galles, N. (2001). The influence of native-language phonology on lexical access: Exemplar-based versus abstract lexical entries. Psychological Science, 12, 445–449. https://doi.org/10.1111/1467-9280.00383
Pelli, D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. https://doi.org/10.1163/156856897X00366
Perea, M., Jiménez, M., Suárez-Coalla, P., Fernández, N., Viña, C., & Cuetos, F. (2014). Ability for voice recognition is a marker for dyslexia in children. Experimental Psychology, 61, 480–487. https://doi.org/10.1027/1618-3169/a000265
Perrachione, T. K., Del Tufo, S. N., & Gabrieli, J. D. E. (2011). Human voice recognition depends on language ability. Science, 333, 595. https://doi.org/10.1126/science.1207327
Persson, A., & Jaeger, T. F. (2023). Evaluating normalization accounts against the dense vowel space of Central Swedish. Frontiers in Psychology, 14, 1165742. https://doi.org/10.3389/fpsyg.2023.1165742
Peterson, G. E., & Barney, H. L. (1952). Control methods used in a study of the vowels. The Journal of the Acoustical Society of America, 24, 175–184. https://doi.org/10.1121/1.1906875
R Core Team. (2019). R: A language and environment for statistical computing [Software]. https://www.r-project.org
Ramus, F. (2003). Developmental dyslexia: Specific phonological deficit or general sensorimotor dysfunction? Current Opinion in Neurobiology, 13, 212–218. https://doi.org/10.1016/S0959-4388(03)00035-7
Ratcliff, R., Smith, P. L., & McKoon, G. (2015a). Modeling regularities in response time and accuracy data with the diffusion model. Current Directions in Psychological Science, 24, 458–470. https://doi.org/10.1177/0963721415596228
Ratcliff, R., Thapar, A., & McKoon, G. (2010). Individual differences, aging, and IQ in two-choice tasks. Cognitive Psychology, 60, 127–157. https://doi.org/10.1016/j.cogpsych.2009.09.001
Ratcliff, R., Thompson, C. A., & McKoon, G. (2015b). Modeling individual differences in response time and accuracy in numeracy. Cognition, 137, 115–136. https://doi.org/10.1016/j.cognition.2014.12.004
Reinisch, E., & Sjerps, M. J. (2013). The uptake of spectral and temporal cues in vowel perception is rapidly influenced by context. Journal of Phonetics, 41, 101–116. https://doi.org/10.1016/j.wocn.2013.01.002
RStudio Team. (2020). RStudio: Integrated development environment for R [Software]. http://www.rstudio.com/
Rutten, S., Santoro, R., Hervais-Adelman, A., Formisano, E., & Golestani, N. (2019). Cortical encoding of speech enhances task-relevant acoustic information. Nature Human Behaviour, 3, 974–987. https://doi.org/10.1038/s41562-019-0648-9
Sanders, B. P. (1994). Andalusian vocalism and related processes [Doctoral dissertation, University of Illinois at Urbana-Champaign].
Schall, S., Kiebel, S. J., Maess, B., & von Kriegstein, K. (2015). Voice identity recognition: Functional division of the right STS and its behavioral relevance. Journal of Cognitive Neuroscience, 27, 280–291. https://doi.org/10.1162/jocn_a_00707
Schmitz, J., Díaz, B., Fernández Rubio, K., & Sebastian-Galles, N. (2018). Exploring the relationship between speech perception and production across phonological processes, language familiarity, and sensory modalities. Language, Cognition and Neuroscience, 33, 527–546. https://doi.org/10.1080/23273798.2017.1390142
Schumacker, R. E., & Lomax, R. G. (2010). A beginner's guide to structural equation modeling (3rd ed.). Routledge.
Schweinberger, S. R. (2001). Human brain potential correlates of voice priming and voice recognition. Neuropsychologia, 39, 921–936. https://doi.org/10.1016/S0028-3932(01)00023-9
Sebastian-Galles, N., & Baus, C. (2005). On the relationship between perception and production in L2 categories. In Cutler, A. (Ed.), Twenty-first century psycholinguistics: Four cornerstones (pp. 279–292). Erlbaum.
Sebastian-Galles, N., & Díaz, B. (2012). First and second language speech perception: Graded learning. Language Learning, 62, 131–147. https://doi.org/10.1111/j.1467-9922.2012.00709.x
Sebastian-Galles, N., Echeverría, S., & Bosch, L. (2005). The influence of initial exposure on lexical representation: Comparing early and simultaneous bilinguals. Journal of Memory and Language, 52, 240–255. https://doi.org/10.1016/j.jml.2004.11.001
Sebastian-Galles, N., Rodríguez-Fornells, A., de Diego-Balaguer, R., & Díaz, B. (2006). First- and second-language phonological representations in the mental lexicon. Journal of Cognitive Neuroscience, 18, 1277–1291. https://doi.org/10.1162/jocn.2006.18.8.1277
Sebastian-Galles, N., & Soto-Faraco, S. (1999). Online processing of native and non-native phonemic contrasts in early bilinguals. Cognition, 72, 111–123. https://doi.org/10.1016/S0010-0277(99)00024-4
Shi, D., DiStefano, C., Maydeu-Olivares, A., & Lee, T. (2022). Evaluating SEM model fit with small degrees of freedom. Multivariate Behavioral Research, 57, 179–207. https://doi.org/10.1080/00273171.2020.1868965
Sjerps, M. J., Fox, N. P., Johnson, K., & Chang, E. F. (2019). Speaker-normalized sound representations in the human auditory cortex. Nature Communications, 10, 2465. https://doi.org/10.1038/s41467-019-10365-z
Sjerps, M. J., McQueen, J. M., & Mitterer, H. (2013). Evidence for precategorical extrinsic vowel normalization. Attention, Perception, & Psychophysics, 75, 576–587. https://doi.org/10.3758/s13414-012-0408-7
Sjerps, M. J., & Smiljanić, R. (2013). Compensation for vocal tract characteristics across native and non-native languages. Journal of Phonetics, 41, 145–155. https://doi.org/10.1016/j.wocn.2013.01.005
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50. https://doi.org/10.1037/0096-3445.117.1.34
Soper, D. S. (2023). A-priori sample size calculator for structural equation models [Software]. https://www.danielsoper.com/statcalc
Soriano, B. (2012). Andalusian vowel harmony and morphology-phonology interface. Anuario del Seminario de Filología Vasca «Julio de Urquijo», 46, 295–307. https://doi.org/10.1387/asju.12625
Souza, P., Gehani, N., Wright, R., & McCloy, D. (2013). The advantage of knowing the talker. Journal of the American Academy of Audiology, 24, 689–700. https://doi.org/10.3766/jaaa.24.8.6
Stanislaw, H., & Todorov, N. (1999). Calculation of signal detection theory measures. Behavior Research Methods, Instruments, & Computers, 31, 137–149. https://doi.org/10.3758/BF03207704
Sumner, M., Kim, S. K., King, E., & McGowan, K. B. (2014). The socially weighted encoding of spoken words: A dual-route approach to speech perception. Frontiers in Psychology, 4, 1015. https://doi.org/10.3389/fpsyg.2013.01015
von Kriegstein, K., Smith, D. R. R., Patterson, R. D., Kiebel, S. J., & Griffiths, T. D. (2010). How the human brain recognizes speech in the context of changing speakers. The Journal of Neuroscience, 30, 629–638. https://doi.org/10.1523/JNEUROSCI.2742-09.2010
Weatherholtz, K., & Jaeger, T. F. (2016). Speech perception and generalization across talkers and accents. In Oxford research encyclopedia of linguistics. Oxford University Press. https://doi.org/10.1093/acrefore/9780199384655.013.95
Werker, J. F., & Hensch, T. K. (2015). Critical periods in speech perception: New directions. Annual Review of Psychology, 66, 173–196. https://doi.org/10.1146/annurev-psych-010814-015104
Wong, J. W. S. (2014). The effects of high and low variability phonetic training on the perception and production of English vowels /e/-/æ/ by Cantonese ESL learners with high and low L2 proficiency levels. Interspeech 2014, 524–528. https://doi.org/10.21437/Interspeech.2014-129
Yang, J., & Fox, R. A. (2014). Perception of English vowels by bilingual Chinese–English and corresponding monolingual listeners. Language and Speech, 57, 215–237. https://doi.org/10.1177/0023830913502774
Ylinen, S., Uther, M., Latvala, A., Vepsäläinen, S., Iverson, P., Akahane-Yamada, R., & Näätänen, R. (2010). Training the brain to weight speech cues differently: A study of Finnish second-language users of English. Journal of Cognitive Neuroscience, 22, 1319–1332. https://doi.org/10.1162/jocn.2009.21272
Yonan, C. A., & Sommers, M. S. (2000). The effects of talker familiarity on spoken word identification in younger and older listeners. Psychology and Aging, 15, 88–99. https://doi.org/10.1037/0882-7974.15.1.88
Yu, K., Zhou, Y., Zhang, L., Li, L., Li, P., & Wang, R. (2023). How different types of linguistic information impact voice perception: Evidence from the language-familiarity effect. Language and Speech, 66, 1007–1029. https://doi.org/10.1177/00238309221143062
Zäske, R., Limbach, K., Schneider, D., Skuk, V. G., Dobel, C., Guntinas-Lichius, O., & Schweinberger, S. R. (2018). Electrophysiological correlates of voice memory for young and old speakers in young and old listeners. Neuropsychologia, 116, 215–227. https://doi.org/10.1016/j.neuropsychologia.2017.08.011
Zäske, R., Volberg, G., Kovacs, G., & Schweinberger, S. R. (2014). Electrophysiological correlates of voice learning and recognition. Journal of Neuroscience, 34, 10821–10831. https://doi.org/10.1523/JNEUROSCI.0581-14.2014
Zhang, C., & Chen, S. (2016). Toward an integrative model of talker normalization. Journal of Experimental Psychology: Human Perception and Performance, 42, 1252–1268. https://doi.org/10.1037/xhp0000216
Zhang, X., Cheng, B., & Zhang, Y. (2021). The role of talker variability in nonnative phonetic learning: A systematic review and meta-analysis. Journal of Speech, Language, and Hearing Research, 64, 4802–4825. https://doi.org/10.1044/2021_JSLHR-21-00181