This study investigates how the fine-grained phonetic realization of tonal cues impacts speech segmentation when the cues signal the same word boundary in listeners' native language and in an unfamiliar language but do so differently. Korean listeners use the phrase-final high (H) tone and the phrase-initial low (L) tone to segment speech into words (Kim, Broersma, & Cho, 2012; Kim & Cho, 2009), but it is unclear how the alignment of the phrase-final H tone and the scaling of the phrase-initial L tone modulate their speech segmentation. Korean listeners completed three artificial-language (AL) tasks (within-subject): (a) an AL without tonal cues; (b) an AL with later-aligned phrase-final H cues (non-Korean-like); and (c) an AL with earlier-aligned phrase-final H cues (Korean-like). Three groups of Korean listeners heard (b) and (c) in one of three phrase-initial L scaling conditions (between-subject): high (non-Korean-like), mid (non-Korean-like), or low (Korean-like). Korean listeners' segmentation improved as the L tone was lowered, and (b) enhanced segmentation more than (c) in the high- and mid-scaling conditions. We propose that Korean listeners tune in to low-level cues (the greater H-to-L slope in [b]) that conform to the Korean intonational grammar when the phrase-initial L tone is not phonologically canonical.