Establishing Evidence of Learning in Experiments Employing Artificial Linguistic Systems
Published online by Cambridge University Press: 09 February 2017
Abstract
Artificial linguistic systems (ALSs) offer many potential benefits for second language acquisition (SLA) research. Nonetheless, their use in experiments with posttest-only designs can give rise to internal validity problems depending on the baseline that is employed to establish evidence of learning. Researchers in this area often compare experimental groups’ performance against (a) statistical chance, (b) untrained control groups’ performance, and/or (c) trained control groups’ performance. However, each of these methods can involve unwarranted tacit assumptions, limitations, and challenges from a variety of sources (e.g., preexisting perceptual biases, participants’ fabrication of rules, knowledge gained during the test), any of which might produce systematic response patterns that overlap with the linguistic target even in the absence of learning during training. After illustrating these challenges, we offer some brief recommendations regarding how triangulation and more sophisticated statistical approaches may help researchers to draw more appropriate conclusions going forward.
Type: Research Article
Copyright © Cambridge University Press 2017
Footnotes
We would like to thank Ronald Leow, Alison Mackey, Nick B. Pandža, Kelli Ryan, and the reviewers for their helpful comments on earlier versions of this paper. We are also grateful to Patrick Rebuschat for his valuable guidance on our early forays into implicit learning research, and for collaborations with him that have positively influenced our work in numerous productive ways. All remaining errors are our own. Both authors contributed equally to this paper.