Generating example contexts to help children learn word meaning†
Published online by Cambridge University Press: 12 January 2012
Abstract
This article addresses the problem of generating good example contexts to help children learn vocabulary. We describe VEGEMATIC, a system that constructs such contexts by concatenating overlapping five-grams from Google's N-gram corpus. We propose and operationalize a set of constraints to identify good contexts. VEGEMATIC uses these constraints to filter, cluster, score, and select example contexts. An evaluation experiment compared the resulting contexts against human-authored example contexts (e.g., from children's dictionaries and children's stories). Based on ratings by an expert blind to source, the generated contexts' average quality was comparable to story sentences, though not as good as dictionary examples. A second experiment measured the percentage of generated contexts rated acceptable by lay judges, and how long it took to rate them. Judges accepted only 28% of the examples, but averaged only 27 seconds to find the first acceptable example for each target word. This result suggests that hand-vetting VEGEMATIC's output may supply example contexts faster than creating them manually.
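The core construction step described above — chaining five-grams that overlap by four words to grow a context around a target word — can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the `extend_right` function and the tiny five-gram list are made up for demonstration, whereas the real system draws from Google's N-gram corpus and applies additional filtering, clustering, and scoring constraints.

```python
def extend_right(context, fivegrams):
    """Greedily extend a word sequence rightward: append the final word
    of any 5-gram whose first four words match the current tail."""
    words = list(context)
    while True:
        tail = tuple(words[-4:])
        match = next((g for g in fivegrams if tuple(g[:4]) == tail), None)
        if match is None:
            return words
        words.append(match[4])

# Hypothetical five-grams containing the target word "reluctant".
fivegrams = [
    ("she", "was", "reluctant", "to", "leave"),
    ("was", "reluctant", "to", "leave", "the"),
    ("reluctant", "to", "leave", "the", "party"),
]

# Seed with a 5-gram containing the target word, then extend it.
seed = ["she", "was", "reluctant", "to", "leave"]
print(" ".join(extend_right(seed, fivegrams)))
# -> she was reluctant to leave the party
```

A symmetric leftward extension (matching the last four words of a five-gram against the context's first four) would complete the concatenation step; the resulting candidates would then still need to pass the quality constraints the article proposes.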
- Type: Articles
- Copyright © Cambridge University Press 2012
Footnotes
This work, performed while the first author was a Master's student in the Language Technologies Institute at Carnegie Mellon University, was supported by the Institute of Education Sciences, US Department of Education, through Grant R305A080157 to Carnegie Mellon University. The opinions expressed are those of the authors and do not necessarily represent the views of the Institute or the US Department of Education. We thank Dr. Margaret McKeown for her expertise and assistance, and our lay judges for their participation.