Since its introduction by Glass in the 1970s, meta-analysis has become a widely accepted and preferred approach to research synthesis. Overcoming the weaknesses commonly associated with traditional narrative reviews and vote counting, meta-analysis is a statistical method for systematically aggregating and analyzing empirical studies according to well-established procedures. The findings of an appropriately conducted meta-analysis can inform important policy decisions and offer practical pedagogical suggestions. As the number of publications employing meta-analysis has grown across a wide variety of disciplines, however, the method has drawn criticism for the inconsistent findings that multiple meta-analyses in the same research domain sometimes produce. These inconsistencies arise partly from the alternatives available to meta-analysts at each major meta-analytic step. Researchers have therefore recommended transparent reporting of every essential judgment call so that results across multiple meta-analyses become replicable, consistent, and interpretable. This study explored the degree to which meta-analyses in the computer-assisted language learning (CALL) discipline transparently reported their decisions at every critical step. To this end, we retrieved 15 eligible meta-analyses in CALL published between 2003 and 2015 and extracted their features using a codebook modified from Cooper (2003) and Aytug, Rothstein, Zhou and Kern (2012). A transparency score was then calculated to examine the degree to which these meta-analyses complied with the reporting norms recommended in the literature. We conclude by discussing the strengths and weaknesses of their methodologies and providing suggestions for conducting high-quality meta-analyses in this domain.