
Quantitative Data Analysis for Single-Case Methods, Between-Groups Designs, and Instrument Development

Published online by Cambridge University Press:  28 February 2018

Robyn Tate
Affiliation:
Guest Editor, Brain Impairment, John Walsh Centre for Rehabilitation Research, Kolling Institute of Medical Research, The University of Sydney, Sydney, Australia
Michael Perdices
Affiliation:
Guest Editor, Brain Impairment, Department of Neurology, Royal North Shore Hospital, Sydney, Australia

Type: Editorial
Copyright © Australasian Society for the Study of Brain Impairment 2018

We are pleased to bring you this special issue of Brain Impairment on quantitative data analysis, an area of increasing complexity and sophistication. In planning the special issue, our intention was to bring together a set of articles covering diverse and topical areas in the field, with the idea of having the volume serve as a ‘go-to’ resource. The special issue is aimed at researchers, clinicians engaged in research, and advanced students, all of whom may have a passing familiarity with a particular data analytic technique but wish to know more about it and how to apply it. Accordingly, our aim is to equip the reader with concrete, hands-on information that can be applied in the day-to-day world of research. The authors of the articles comprising the special issue, each of whom is an expert in his/her field, were charged with the task of writing a practical guide and providing worked examples to illustrate the application of their selected technique(s). The papers in the special issue cover three domains: single-case methods, between-groups designs, and psychometric aspects of instrument development.

Single-case research is increasingly used in the neurorehabilitation field. Perusal of evidence databases such as PsycBITE (www.psycbite.com) demonstrates the exponential growth of publications over the past 40 years, numbering almost 1500 single-case intervention studies in the field of acquired brain impairment alone. There is increasing recognition of the importance of scientific rigour in single-case experimental designs (SCEDs; e.g., Kratochwill et al., 2013; Tate et al., 2013), part and parcel of which is the critical role of data evaluation. Three of the papers in the special issue describe various approaches to data evaluation. Traditionally, SCEDs have focused on the visual analysis of graphed data, the argument being that if you cannot see a treatment effect, then it is probably not very important. In their paper on the systematic use of visual analysis, Ledford, Lane and Severini (2018) provide a heuristic tutorial on the steps that a researcher should follow to conduct a comprehensive visual analysis, examining level, trend, variability, consistency, overlap, and immediacy within and/or between phases.
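
For readers who want to see what these features amount to numerically, the short Python sketch below computes within-phase level, trend, and variability, together with a simple between-phase nonoverlap index, for a hypothetical AB data set. It is an illustrative sketch only, using made-up data, and is not a procedure taken from Ledford and colleagues' paper.

```python
import numpy as np

# Hypothetical AB single-case data: baseline (A) and intervention (B) phases
baseline = np.array([3, 4, 3, 5, 4, 4], dtype=float)
intervention = np.array([6, 7, 8, 7, 9, 8], dtype=float)

def phase_summary(y):
    """Level (median), trend (least-squares slope) and variability (range) of one phase."""
    x = np.arange(len(y))
    slope = np.polyfit(x, y, 1)[0]
    return {"level": float(np.median(y)), "trend": float(slope),
            "variability": float(y.max() - y.min())}

def nonoverlap(a, b):
    """Proportion of all A-B data pairs in which the B value exceeds the A value
    (a simple index related to nonoverlap of all pairs, NAP)."""
    pairs = [(ai, bi) for ai in a for bi in b]
    return sum(bi > ai for ai, bi in pairs) / len(pairs)

print("Baseline:", phase_summary(baseline))
print("Intervention:", phase_summary(intervention))
print("Nonoverlap (B > A):", round(nonoverlap(baseline, intervention), 2))
# Immediacy: compare the last few baseline points with the first few intervention points
print("Immediacy:", baseline[-3:].mean(), "->", intervention[:3].mean())
```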

Following on, Manolov and Solanas (2018) delineate a variety of descriptive and inferential techniques available for SCEDs. In so doing, they offer an approach that integrates the visual and statistical camps, noting that they themselves ‘rely heavily on visual representation of the data to enhance the interpretation of the numerical results’. Analytic techniques in this area are rapidly evolving, but with the welcome increase comes the challenge of selecting the technique that is most suitable for the data set. The authors re-analyse previously published data to illustrate the application of different statistical techniques, along with the rationale for using each technique. Among the helpful directions provided in the paper is knowing that (a) as with between-groups analyses, there is no single analytic technique that can be regarded as the gold standard, but (b) unlike between-groups analyses, it is not advisable to determine the analytic technique a priori; rather, the data need to be inspected for trend, variability, and other features to determine a suitable technique that will not produce misleading results. Readers will appreciate the direction to websites where the intricacies of complicated procedures advocated by the authors, such as piecewise regression, can be conducted without angst.
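
By way of illustration only, and not taken from the authors' worked examples, the sketch below fits a basic piecewise (interrupted time-series) regression to hypothetical AB data, estimating the immediate change in level and the change in slope at the phase change.

```python
import numpy as np

# Hypothetical AB data: 6 baseline and 8 intervention observations
y = np.array([3, 4, 3, 5, 4, 4, 6, 7, 8, 7, 9, 8, 9, 10], dtype=float)
n_baseline = 6
t = np.arange(len(y), dtype=float)                       # session number
phase = (t >= n_baseline).astype(float)                  # 0 = baseline, 1 = intervention
time_in_b = np.where(phase == 1, t - n_baseline, 0.0)    # sessions since the phase change

# Piecewise regression: y = b0 + b1*t + b2*phase + b3*time_in_b
# b2 estimates the immediate change in level; b3 estimates the change in slope.
X = np.column_stack([np.ones_like(t), t, phase, time_in_b])
(b0, b1, b2, b3), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"baseline level = {b0:.2f}, baseline trend = {b1:.2f}")
print(f"change in level = {b2:.2f}, change in slope = {b3:.2f}")
```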

Onghena, Michiels, Jamshidi, Moeyaert and Van den Noortgate provide an introduction to, and step-by-step demonstration of, the application of statistical techniques using both the unilevel model (evaluating level, trend, and serial dependency) and their cutting-edge work on multilevel meta-analytical procedures, as well as alternative approaches (e.g., the use of randomisation tests). Serendipitously, the authors use one of the published data sets used in the previous paper to illustrate the application of increasingly sophisticated regression-based models. In a fitting conclusion to the first section of this special issue, Onghena and colleagues (2018) make thoughtful suggestions for furthering work in the field of single-case methods in general and single-case data evaluation in particular.
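
As an illustrative sketch only (it does not reproduce the authors' analyses), the code below shows the logic of a randomisation test for a hypothetical AB design in which the intervention start point is assumed to have been selected at random from a set of admissible sessions.

```python
import numpy as np

# Hypothetical AB data and the observed intervention start point
y = np.array([3, 4, 3, 5, 4, 4, 6, 7, 8, 7, 9, 8, 9, 10], dtype=float)
observed_start = 6               # intervention began at the 7th session (index 6)
possible_starts = range(4, 11)   # admissible start points under the design (assumption)

def effect(y, start):
    """Test statistic: difference between intervention-phase and baseline-phase means."""
    return y[start:].mean() - y[:start].mean()

observed = effect(y, observed_start)
# Reference distribution: the statistic recomputed under every admissible start point
reference = np.array([effect(y, s) for s in possible_starts])
p_value = np.mean(np.abs(reference) >= abs(observed))
print(f"observed effect = {observed:.2f}, randomisation test p = {p_value:.2f}")
```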

Researchers are generally more familiar with data analysis for the traditional between-groups design, which is covered by three articles in Section 2 of the special issue. Here, we have endeavoured to present papers that provide novel perspectives on familiar themes, which both the newcomer to the field and the seasoned researcher will appreciate. Everyone will want to know about the 50 tips for randomised controlled trials (RCTs) from Harvey, Glinsky and Herbert (2018). The authors provide pragmatic, step-by-step guidelines to help researchers avoid the many pitfalls that can befall the design and conduct of clinical trials. Their tips and advice are sage, honed from their extensive experience in conducting clinical trials. The breadth of coverage is comprehensive, going beyond ‘standard’ methodological and theoretical issues. For example, the item entitled ‘Try not to ask for too much from participants’ cautions the investigator not to make the burden of participation in the trial too onerous and thus risk losing participants, potentially compromising the trial results. Eminently sensible advice – not usually found in textbooks.

The theme of points 41–43 from Harvey and colleagues (viz., don't be misled by p-values, estimate the size of the effect of the intervention, and consider how much uncertainty there is in your estimate of the effect of the intervention, respectively) is further developed in the article by Perdices (2018). The paper reviews misconceptions regarding null hypothesis significance testing that have been entrenched for many decades in psychological and behavioural research. Null hypothesis testing does not really deliver what many researchers think it does, and p-values do not have the significance generally attributed to them. The American Psychological Association recommendations for the use of effect sizes and confidence intervals, made more than two decades ago, are still not universally implemented. The paper presents a brief guide to commonly used effect sizes and worked examples of how to calculate them. References to online calculators for both effect sizes and confidence intervals provide added value.
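
To give a flavour of what such calculations involve (this is a generic sketch with hypothetical data, not an example from Perdices' paper), the code below computes Cohen's d using a pooled standard deviation, together with an approximate normal-theory 95% confidence interval.

```python
import numpy as np

# Hypothetical outcome data for a treatment and a control group
treatment = np.array([24, 27, 22, 30, 26, 28, 25, 29], dtype=float)
control = np.array([20, 23, 19, 22, 21, 24, 20, 22], dtype=float)
n1, n2 = len(treatment), len(control)

# Cohen's d: mean difference divided by the pooled standard deviation
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1))
             / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / sp

# Approximate 95% confidence interval using a normal-theory standard error for d
se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
lower, upper = d - 1.96 * se, d + 1.96 * se
print(f"Cohen's d = {d:.2f}, approximate 95% CI = ({lower:.2f}, {upper:.2f})")
```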

Systematic reviews and meta-analyses provide Level 1 evidence and hence are a valued resource in bibliographic databases. Yet, like the RCT and the SCED, the scientific quality of systematic reviews varies enormously. All of these methodologies have critical appraisal tools that assist the reader to identify sound research with minimal bias and credible results, for example, the PEDro scale for RCTs (Maher, Sherrington, Herbert, Moseley, & Elkins, 2003), the Risk of Bias in N-of-1 Trials (RoBiNT) scale for SCEDs (Tate et al., 2013), and A MeaSurement Tool to Assess systematic Reviews (AMSTAR) for systematic reviews (Shea et al., 2017). The most influential repository of systematic reviews in the health field is the Cochrane Database of Systematic Reviews. The article by Gertler and Cameron (2018) demonstrates the stages involved in conducting a Cochrane systematic review, focusing on data analysis techniques. If you want to know about assessing heterogeneity, understanding forest plots depicting the results of meta-analyses, funnel plots to detect bias, GRADE analyses to take account of risk of bias, and other tantalising techniques, then this is the paper for you!
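
As a simple illustration of the machinery behind such analyses (the numbers are hypothetical and the sketch is not drawn from Gertler and Cameron's review), the code below pools study-level effect sizes with inverse-variance weights and computes Cochran's Q and the I² heterogeneity statistic.

```python
import numpy as np

# Hypothetical study-level effect sizes (e.g., standardised mean differences)
# and their variances, as would be entered into a meta-analysis
effects = np.array([0.40, 0.25, 0.60, 0.10, 0.35])
variances = np.array([0.04, 0.09, 0.05, 0.08, 0.06])

# Fixed-effect (inverse-variance) pooled estimate and its standard error
weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
se_pooled = np.sqrt(1.0 / np.sum(weights))

# Heterogeneity: Cochran's Q and the I^2 statistic
k = len(effects)
Q = np.sum(weights * (effects - pooled) ** 2)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

print(f"pooled effect = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se_pooled:.2f} to {pooled + 1.96 * se_pooled:.2f})")
print(f"Cochran's Q = {Q:.2f}, I^2 = {I2:.0f}%")
```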

The third section of the special issue contains two papers addressing aspects of instrument development at the psychometric level. Approaches to instrument development and validation in the health field have taken a quantum leap in recent decades, and item response theory (IRT), as a mathematical extension of classical test theory, is increasingly used in instrument development and evaluation. As Kean, Bisson, Brodke, Biber and Gross (2018) point out in their paper, although the origins of the mathematical processes of IRT can be traced back to the work of Thurstone almost a century ago, its application in the health sciences is more recent. We can expect to see more studies using IRT because of its precision of measurement. The authors' paper on IRT takes the reader through the why, what, when, and how of IRT and Rasch analysis.
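
For the curious reader, the brief sketch below implements the core equation of the Rasch model, in which the probability of a correct or endorsed response depends only on the difference between person ability and item difficulty on a common logit scale. The values shown are hypothetical, and the sketch is not taken from Kean and colleagues' paper.

```python
import numpy as np

def rasch_probability(ability, difficulty):
    """Rasch model: probability of a correct/endorsed response, given person ability
    and item difficulty expressed on the same logit scale."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

# A person whose ability equals the item difficulty has a 50% probability of success;
# the probability rises as ability exceeds difficulty and falls as it drops below.
for theta in (-1.0, 0.0, 1.0, 2.0):
    print(f"ability {theta:+.1f}, item difficulty 0.0 -> "
          f"P(success) = {rasch_probability(theta, 0.0):.2f}")
```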

In the final paper, Rosenkoetter and Tate (2018) address evaluation of the scientific quality of psychometric studies. No longer is it sufficient to report high reliability and validity coefficients – rather, the method by which such results are obtained is also of critical importance. They note that ‘the results of a study are trustworthy if the study design and methodology are sound. If they are not, the trustworthiness of the findings remains unknown’. The authors provide a head-to-head comparison of six instruments specifically developed to critically appraise psychometric studies in the behavioural sciences. The paper concludes with an application of the COSMIN checklist, along with the Terwee statistical quality criteria, and a level of evidence synthesis.

We thank the authors who contributed to this special issue of Brain Impairment. Each of the articles has been carefully constructed to fulfil our brief and each also makes a unique, timely, and erudite contribution to the field. Consequently, we believe that this volume will be a valuable resource and hold something new for every researcher, clinician, and advanced student.

References

Gertler, P., & Cameron, I.D. (2018). Making sense of data analytic techniques used in a Cochrane Systematic Review. Brain Impairment, 19(1).
Harvey, L.A., Glinsky, J.V., & Herbert, R.D. (2018). 50 tips for clinical trialists. Brain Impairment, 19(1).
Kean, J., Bisson, E.F., Brodke, D.S., Biber, J., & Gross, P.H. (2018). An introduction to item response theory and Rasch analysis: Application using the Eating Assessment Tool (EAT-10). Brain Impairment, 19(1).
Kratochwill, T.R., Hitchcock, J., Horner, R.H., Levin, J.R., Odom, S.L., Rindskopf, D.M., & Shadish, W.R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34(1), 26–38.
Ledford, J.R., Lane, J.D., & Severini, K.E. (2018). Systematic use of visual analysis for assessing outcomes in single case design studies. Brain Impairment, 19(1).
Maher, C.G., Sherrington, C., Herbert, R.D., Moseley, A.M., & Elkins, M. (2003). Reliability of the PEDro scale for rating quality of RCTs. Physical Therapy, 83, 713–721.
Manolov, R., & Solanas, A. (2018). Analytic options for single-case experimental designs: Review and application to brain impairment. Brain Impairment, 19(1).
Onghena, P., Michiels, B., Jamshidi, L., Moeyaert, M., & Van den Noortgate, W. (2018). One by one: Accumulating evidence by using meta-analytical procedures for single-case experiments. Brain Impairment, 19(1).
Perdices, M. (2018). Null hypothesis significance testing, p-values, effect sizes and confidence intervals. Brain Impairment, 19(1).
Rosenkoetter, U., & Tate, R.L. (2018). Assessing features of psychometric assessment instruments: A comparison of the COSMIN checklist with other critical appraisal tools. Brain Impairment, 19(1).
Shea, B.J., Reeves, B.C., Wells, G., Thuku, M., Hamel, C., Moran, J., . . . Kristjansson, E. (2017). AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ, 358, j4008.
Tate, R.L., Perdices, M., Rosenkoetter, U., Wakim, D., Godbee, K., Togher, L., & McDonald, S. (2013). Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation, 23(5), 619–638.