
The diagnostic utility of multiple-level likelihood ratios

Published online by Cambridge University Press:  01 September 2009

STEPHEN C. BOWDEN*
Affiliation:
Department of Psychology, The University of Melbourne, Victoria, Australia
DAVID W. LORING
Affiliation:
Department of Neurology, Emory University School of Medicine, Atlanta, Georgia
*Correspondence and reprint requests to: Stephen C. Bowden, Department of Psychology, University of Melbourne, Parkville, Victoria, Australia 3052. E-mail: [email protected]

Abstract

Clinicians are accustomed to interpreting diagnostic test scores in terms of sensitivity and specificity. Many clinicians also appreciate that sensitivity and specificity need to be interpreted in terms of local base rates (i.e., pretest probability). However, most neuropsychological tests contain a wide range of scores. Important diagnostic information may be sacrificed when valid test scores are reduced to the simple dichotomy of “positive” or “negative” diagnosis that underlies sensitivity and specificity analysis. The purpose of this study is to provide an introduction to multiple-level likelihood ratios, a method for preserving the information in a wider range of scores. These statistics are first described using a hypothetical example of dementia screening, then with patient data from an epilepsy surgery sample. Multiple-level likelihood ratios have several advantages over sensitivity and specificity analysis because they are applied across a wider range of diagnostic scores, and generalize to settings with different base rates. We suggest that the diagnostic validity of many psychological tests may be underestimated by relying solely on traditional dichotomous sensitivity and specificity analysis. (JINS, 2009, 15, 769–776.)
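The core idea in the abstract can be sketched numerically: instead of a single cut score yielding one sensitivity/specificity pair, each score band gets its own likelihood ratio, which converts a local pretest probability (base rate) into a posttest probability via odds. The score bands, counts, and 10% base rate below are illustrative assumptions, not data from the article.

```python
# Multiple-level (interval-specific) likelihood ratios, sketched with
# hypothetical counts for a dementia screening test.
bands = ["0-19", "20-24", "25-27", "28-30"]
demented = [40, 30, 20, 10]     # patients with dementia per band (n = 100)
nondemented = [5, 10, 25, 60]   # patients without dementia per band (n = 100)

n_dem = sum(demented)
n_non = sum(nondemented)

# Interval-specific LR = P(score in band | disease) / P(score in band | no disease)
lrs = [(d / n_dem) / (nd / n_non) for d, nd in zip(demented, nondemented)]

def posttest_probability(pretest_p, lr):
    """Convert pretest probability to posttest probability via odds."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

pretest = 0.10  # assumed local base rate
for band, lr in zip(bands, lrs):
    p = posttest_probability(pretest, lr)
    print(f"score {band}: LR = {lr:.2f}, posttest p = {p:.3f}")
```

Note how the lowest band (LR = 8.0) raises the posttest probability well above the 10% base rate, while the highest band (LR ≈ 0.17) lowers it; a single dichotomous cut would collapse these four distinct evidential weights into two.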

Type: Research Articles
Copyright: © The International Neuropsychological Society 2009

