
CVLT-II Forced Choice Recognition Trial as an Embedded Validity Indicator: A Systematic Review of the Evidence

Published online by Cambridge University Press: 13 September 2016

Eben S. Schwartz*
Affiliation:
Neuroscience Center, Waukesha Memorial Hospital, Waukesha, Wisconsin
Laszlo Erdodi
Affiliation:
Department of Psychology, University of Windsor, Windsor, Ontario, Canada
Nicholas Rodriguez
Affiliation:
Department of Psychology, University of Windsor, Windsor, Ontario, Canada
Jyotsna J. Ghosh
Affiliation:
Neuropsychology Program, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
Joshua R. Curtain
Affiliation:
Department of Neurology and Neurological Sciences, Stanford Health Care, Stanford, California
Laura A. Flashman
Affiliation:
Neuropsychology Program, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
Robert M. Roth
Affiliation:
Neuropsychology Program, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire
*Correspondence and reprint requests to: Eben S. Schwartz, Department of Neuroscience, ProHealth Waukesha Memorial Hospital, 721 American Avenue, Suite 406, Waukesha, WI 53188. E-mail: [email protected]

Abstract

Objectives: The Forced Choice Recognition (FCR) trial of the California Verbal Learning Test, 2nd edition, was designed as an embedded performance validity test (PVT). To our knowledge, this is the first systematic review of its classification accuracy against reference PVTs.

Methods: Results from peer-reviewed studies with FCR data published since 2002, encompassing a variety of clinical, research, and forensic samples, were summarized, including 37 studies reporting FCR failure rates (N=7575) and 17 reporting concordance rates with established PVTs (N=4432).

Results: All healthy controls scored >14 on FCR. On average, 16.9% of the entire sample scored ≤14, while 25.9% failed reference PVTs. Failure rates differed with the presence or absence of external incentives to appear impaired, as identified by researchers (13.6% vs. 3.5%), and with failing or passing reference PVTs (49.0% vs. 6.4%). FCR ≤14 produced an overall classification accuracy of 72%, demonstrating higher specificity (.93) than sensitivity (.50) to invalid performance. Failure rates increased with the severity of cognitive impairment.

Conclusions: In the absence of serious neurocognitive disorder, FCR ≤14 is highly specific but only moderately sensitive to invalid responding. Passing FCR does not rule out a non-credible presentation, but failing FCR rules it in with high accuracy. The heterogeneity in sample characteristics, reference PVTs, and the quality of the criterion measure across studies is a major limitation both of this review and of the basic methodology of PVT research in general. (JINS, 2016, 22, 851–858)
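For context, the classification statistics summarized above follow the standard confusion-matrix definitions; the formulas below are a general refresher rather than material reproduced from the article, with TP, FP, TN, and FN denoting true positives, false positives, true negatives, and false negatives relative to the reference PVT criterion:

$$
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\text{Specificity} = \frac{TN}{TN + FP}, \qquad
\text{Overall accuracy} = \frac{TP + TN}{TP + FP + TN + FN}
$$

On these definitions, a cut-off with specificity .93 and sensitivity .50, such as FCR ≤14, produces few false positives (credible performers rarely score ≤14) but misses roughly half of the profiles flagged as invalid by reference PVTs, which is why failing FCR rules in a non-credible presentation far more strongly than passing it rules one out.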

Type: Brief Communications
Copyright: © The International Neuropsychological Society 2016

