
Part II - Specific Clinical Assessment Methods

Published online by Cambridge University Press:  06 December 2019

Edited by Martin Sellbom, University of Otago, New Zealand, and Julie A. Suhr, Ohio University

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2019

Access options

Get access to the full version of this content by using one of the access options below. (Log in options will check for institutional or personal access. Content may require purchase if you do not have access.)

References

American Counseling Association. (2014). The American Counseling Association code of ethics. Alexandria, VA: Author.
American Psychiatric Association. (1980). Diagnostic and statistical manual of mental disorders (3rd ed.). Washington, DC: Author.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
APA (American Psychological Association). (2010). Ethical principles of psychologists and code of conduct. Washington, DC: Author.
Bahorik, A. L., Newhill, C. E., Queen, C. C., & Eack, S. M. (2014). Under-reporting of drug use among individuals with schizophrenia: Prevalence and predictors. Psychological Medicine, 44(1), 61–69.
Basterra, M. D., Trumbull, E., & Solano-Flores, G. (2011). Cultural validity in assessment: Addressing linguistic and cultural diversity. New York: Routledge.
Edens, J. F., Kelley, S. E., Lilienfeld, S. O., Skeem, J. L., & Douglas, K. S. (2015). DSM-5 antisocial personality disorder: Predictive validity in a prison sample. Law and Human Behavior, 39(2), 123–129.
Elkind, D. (1964). Piaget’s semi-clinical interview and the study of spontaneous religion. Journal for the Scientific Study of Religion, 4, 40–47.
First, M. B., Williams, J. B. W., Karg, R. S., & Spitzer, R. L. (2016). Structured Clinical Interview for DSM-5 Disorders, clinician version (SCID-5-CV). Arlington, VA: American Psychiatric Association.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). Mini-mental state: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198.
Frances, A. (2013). Essentials of psychiatric diagnosis: Responding to the challenge of DSM-5 (rev. ed.). New York: Guilford Press.
Ganzini, L., Denneson, L. M., Press, N., Bair, M. J., Helmer, D. A., Poat, J., & Dobscha, S. K. (2013). Trust is the basis for effective suicide risk screening and assessment in veterans. Journal of General Internal Medicine, 28(9), 1215–1221.
Green, D., & Rosenfeld, B. (2011). Evaluating the gold standard: A review and meta-analysis of the Structured Interview of Reported Symptoms. Psychological Assessment, 23(1), 95–107.
Green, D., Rosenfeld, B., & Belfi, B. (2013). New and improved? A comparison of the original and revised versions of the Structured Interview of Reported Symptoms. Assessment, 20(2), 210–218.
Hanley, T., & Reynolds, D. J. (2009). Counselling psychology and the internet: A review of the quantitative research into online outcomes and alliances within text-based therapy. Counselling Psychology Review, 24(2), 4–13.
Hays, P. A. (2016). Addressing cultural complexities in practice: Assessment, diagnosis, and therapy (3rd ed.). Washington, DC: American Psychological Association.
Hook, J. N., Davis, D. E., Owen, J., Worthington, E. L., & Utsey, S. O. (2013). Cultural humility: Measuring openness to culturally diverse clients. Journal of Counseling Psychology, 60(3), 353–366.
Hormes, J. M., Gerhardstein, K. R., & Griffin, P. T. (2012). Under-reporting of alcohol and substance use versus other psychiatric symptoms in individuals living with HIV. AIDS Care, 24(4), 420–423.
Hoyt, M. F. (2000). Some stories are better than others: Doing what works in brief therapy and managed care. Philadelphia: Brunner/Mazel.
Jobes, D. A. (2016). Managing suicidal risk: A collaborative approach (2nd ed.). New York: Guilford Press.
Joiner, T. (2005). Why people die by suicide. Cambridge, MA: Harvard University Press.
Kroshus, E., Kubzansky, L. D., Goldman, R. E., & Austin, S. B. (2015). Norms, athletic identity, and concussion symptom under-reporting among male collegiate ice hockey players: A prospective cohort study. Annals of Behavioral Medicine, 49(1), 95–103.
Kutchins, H., & Kirk, S. A. (1997). Making us crazy. New York: Free Press.
Large, M. M., & Ryan, C. J. (2014). Suicide risk categorisation of psychiatric inpatients: What it might mean and why it is of no use. Australasian Psychiatry, 22(4), 390–392.
Lilienfeld, S. O., Smith, S. F., & Watts, A. L. (2013). Issues in diagnosis: Conceptual issues and controversies. In Craighead, W. E. & Miklowitz, D. J. (Eds.), Psychopathology: History, diagnosis, and empirical foundations (2nd ed., pp. 1–35). Hoboken, NJ: Wiley.
Lobbestael, J., Leurgans, M., & Arntz, A. (2011). Inter-rater reliability of the Structured Clinical Interview for DSM-IV axis I disorders (SCID I) and axis II disorders (SCID II). Clinical Psychology and Psychotherapy, 18(1), 75–79.
Miller, W. R., & Rollnick, S. (2013). Motivational interviewing: Preparing people for change (3rd ed.). New York: Guilford Press.
Norcross, J. C., & Lambert, M. J. (2011). Psychotherapy relationships that work II. Psychotherapy: Theory, Research, and Practice, 48, 4–8.
Richman, W. L., Weisband, S., Kiesler, S., & Drasgow, F. (1999). A meta-analytic study of social desirability distortion in computer-administered questionnaires, traditional questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754–775.
Rogers, C. R. (1957). The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 21, 95–103.
Rogers, R. (2008). Clinical assessment of malingering and deception (3rd ed.). New York: Guilford Press.
Rogers, R., Sewell, K. W., & Gillard, N. D. (2010). SIRS-2: Structured Interview of Reported Symptoms: Professional manual. Odessa, FL: Psychological Assessment Resources.
Salamon, S., Santelmann, H., Franklin, J., & Baethge, C. (2018). Test-retest reliability of the diagnosis of schizoaffective disorder in childhood and adolescence: A systematic review and meta-analysis. Journal of Affective Disorders, 230, 28–33.
Sellbom, M., & Hopwood, C. J. (2016). Evidence-based assessment in the 21st century: Comments on the special series papers. Clinical Psychology: Science and Practice, 23(4), 403–409.
Shea, S. C. (1998). Psychiatric interviewing: The art of understanding (2nd ed.). Philadelphia: Saunders.
Solano-Flores, G., & Nelson-Barber, S. (2001). On the cultural validity of science assessments. Journal of Research in Science Teaching, 38, 553–573. https://doi.org/10.1002/tea.1018
Sommers-Flanagan, J. (2016). Clinical interview. In Norcross, J. C., VandenBos, G. R., & Freedheim, D. K. (Eds.), APA handbook of clinical psychology (pp. 3–16). Washington, DC: American Psychological Association.
Sommers-Flanagan, J. (2018). Conversations about suicide: Strategies for detecting and assessing suicide risk. Journal of Health Service Psychology, 44, 33–45.
Sommers-Flanagan, J., & Sommers-Flanagan, R. (1998). Assessment and diagnosis of conduct disorder. Journal of Counseling and Development, 76, 189–197.
Sommers-Flanagan, J., & Sommers-Flanagan, R. (2017). Clinical interviewing (6th ed.). Hoboken, NJ: Wiley.
Strub, R. L., & Black, F. W. (1977). The mental status exam in neurology. Philadelphia: Davis.
Sue, D. W., & Sue, D. (2016). Counseling the culturally diverse (7th ed.). Hoboken, NJ: Wiley.
Sue, S. (1998). In search of cultural competence in psychotherapy and counseling. American Psychologist, 53(4), 440–448.
Suhr, J. A. (2015). Psychological assessment: A problem-solving approach. New York: Guilford Press.
Suhr, J. A., & Berry, D. T. R. (2017). The importance of assessing for validity of symptom report and performance in attention deficit/hyperactivity disorder (ADHD): Introduction to the special section on noncredible presentation in ADHD. Psychological Assessment, 29(12), 1427–1428.
Suhr, J. A., Cook, C., & Morgan, B. (2017). Assessing functional impairment in ADHD: Concerns for validity of self-report. Psychological Injury and Law, 10(2), 151–160.
Sullivan, B. K., May, K., & Galbally, L. (2007). Symptom exaggeration by college adults in attention-deficit hyperactivity disorder and learning disorder assessments. Applied Neuropsychology, 14(3), 189–207.
Tolin, D. F., Gilliam, C., Wootton, B. M., Bowe, W., Bragdon, L. B. et al. (2018). Psychometric properties of a structured diagnostic interview for DSM-5 anxiety, mood, and obsessive-compulsive and related disorders. Assessment, 25(1), 3–13.
Vannoy, S. D., Andrews, B. K., Atkins, D. C., Dondanville, K. A., Young-McCaughan, S., & Peterson, A. L. (2017). Under-reporting of suicide ideation in US Army population screening: An ongoing challenge. Suicide and Life-Threatening Behavior, 47(6), 723–728.
Welfel, E. R. (2016). Ethics in counseling and psychotherapy: Standards, research, and emerging issues (6th ed.). Boston: Cengage.
Wiarda, N. R., McMinn, M. R., Peterson, M. A., & Gregor, J. A. (2014). Use of technology for note taking and therapeutic alliance. Psychotherapy, 51(3), 443–446.
Widiger, T. A., & Edmundson, M. (2011). Diagnoses, dimensions, and DSM-5. In Barlow, D. H. (Ed.), The Oxford handbook of clinical psychology (pp. 254–278). New York: Oxford University Press.
Yalom, I. D. (2002). The gift of therapy. New York: HarperCollins.
Zander, E., Willfors, C., Berggren, S., Coco, C., Holm, A., Jifält, I. et al. (2017). The interrater reliability of the Autism Diagnostic Interview-Revised (ADI-R) in clinical settings. Psychopathology, 50(3), 219–227.

References

Achenbach, T. M. (2020). Integration, progress, and outcomes app for ages 6–18. Burlington, VT: University of Vermont, Research Center for Children, Youth, & Families.
Achenbach, T. M., Ivanova, M. Y., Rescorla, L. A., Turner, L. V., & Althoff, R. R. (2016). Internalizing/externalizing problems: Review and recommendations for clinical and research applications. Journal of the American Academy of Child and Adolescent Psychiatry, 55, 647–656.
Achenbach, T. M., Krukowski, R. A., Dumenci, L., & Ivanova, M. Y. (2005). Assessment of adult psychopathology: Meta-analyses and implications of cross-informant correlations. Psychological Bulletin, 131, 361–382. https://doi.org/10.1037/0033-2909.131.3.361
Achenbach, T. M., McConaughy, S. H., & Howell, C. T. (1987). Child/adolescent behavioral and emotional problems: Implications of cross-informant correlations for situational specificity. Psychological Bulletin, 101, 213–232. https://doi.org/10.1037/0033-2909.101.2.213
Achenbach, T. M., Newhouse, P. A., & Rescorla, L. A. (2004). Manual for the ASEBA older adult forms & profiles. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Achenbach, T. M., & Rescorla, L. A. (2001). Manual for the ASEBA school-age forms & profiles. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Achenbach, T. M., & Rescorla, L. A. (2003). Manual for the ASEBA adult forms & profiles. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Achenbach, T. M., & Rescorla, L. A. (2007). Multicultural supplement to the Manual for the ASEBA School-Age Forms & Profiles. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Achenbach, T. M., & Rescorla, L. A. (2019). Multicultural supplement to the Manual for the ASEBA Older Adult Forms & Profiles. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Achenbach, T. M., Rescorla, L. A., & Ivanova, M. Y. (2015). Guide to family assessment using the ASEBA. Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Barthlow, D. L., Graham, J. R., Ben-Porath, Y. S., Tellegen, A., & McNulty, J. L. (2002). The appropriateness of the MMPI-2 K correction. Assessment, 9, 219–229.
Ben-Porath, Y., & Tellegen, A. (2008/2011). Minnesota Multiphasic Personality Inventory-Restructured Form: Manual for administration, scoring, and interpretation. Minneapolis: University of Minnesota Press.
Brigidi, B. D., Achenbach, T. M., Dumenci, L., & Newhouse, P. A. (2010). Broad spectrum assessment of psychopathology and adaptive functioning with the Older Adult Behavior Checklist: A validation and diagnostic discrimination study. International Journal of Geriatric Psychiatry, 25, 1177–1185.
Burchett, D., Dragon, W. R., Smith Holbert, A. M., Tarescavage, A. M., Mattson, C. A., Handel, R. W., & Ben-Porath, Y. S. (2016). “False Feigners”: Examining the impact of non-content-based invalid responding on the Minnesota Multiphasic Personality Inventory-2 Restructured Form content-based invalid responding indicators. Psychological Assessment, 28, 458–470. https://doi.org/10.1037/pas0000205
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Minnesota Multiphasic Personality Inventory (MMPI-2): Manual for administration and scoring. Minneapolis: University of Minnesota Press.
Connelly, B. S., & Ones, D. S. (2010). An other perspective on personality: Meta-analytic integration of observers’ accuracy and predictive validity. Psychological Bulletin, 136, 1092–1122.
De Los Reyes, A., Augenstein, T. M., Wang, M., Thomas, S. A., Drabick, D. A. G., Burgess, D. E., & Rabinowitz, J. (2015). The validity of the multi-informant approach to assessing child and adolescent mental health. Psychological Bulletin, 141, 858–900. https://doi.org/10.1037/a0038498
De Los Reyes, A., & Kazdin, A. E. (2005). Informant discrepancies in the assessment of childhood psychopathology: A critical review, theoretical framework, and recommendations for further study. Psychological Bulletin, 131, 483–509. https://doi.org/10.1037/0033-2909.131.4.483
Ferdinand, R. F., Hoogerheide, K. N., van der Ende, J., Heijmens Visser, J. H., Koot, H. M., Kasius, M. C., & Verhulst, F. C. (2003). The role of the clinician: Three-year predictive value of parents’, teachers’, and clinicians’ judgment of childhood psychopathology. Journal of Child Psychology and Psychiatry, 44, 867–876.
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Hathaway, S. R., & McKinley, J. C. (1943). The Minnesota Multiphasic Personality Schedule. Minneapolis: University of Minnesota Press.
Ingram, P. B., & Ternes, M. S. (2016). The detection of content-based invalid responding: A meta-analysis of the MMPI-2-Restructured Form’s (MMPI-2-RF) over-reporting validity scales. The Clinical Neuropsychologist, 30, 473–496. https://doi.org/10.1080/13854046.2016.1187769
Klein, D. N. (2003). Patients’ versus informants’ reports of personality disorders in predicting 7½-year outcome in outpatients with depressive disorders. Psychological Assessment, 15, 216–222.
Loranger, A. W. (1988). Personality Disorder Examination (PDE) manual. Yonkers, NY: DV Communications.
Martel, M. M., Markon, K., & Smith, G. T. (2017). Research review: Multi-informant integration in child and adolescent psychopathology diagnosis. Journal of Child Psychology and Psychiatry, 58, 116–128.
McConaughy, S. H., & Achenbach, T. M. (2001). Manual for the Semistructured Clinical Interview for Children and Adolescents (2nd ed.). Burlington, VT: University of Vermont Research Center for Children, Youth, and Families.
Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R. et al. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128–165.
Rescorla, L. A., Achenbach, T. M., Ivanova, M. Y., Bilenberg, N., Bjarnadottir, G., Denner, S. et al. (2012). Behavioral/emotional problems of preschoolers: Caregiver/teacher reports from 15 societies. Journal of Emotional and Behavioral Disorders, 20, 68–81. https://doi.org/10.1177/1063426611434158
Rescorla, L. A., Achenbach, T. M., Ivanova, M. Y., Turner, L. V., Árnadóttir, H. A., Au, A. et al. (2016). Collateral reports of problems and cross-informant agreement about adult psychopathology in 14 societies. Journal of Psychopathology and Behavioral Assessment, 38, 381–397.
Rescorla, L. A., Bochicchio, L., Achenbach, T. M., Ivanova, M. Y., Almqvist, F., Begovac, I. et al. (2014). Parent-teacher agreement on children’s problems in 21 societies. Journal of Clinical Child and Adolescent Psychology, 43, 627–642. https://doi.org/10.1080/15374416.2014.900719
Rescorla, L. A., Ewing, G., Ivanova, M. Y., Aebi, M., Bilenberg, N., Dieleman, G. C. et al. (2017). Parent-adolescent cross-informant agreement in clinically referred samples: Findings from seven societies. Journal of Clinical Child and Adolescent Psychology, 46, 74–87.
Rescorla, L., Ginzburg, S., Achenbach, T. M., Ivanova, M. Y., Almqvist, F., Begovac, I. et al. (2013). Cross-informant agreement between parent-reported and adolescent self-reported problems in 25 societies. Journal of Clinical Child and Adolescent Psychology, 42, 262–273.
Sharf, A. J., Rogers, R., Williams, M. M., & Henry, S. A. (2017). The effectiveness of the MMPI-2-RF in detecting feigned mental disorders and cognitive deficits: A meta-analysis. Journal of Psychopathology and Behavioral Assessment, 39, 441–455. https://doi.org/10.1007/s10862-017-9590-1
Tierney, M., Herrmann, N., Geslani, D. M., & Szalai, J. P. (2003). Contribution of informant and patient ratings to the accuracy of the Mini-Mental State Examination in predicting probable Alzheimer’s disease. Journal of the American Geriatrics Society, 51, 813–818.
Vidair, H. B., Reyes, J. A., Shen, S., Parrilla-Escobar, M. A., Heleniak, C. M., Hollin, I. L. et al. (2011). Screening parents during child evaluations: Exploring parent and child psychopathology in the same clinic. Journal of the American Academy of Child and Adolescent Psychiatry, 50, 441–450. https://doi.org/10.1016/j.jaac.2011.02.002

References

Abad, F. J., Sorrel, M. A., Roman, F. J., & Colom, R. (2016). The relationships between WAIS-IV factor index scores and educational level: A bifactor model approach. Psychological Assessment, 28, 987–1000.
AERA (American Educational Research Association), APA (American Psychological Association), & NCME (National Council on Measurement in Education). (2014). The standards for educational and psychological testing. Washington, DC: Author.
Ang, S., Van Dyne, L., & Tan, M. L. (2011). Cultural intelligence. In Sternberg, R. J. & Kaufman, S. B. (Eds.), Cambridge handbook of intelligence (pp. 582–602). Cambridge: Cambridge University Press.
APA (American Psychological Association). (2017). Multicultural guidelines: An ecological approach to context, identity, and intersectionality. www.apa.org/about/policy/multicultural-guidelines.pdf
Archer, R. P., Buffington-Vollum, J. K., Stredny, R., & Handel, R. W. (2006). A survey of psychological test use patterns among forensic psychologists. Journal of Personality Assessment, 87, 84–94. https://doi.org/10.1207/s15327752jpa8701_07
Bayley, N. (2006). Bayley scales of infant and toddler development (3rd ed.). San Antonio, TX: Harcourt Assessment.
Benson, N., Hulac, D. M., & Kranzler, J. H. (2010). Independent examination of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV): What does the WAIS-IV measure? Psychological Assessment, 22(1), 121–130. https://doi.org/10.1037/a0017767
Bjorklund, D. F. (2012). Children’s thinking: Cognitive development and individual differences (5th ed.). Belmont, CA: Wadsworth/Cengage Learning.
Blalock, L. D., & McCabe, D. P. (2011). Proactive interference and practice effects in visuospatial working memory span task performance. Memory, 19(1), 83–91. https://doi.org/10.1080/09658211.2010.537035
Bowden, S. C., Saklofske, D. H., & Weiss, L. G. (2011a). Augmenting the core battery with supplementary subtests: Wechsler Adult Intelligence Scale-IV measurement invariance across the United States and Canada. Assessment, 18, 133–140.
Bowden, S. C., Saklofske, D. H., & Weiss, L. G. (2011b). Invariance of the measurement model underlying the Wechsler Adult Intelligence Scale-IV in the United States and Canada. Educational and Psychological Measurement, 71, 186–189.
Bracken, B. A. (1998). Bracken basic concept scale – Revised. San Antonio, TX: Psychological Corporation.
Bracken, B. A., & McCallum, R. S. (1998). Universal nonverbal intelligence test. Riverside Publishing.
Brickman, A. M., Cabo, R., & Manly, J. J. (2006). Ethical issues in cross-cultural neuropsychology. Applied Neuropsychology, 13, 91–100.
Brown, T. E. (1996). Brown attention deficit disorder scales for adolescents and adults. Bloomington, MN: Pearson.
Bunting, M. (2006). Proactive interference and item similarity in working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(2), 183–196. https://doi.org/10.1037/0278-7393.32.2.183
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31(2), 141–154.
Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2016). Factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition: Exploratory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 28(8), 975–986.
Canivez, G. L., Watkins, M. W., & Dombrowski, S. C. (2017). Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests. Psychological Assessment, 29(4), 458–472.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge: Cambridge University Press.
Cattell, R. B. (1941). Some theoretical issues in adult intelligence testing. Psychological Bulletin, 38, 592.
Cattell, R. B. (1943). The measurement of adult intelligence. Psychological Bulletin, 40, 153–193.
Chen, H., Zhang, O., Raiford, S. E., Zhu, J., & Weiss, L. G. (2015). Factor invariance between genders on the Wechsler Intelligence Scale for Children-Fifth Edition. Personality and Individual Differences, 86, 1–5.
Cohen, M. J. (1997). Children’s memory scale. San Antonio, TX: Psychological Corporation.
Cottrell, J. M., & Barrett, C. A. (2017). Examining school psychologists’ perspectives about specific learning disabilities: Implications for practice. Psychology in the Schools, 54(3), 294–308. https://doi.org/10.1002/pits.21997
Council of State Directors for the Gifted and National Association of Gifted and Talented. (2015). State of the states in gifted education: Policy and practice data. www.nagc.org/sites/default/files/key%20reports/2014–2015%20State%20of%20the%20States%20%28final%29.pdf
Daniel, M. H. (2012). Q-interactive technical report 1: Equivalence of Q-interactive administered cognitive tasks: WAIS-IV. Bloomington, MN: Pearson. www.helloq.com/research.html
Daniel, M. H., Wahlstrom, D., & Zhang, O. (2014). Q-interactive technical report 8: Equivalence of Q-interactive and paper administrations of cognitive tasks: WISC-V. Bloomington, MN: Pearson. www.helloq.com/research.html
Das, J. P., Naglieri, J. A., & Kirby, J. R. (1994). Assessment of cognitive processes: The PASS theory of intelligence. Needham Heights, MA: Allyn & Bacon.
Deary, I. J. (2014). The stability of intelligence from childhood to old age. Current Directions in Psychological Science, 23, 239–245.
Deary, I. J., Weiss, A., & Batty, G. D. (2010). Intelligence and personality as predictors of illness and death: How researchers in differential psychology and chronic disease epidemiology are collaborating to understand and address health inequalities. Psychological Science in the Public Interest, 11(2), 53–79.
Deary, I. J., Yang, J., Davies, G., Harris, S. E., Tenesa, A., Liewald, D. et al. (2012). Genetic contributions to stability and change in intelligence from childhood to old age. Nature, 482, 212–215.
Dehn, M. J. (2013). Enhancing SLD diagnoses through the identification of psychological processing deficits. The Educational and Developmental Psychologist, 30(2), 119–139. https://doi.org/10.1017/edp.2013.19
Delis, D. C., Kaplan, E., & Kramer, J. H. (2001). Delis-Kaplan executive function system. San Antonio, TX: Psychological Corporation.
Delis, D. C., Kramer, J. H., Kaplan, E., & Ober, B. A. (2000). California verbal learning test (2nd ed.). San Antonio, TX: Psychological Corporation.
Denney, D. A., Ringe, W. K., & Lacritz, L. H. (2015). Dyadic short forms of the Wechsler Adult Intelligence Scale-IV. Archives of Clinical Neuropsychology, 30(5), 404–412. https://doi.org/10.1093/arclin/acv035
DiStefano, C., & Dombrowski, S. C. (2006). Investigating the theoretical structure of the Stanford-Binet-Fifth Edition. Journal of Psychoeducational Assessment, 24(2), 123–126. https://doi.org/10.1177/0734282905285244
Dombrowski, S. C., McGill, R. J., & Canivez, G. L. (2016). Exploratory and hierarchical factor analysis of the WJ-IV Cognitive at school age. Psychological Assessment, 29(4), 394–407. https://doi.org/10.1037/pas0000350
Dombrowski, S. C., McGill, R. J., & Canivez, G. L. (2018). An alternative conceptualization of the theoretical structure of the Woodcock-Johnson IV Tests of Cognitive Abilities at school age: A confirmatory factor analytic investigation. Archives of Scientific Psychology, 6, 1–13. https://doi.org/10.1037/arc0000039
Drozdick, L. W., Getz, K., Raiford, S. E., & Zhang, O. (2016). Q-interactive technical report 14: WPPSI-IV: Equivalence of Q-interactive and paper formats. Bloomington, MN: Pearson. www.helloq.com/research.html
Drozdick, L. W., Holdnack, J. A., Weiss, L. G., & Zhou, X. (2013). Overview of the WAIS-IV/WMS-IV/ACS. In Holdnack, J. A., Drozdick, L. W., Weiss, L. G., & Iverson, G. L. (Eds.), WAIS-IV, WMS-IV, and ACS: Advanced clinical interpretation (pp. 1–73). San Diego, CA: Academic Press.
Drozdick, L. W., Singer, J. K., Lichtenberger, E. O., Kaufman, J. C., Kaufman, A. S., & Kaufman, N. L. (2018). The Kaufman Assessment Battery for Children-Second Edition and the KABC-II Normative Update. In Flanagan, D. & Harrison, P. (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (4th ed.). New York: Guilford Press.
Dunn, L. M., & Dunn, D. M. (2007). Peabody picture vocabulary test (4th ed.). San Antonio, TX: Pearson.
Ehrler, D. J., & McGhee, R. L. (2008). Primary test of nonverbal intelligence. Austin, TX: PRO-ED.
Elliott, C. D. (2007). Differential ability scales (2nd ed.). San Antonio, TX: Harcourt Assessment.
Elliott, C. D. (2012). Differential ability scales: Early Years Spanish supplement (2nd ed.). Bloomington, MN: Pearson.
Flanagan, D. P., & Alfonso, V. C. (2017). Essentials of WISC-V assessment. Hoboken, NJ: John Wiley & Sons.
Flanagan, D. P., & Dixon, S. G. (2014). The Cattell-Horn-Carroll theory of cognitive abilities. In Encyclopedia of special education. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118660584.ese0431
Flanagan, D. P., & McGrew, K. S. (1998). Interpreting intelligence tests from contemporary Gf-Gc theory: Joint confirmatory factor analysis of the WJ-R and KAIT in a non-white sample. Journal of School Psychology, 36(2), 151–182. https://doi.org/10.1016/s0022-4405(98)00003-x
Flanagan, D. P., McGrew, K. S., & Ortiz, S. O. (2000). The Wechsler intelligence scales and Gf-Gc theory: A contemporary approach to interpretation. Boston, MA: Allyn & Bacon.
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2013). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Fletcher-Janzen, E. (2003). A validity study of the KABC-II and the Taos Pueblo children of New Mexico. Circle Pines, MN: American Guidance Service.
Fletcher-Janzen, E., Strickland, T. L., & Reynolds, C. (2000). Handbook of cross-cultural neuropsychology. New York: Springer Publishing.
Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932 to 1978. Psychological Bulletin, 95(1), 29–51. https://doi.org/10.1037//0033-2909.95.1.29
Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101(2), 171–197. https://doi.org/10.1037/0033-2909.101.2.171
Flynn, J. R. (1999). Searching for justice: The discovery of IQ gains over time. American Psychologist, 54(1), 5–20. https://doi.org/10.1037//0003-066x.54.1.5
Ford, D. Y. (2004). The National Research Center for the Gifted and Talented senior scholar series: Intelligence testing and cultural diversity: Concerns, cautions, and considerations. Nashville, TN: Vanderbilt University.
Ford, L., Kozey, M. L., & Negreiros, J. (2012). Cognitive assessment in early childhood: Theoretical and practical perspectives. In Flanagan, D. P. & Harrison, P. L. (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 585–622). New York: Guilford Press.
Fujita, K., Ishikuma, T., Aoyama, S., Hattori, T., Kumagai, K., & Ono, J. (2011). Theoretical foundation and structure of the Japanese version of KABC-II. Japanese Journal of K-ABC Assessment, 13, 89–99. [In Japanese.]
Goddard, H. H. (1912). The Kallikak family: A study in the heredity of feeble-mindedness. New York: The Macmillan Company.Google Scholar
Goldstein, G. A., Allen, D. N., Minshew, N. J., Williams, D. L., Volkmar, F., Klin, A., & Schultz, R. T. (2008). The structure of intelligence in children and adults with high functioning autism. Neuropsychology, 22(3), 301312.Google Scholar
Goldstein, G., & Saklofske, D. H. (2010). The Wechsler Intelligence Scales in the assessment of psychopathology. In Weiss, L. G., Saklofske, D. H., Coalson, D., & Raiford, S. E. (Eds.), WAIS-IV clinical use and interpretation: Scientist-practitioner perspectives (pp. 189–216). London: Academic Press.
Gottfredson, L. S., & Deary, I. J. (2004). Intelligence predicts health and longevity, but why? Current Directions in Psychological Science, 13, 1–4.
Gregoire, J., Daniel, M., Llorente, A. M., & Weiss, L. G. (2016). The Flynn effect and its clinical implications. In Weiss, L. G., Saklofske, D. H., Holdnack, J. A., & Prifitera, A. (Eds.), WISC-V assessment and interpretation: Scientist-practitioner perspectives (pp. 187–212). San Diego, CA: Academic Press. https://doi.org/10.1016/B978-0-12-404697-9.00006-6
Green, P. (2004). Medical symptom validity test. Kelowna, BC: Paul Green Publishing.
Greiffenstein, M. F., Baker, W. J., & Gola, T. (1994). Validation of malingered amnesia measures with a large clinical sample. Psychological Assessment, 6, 218–224.
Hammill, D. D., Pearson, N. A., & Wiederholt, J. L. (2009). Comprehensive test of nonverbal intelligence (2nd ed.). Austin, TX: PRO-ED.
Hammill, D. D., Wiederholt, J. L., & Allen, E. A. (2014). Test of silent contextual reading fluency (2nd ed.). Austin, TX: PRO-ED.
Hannay, H. J., & Lezak, M. D. (2004). The neuropsychological examination: Interpretation. In Lezak, M. D., Howieson, D. B., & Loring, D. W. (Eds.), Neuropsychological assessment (4th ed.). New York: Oxford University Press.
Harcourt Assessment. (2005). Wechsler individual achievement test (2nd ed.). San Antonio, TX: Author.
Heaton, R. K., Avitable, N., Grant, I., & Matthews, C. G. (1999). Further cross validation of regression based neuropsychological norms with an update for the Boston Naming Test. Journal of Clinical and Experimental Neuropsychology, 21, 572–582.
Heaton, R. K., Taylor, M. J., & Manly, J. (2003). Demographic effects and the use of demographically corrected norms with the WAIS–III and WMS–III. In Tulsky, D. S., Saklofske, D. H., Chelune, G. J., Heaton, R. K., Ivnik, R. J., Bornstein, R., et al. (Eds.), Clinical interpretation of the WAIS–III and WMS–III (pp. 181–210). San Diego: Academic Press.
Holdnack, J. A., Drozdick, L. W., Weiss, L. A., & Iverson, G. L. (2013). WAIS-IV, WMS-IV, and ACS: Advanced clinical interpretation. San Diego, CA: Academic Press.
Holdnack, J. A., Lissner, D., Bowden, S. C., & McCarthy, K. A. L. (2004). Utilising the WAIS-III/WMS-III in clinical practice: Update of research and issues relevant to Australian normative research. Australian Psychologist, 39, 220–227.
Horn, J. L. (1965). Fluid and crystallized intelligence: A factor analytic study of the structure among primary mental abilities. Unpublished doctoral dissertation, University of Illinois, Champaign.
Horn, J. L. (1968). Organization of abilities and the development of intelligence. Psychological Review, 75, 242–259.
Horn, J. L. (1972). State, trait and change dimensions of intelligence: A critical experiment. British Journal of Educational Psychology, 42, 159–185.
Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized intelligence. Journal of Educational Psychology, 57, 253–270.
Howieson, D. B., Loring, D. W., & Hannay, H. J. (2004). Neurobehavioral variables and diagnostic issues. In Lezak, M. D., Howieson, D. B., & Loring, D. W., Neuropsychological assessment (4th ed.). New York: Oxford University Press.
Hresko, W., Schlieve, P., Herron, S., Swain, C., & Sherbenou, R. (2003). Comprehensive mathematical abilities test. Austin, TX: PRO-ED.
Hunt, M. S. (2008). A joint confirmatory factor analysis of the Kaufman Assessment Battery for Children, second edition, and the Woodcock-Johnson tests of cognitive abilities, third edition, with preschool children. Dissertation Abstracts International Section A: Humanities and Social Sciences, 68(11-A), 4605.
Hunter, J. E., & Schmidt, F. L. (1996). Intelligence and job performance: Economic and social implications. Psychology, Public Policy, and Law, 2, 447–472.
Jensen, A. R. (2000). Testing: The dilemma of group differences. Psychology, Public Policy, and Law, 6, 121–127. https://doi.org/10.1037/1076-8971.6.1.121
Kaufman, A. S. (2009). IQ testing 101. New York: Springer Publishing.
Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman assessment battery for children. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (1993). Kaufman adolescent and adult intelligence test. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (2004a). Kaufman assessment battery for children, second edition (KABC-II) manual. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (2004b). Kaufman test of educational achievement, second edition (KTEA-II) comprehensive form manual. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (2004c). Kaufman brief intelligence test (2nd ed.). Bloomington, MN: Pearson.
Kaufman, A. S., & Kaufman, N. L. (2008). KABC-II batterie pour l’examen psychologique de l’enfant-deuxième édition. Montreuil: Éditions du Centre de Psychologie Appliquée.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman test of educational achievement (3rd ed.). Bloomington, MN: NCS Pearson.
Kaufman, A. S., & Kaufman, N. L. (2018). Kaufman assessment battery for children, second edition, normative update. Bloomington, MN: NCS Pearson.
Kaufman, A. S., Kaufman, N. L., Drozdick, L. W., & Morrison, J. (2018). Kaufman assessment battery for children, second edition, normative update manual supplement. Bloomington, MN: NCS Pearson.
Kaufman, A. S., Kaufman, N. L., Melchers, P., & Melchers, M. (2014). German-language adaptation of the Kaufman assessment battery for children (2nd ed.). Frankfurt: Pearson.
Kaufman, A. S., Kaufman, N. L., & Publication Committee of Japanese Version of the KABC-II. (2013). Japanese version of Kaufman assessment battery for children (2nd ed.). Tokyo: Maruzen.
Kaufman, A. S., & Lichtenberger, E. O. (2006). Assessing adolescent and adult intelligence (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Kaufman, A. S., Lichtenberger, E. O., Fletcher-Janzen, E., & Kaufman, N. L. (2005). Essentials of KABC-II assessment. Hoboken, NJ: John Wiley & Sons.
Kaufman, A. S., Raiford, S. E., & Coalson, D. L. (2016). Intelligent testing with the WISC-V. Hoboken, NJ: John Wiley & Sons.
Kaufman, A. S., & Weiss, L. G. (2010). Guest editors’ introduction to the special issue of JPA on the Flynn effect. Journal of Psychoeducational Assessment, 28(5), 379–381. https://doi.org/10.1177/0734282910373344
Keith, T. Z., Fine, J. G., Taub, G. E., Reynolds, M. R., & Kranzler, J. H. (2006). Higher order, multisample, confirmatory factor analysis of the Wechsler Intelligence Scale for Children – Fourth Edition: What does it measure? School Psychology Review, 35, 108–127.
Keith, T. Z., Low, J. A., Reynolds, M. R., Patel, P. G., & Ridley, K. P. (2010). Higher-order factor structure of the Differential Ability Scales-II: Consistency across ages 4 to 17. Psychology in the Schools, 47, 676–697.
Kellogg, C. E., & Morton, N. W. (2016). Beta (4th ed.). San Antonio, TX: Pearson.
Kendler, K. S., Ohlsson, H., Mezuk, B., Sundquist, J. O., & Sundquist, K. (2016). Observed cognitive performance and deviation from familial cognitive aptitude at age 16 years and ages 18 to 20 years and risk for schizophrenia and bipolar illness in a Swedish national sample. JAMA Psychiatry, 73, 465–471. https://doi.org/10.1001/jamapsychiatry.2016.0053
Kirkwood, M. W. (2015). Validity testing in child and adolescent assessment: Evaluating exaggerating, feigning and noncredible effort. New York: Guilford Press.
Kirkwood, M. W., Hargrave, D. D., & Kirk, J. W. (2011). The value of the WISC-IV digit span subtest in noncredible performance during pediatric neuropsychological examinations. Archives of Clinical Neuropsychology, 26(5), 377–385. https://doi.org/10.1093/arclin/acr040
Larsen, L., Hartmann, P., & Nyborg, H. (2008). The stability of general intelligence from early adulthood to middle-age. Intelligence, 36, 29–34.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). New York: Oxford University Press.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.
Lichtenberger, E. O., & Kaufman, A. S. (2009). Essentials of WAIS-IV assessment. Hoboken, NJ: John Wiley & Sons.
Luria, A. R. (1973). The working brain: An introduction to neuropsychology (trans. B. Haigh). London: Penguin Books.
Luria, A. R. (1980). Higher cortical functions in man (trans. B. Haigh, 2nd ed.). New York: Basic Books.
Malda, M., van de Vijver, F. J. R., Srinivasan, K., & Sukumar, P. (2010). Traveling with cognitive tests: Testing the validity of a KABC-II adaptation in India. Assessment, 17, 107–115.
Manly, J. J. (2005). Advantages and disadvantages of separate norms for African Americans. The Clinical Neuropsychologist, 19, 270–275. https://doi.org/10.1080/13854040590945346
Manly, J. J., & Echemendia, R. J. (2007). Race-specific norms: Using the model of hypertension to understand issues of race, culture, and education in neuropsychology. Archives of Clinical Neuropsychology, 22, 319–325.
Markwardt, F. C. (1989). Peabody individual achievement test-revised. Circle Pines, MN: American Guidance Service.
Markwardt, F. C. (1997). Peabody individual achievement test-revised/normative update. Circle Pines, MN: American Guidance Service.
Mays, K. L., Kamphaus, R. W., & Reynolds, C. R. (2009). Applications of the Kaufman assessment battery for children, 2nd edition in neuropsychological assessment. In Reynolds, C. R. & Fletcher-Janzen, E. (Eds.), Handbook of clinical child neuropsychology (3rd ed., pp. 281–296). Boston, MA: Springer.
McFadden, T. U. (1996). Creating language impairments in typically achieving children: The pitfalls of “normal” normative sampling. Language, Speech, and Hearing Services in Schools, 27, 3–9.
McGrew, K. S. (1997). Analysis of the major intelligence batteries according to a proposed comprehensive Gf-Gc framework. In Flanagan, D. P., Genshaft, J. L., & Harrison, P. L. (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 151–179). New York: Guilford.
McGrew, K. S., & Evans, J. J. (2004). Internal and external factorial extensions to the Cattell-Horn-Carroll (CHC) theory of cognitive abilities: A review of factor analytic research since Carroll’s seminal 1993 treatise. St. Joseph, MN: Institute for Applied Psychometrics.
Meehl, P. E., & Rosen, A. (1955). Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores. Psychological Bulletin, 52(3), 194–216. https://doi.org/10.1037/h0048070
Meyers, J. E., Zellinger, M. M., Kockler, T., Wagner, M., & Miller, R. M. (2013). A validated seven-subtest short form for the WAIS-IV. Applied Neuropsychology: Adult, 20, 249–256.
Miller, D. C. (2013). Essentials of school neuropsychological assessment (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Miller, D. I., Davidson, P. S. R., Schindler, D., & Meisser, C. (2013). Confirmatory factor analysis of the WAIS-IV and WMS-IV in older adults. Journal of Psychoeducational Assessment, 31, 375–390.
Mitrushina, M., Boone, K. B., Razani, J., & D’Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). New York: Oxford University Press.
Mittenberg, W., Patton, C., Canyock, E. M., & Condit, D. C. (2002). Base rates of malingering and symptom exaggeration. Journal of Clinical and Experimental Neuropsychology, 24, 1094–1102.
Morgan, K. E., Rothlisberg, B. A., McIntosh, D. E., & Hunt, M. S. (2009). Confirmatory factor analysis of the KABC-II in preschool children. Psychology in the Schools, 46(6), 515–525. https://doi.org/10.1002/pits.20394
Naglieri, J. A. (2015). Naglieri nonverbal ability test (3rd ed.). Bloomington, MN: Pearson.
Naglieri, J. A., & Das, J. P. (1997). Cognitive assessment system. Itasca, IL: Riverside.
Naglieri, J. A., Das, J. P., & Goldstein, S. (2014a). Cognitive assessment system (2nd ed.). Itasca, IL: Riverside.
Naglieri, J. A., Das, J. P., & Goldstein, S. (2014b). Cognitive assessment system – second edition: Brief. Itasca, IL: Riverside.
Naglieri, J. A., Das, J. P., & Goldstein, S. (2014c). Cognitive assessment system – second edition: Rating scale. Itasca, IL: Riverside.
Niileksela, C. R., Reynolds, M. R., & Kaufman, A. S. (2013). An alternative Cattell-Horn-Carroll (CHC) factor structure of the WAIS-IV: Age invariance of an alternative model for ages 70–90. Psychological Assessment, 25, 391–404.
Norfolk, P. A., Farmer, R. L., Floyd, R. G., Woods, I. L., Hawkins, H. K., & Irby, S. M. (2014). Norm block sample sizes: A review of 17 individually administered intelligence tests. Journal of Psychoeducational Assessment, 33, 544–554.
Norman, M. A., Evans, J. D., Miller, S. W., & Heaton, R. K. (2000). Demographically corrected norms for the California Verbal Learning Test. Journal of Clinical and Experimental Neuropsychology, 22, 80–94.
Oakland, T., Douglas, S., & Kane, H. (2016). Top ten standardized tests used internationally with children and youth by school psychologists in 64 countries: A 24-year follow-up study. Journal of Psychoeducational Assessment, 34(2), 166–176. https://doi.org/10.1177/0734282915595303
Pearson. (2009a). Wechsler individual achievement test (3rd ed.). San Antonio, TX: Author.
Pearson. (2009b). Advanced clinical solutions for WAIS-IV and WMS-IV. San Antonio, TX: Author.
Peña, E. D., Spaulding, T. J., & Plante, E. (2006). The composition of normative groups and diagnostic decision making: Shooting ourselves in the foot. American Journal of Speech-Language Pathology, 15, 247–254.
Potvin, D. C. H., Keith, T. Z., Caemmerer, J. M., & Trundt, K. M. (2015). Confirmatory factor structure of the Kaufman Assessment Battery for Children-Second Edition with preschool children: Too young for differentiation? Journal of Psychoeducational Assessment, 33(6), 522–533. https://doi.org/10.1177/0734282914568538
Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsychologists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members. Archives of Clinical Neuropsychology, 20(1), 33–65. https://doi.org/10.1016/j.acn.2004.02.005
Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology, 31, 206–230. https://doi.org/10.1093/arclin/acw007
Raiford, S. E. (2017). Essentials of WISC-V integrated assessment. Hoboken, NJ: John Wiley & Sons.
Raiford, S. E., Drozdick, L. W., & Zhang, O. (2015). Q-interactive technical report 11: Q-interactive special group studies: The WISC-V and children with autism spectrum disorder and accompanying language impairment or attention-deficit/hyperactivity disorder. Bloomington, MN: Pearson. www.helloq.com/research.html
Raiford, S. E., Drozdick, L. W., & Zhang, O. (2016). Q-interactive technical report 13: Q-interactive special group studies: The WISC-V and children with specific learning disorders in reading and mathematics. Bloomington, MN: Pearson. www.helloq.com/research.html
Raiford, S. E., Holdnack, J., Drozdick, L. W., & Zhang, O. (2014). Q-interactive technical report 9: Q-interactive special group studies: The WISC-V and children with intellectual giftedness and intellectual disabilities. Bloomington, MN: Pearson. www.helloq.com/research.html
Raiford, S. E., Zhang, O., Drozdick, L. W., Getz, K., Wahlstrom, D., Gabel, A., Holdnack, J., & Daniel, M. H. (2016). Q-interactive technical report 12: WISC-V coding and symbol search in digital format: Reliability, validity, special group studies, and interpretation. Bloomington, MN: Pearson. www.helloq.com/research.html
Randolph, C. (1998). Repeatable battery for the assessment of neuropsychological status. San Antonio, TX: Pearson.
Raven, J. C. (2018). Raven’s 2 progressive matrices: Clinical edition. Bloomington, MN: Pearson.
Reynolds, C. R., & Fletcher-Janzen, E. (Eds.). (2007). Encyclopedia of special education: A reference for the education of children, adolescents, and adults with disabilities and other exceptional individuals, Vol. 3 (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Reynolds, C. R., & Kamphaus, R. W. (2015). Reynolds intellectual assessment scales (2nd ed.). Lutz, FL: Psychological Assessment Resources.
Reynolds, M. R., Keith, T. Z., Fine, J. G., Fisher, M. E., & Low, J. A. (2007). Confirmatory factor structure of the Kaufman Assessment Battery for Children—Second Edition: Consistency with Cattell–Horn–Carroll theory. School Psychology Quarterly, 22(4), 511–539. https://doi.org/10.1037/1045-3830.22.4.511
Reynolds, M. R., Ridley, K. P., & Patel, P. G. (2008). Sex differences in latent general and broad cognitive abilities for children and youth: Evidence from higher-order MG-MACS and MIMIC models. Intelligence, 36, 236–260.
Rohling, M. L., Miller, R. M., Axelrod, B. N., Wall, J. R., Lee, A. J. H., & Kinikini, D. T. (2015). Is co-norming required? Archives of Clinical Neuropsychology, 30, 611–633.
Roid, G. H. (2003). Stanford-Binet intelligence scales – fifth edition: Technical manual. Itasca, IL: Riverside.
Rowe, E. W., Kingsley, J. M., & Thompson, D. F. (2010). Predictive ability of the general ability index (GAI) versus the full scale IQ among gifted referrals. School Psychology Quarterly, 25(2), 119–128. https://doi.org/10.1037/a0020148
Rushton, J. P., & Rushton, E. W. (2003). Brain size, IQ, and racial group differences: Evidence from musculoskeletal traits. Intelligence, 31, 139–155. https://doi.org/10.1016/S0160-2896(02)00137-X
Russell, E. W., Russell, S. L., & Hill, B. D. (2005). The fundamental psychometric status of neuropsychological batteries. Archives of Clinical Neuropsychology, 20(6), 785–794. https://doi.org/10.1016/j.acn.2005.05.001
Sameroff, A. J., Seifer, R., Baldwin, A., & Baldwin, C. (1993). Stability of intelligence from preschool to adolescence: The influence of social and family risk factors. Child Development, 64, 80–97.
Sattler, J. M. (2008). Assessment of children: Cognitive foundations (5th ed.). La Mesa, CA: Author.
Sattler, J. M. (2016). Assessment of children: WISC-V and WPPSI-IV. La Mesa, CA: Author.
Sattler, J. M., & Ryan, J. J. (2009). Assessment with the WAIS-IV. La Mesa, CA: Author.
Scheiber, C. (2016a). Do the Kaufman tests of cognitive ability and academic achievement display construct bias across a representative sample of Black, Hispanic, and Caucasian school-age children in grades 1 through 12? Psychological Assessment, 28(8), 942–952. https://doi.org/10.1037/pas0000236
Scheiber, C. (2016b). Is the Cattell-Horn-Carroll-based factor structure of the Wechsler Intelligence Scale for Children-fifth edition (WISC-V) construct invariant for a representative sample of African-American, Hispanic, and Caucasian male and female students ages 6 to 16 years? Journal of Pediatric Neuropsychology, 2(3–4), 79–88. https://doi.org/10.1007/s40817-016-0019-7
Scheiber, C., & Kaufman, A. S. (2015). Which of the three KABC-II global scores is the least biased? Journal of Pediatric Neuropsychology, 1, 21–35.
Schmidt, F. L., & Hunter, J. E. (2004). General mental ability in the world of work: Occupational attainment and job performance. Journal of Personality and Social Psychology, 86, 162–173.
Schneider, W. J., & McGrew, K. S. (2012). The Cattell-Horn-Carroll model of intelligence. In Flanagan, D. P. & Harrison, P. L. (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 99–114). New York: Guilford Press.
Schneider, W. J., & McGrew, K. S. (2018). The Cattell-Horn-Carroll theory of cognitive abilities. In Flanagan, D. P. & McDonough, E. M. (Eds.), Contemporary intellectual assessment: Theories, tests and issues (4th ed., pp. 73–163). New York: Guilford Press.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014a). Woodcock-Johnson-IV tests of achievement. Rolling Meadows, IL: Riverside.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014b). Woodcock-Johnson-IV tests of oral language. Rolling Meadows, IL: Riverside.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014). Woodcock-Johnson–IV tests of cognitive abilities. Rolling Meadows, IL: Riverside.
Schwartz, H. (2014). Following reports of forced sterilization of female inmates, California passes ban. Washington Post, 26 September. www.washingtonpost.com/blogs/govbeat/wp/2014/09/26/following-reports-of-forced-sterilization-of-female-prison-inmates-california-passes-ban/?utm_term=.0085bcae1945
Sotelo-Dynega, M., & Dixon, S. G. (2014). Cognitive assessment practices: A survey of school psychologists. Psychology in the Schools, 51(10), 1031–1045.
Spearman, C. (1927). The abilities of man, their nature, and measurement. New York: Macmillan.
Staffaroni, A. M., Eng, M. E., Moses, J. A. Jr., Zeiner, H. K., & Wickham, R. E. (2018). Four- and five-factor models of the WAIS-IV in a clinical sample: Variations in indicator configuration and factor correlational structure. Psychological Assessment, 30, 693–706.
Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69(5), 797–811.
Sternberg, R. J. (1999). A triarchic approach to the understanding and assessment of intelligence in multicultural populations. Journal of School Psychology, 37(2), 145–159. https://doi.org/10.1016/S0022-4405(98)00029-6
Sternberg, R. J., & Detterman, D. K. (Eds.). (1986). What is intelligence? Norwood, NJ: Ablex.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests: Administration, norms, and commentary (3rd ed.). New York: Oxford University Press.
Taub, G. E., & McGrew, K. S. (2004). A confirmatory factor analysis of Cattell-Horn-Carroll theory and cross-age invariance of the Woodcock-Johnson Tests of Cognitive Abilities III. School Psychology Quarterly, 19(1), 72–87.
Terman, L. M. (1916). The measurement of intelligence: An explanation of and a complete guide for the use of the Stanford revision and extension of the Binet-Simon Intelligence Scale. Oxford: Houghton Mifflin.
Thorndike, R. L., Hagen, E. P., & Sattler, J. M. (1986). Stanford-Binet intelligence scale (4th ed.). Chicago: Riverside.
Tombaugh, T. N. (1997). The Test of Memory Malingering (TOMM): Normative data from cognitively intact and cognitively impaired individuals. Psychological Assessment, 9, 260–268.
Turkheimer, E., Haley, A., Waldron, M., D’Onofrio, B., & Gottesman, I. I. (2003). Socioeconomic status modifies heritability of IQ in young children. Psychological Science, 14(6), 623–628. https://doi.org/10.1046/j.0956-7976.2003.psci_1475.x
Vig, S., & Sanders, M. (2007). Cognitive assessment. In Brassard, M. R., & Boehm, A. E. (Eds.), Preschool assessment: Principles and practices (pp. 383–419). New York: Guilford Press.
Visser, L., Ruiter, S. A., van der Meulen, B. F., Ruijssenaars, W. A., & Timmerman, M. E. (2012). A review of standardized developmental assessment instruments for young children and their applicability for children with special needs. Journal of Cognitive Education and Psychology, 11(2), 102–127.
Ward, L. C., Bergman, M. A., & Hebert, K. R. (2012). WAIS-IV subtest covariance structure: Conceptual and statistical considerations. Psychological Assessment, 24(2), 328–340. https://doi.org/10.1037/a0025614
Watkins, M. W., & Beaujean, A. A. (2014). Bifactor structure of the Wechsler Preschool and Primary Scale of Intelligence – Fourth Edition. School Psychology Quarterly, 29, 52–63.
Wechsler, D. (1955). Wechsler adult intelligence scale. New York: Psychological Corporation.
Wechsler, D. (2004). Wechsler intelligence scale for children: Spanish (4th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2008). Wechsler adult intelligence scale (4th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2009). Wechsler memory scale (4th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2011). Wechsler abbreviated scale of intelligence (2nd ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2012). Wechsler preschool and primary scale of intelligence (4th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2014). Wechsler intelligence scale for children (5th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D. (2017). Wechsler intelligence scale for children: Spanish (5th ed.). Bloomington, MN: NCS Pearson.
Wechsler, D., & Kaplan, E. (2015). Wechsler intelligence scale for children: Integrated (5th ed.). Bloomington, MN: NCS Pearson.
Wiederholt, J. L., & Bryant, B. R. (2012). Gray oral reading tests (5th ed.). Austin, TX: PRO-ED.
Weiss, L. G., Keith, T. Z., Zhu, J., & Chen, H. (2013a). WAIS-IV and clinical validation of the four- and five-factor interpretive approaches [special edition]. Journal of Psychoeducational Assessment, 31(2), 94–113. https://doi.org/10.1177/0734282913478030
Weiss, L. G., Keith, T. Z., Zhu, J., & Chen, H. (2013b). WISC-IV and clinical validation of the four- and five-factor interpretive approaches [special edition]. Journal of Psychoeducational Assessment, 31(2), 114–131. https://doi.org/10.1177/0734282913478032
Weiss, L. G., Saklofske, D. H., Prifitera, A., & Holdnack, J. A. (2006). WISC-IV: Advanced clinical interpretation. Burlington, MA: Academic Press.
Wilkinson, G. S., & Robertson, G. J. (2006). WRAT4 wide range achievement test (4th ed.). Lutz, FL: Psychological Assessment Resources.
Wong, T. M., Strickland, T. L., Fletcher-Janzen, E., Ardila, A., & Reynolds, C. R. (2000). Theoretical and practical issues in the neuropsychological treatment and assessment of culturally dissimilar patients. In Fletcher-Janzen, E., Strickland, T. L., & Reynolds, C. R. (Eds.), Handbook of cross-cultural neuropsychology (pp. 3–18). New York: Springer Science & Business Media.
Woodcock, R. W., & Johnson, M. B. (1977). Woodcock-Johnson psycho-educational battery. Rolling Meadows, IL: Riverside.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001a). Woodcock-Johnson III: Tests of cognitive abilities. Chicago: Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001b). Woodcock-Johnson III: Tests of achievement instrument. Itasca, IL: Riverside Publishing.

References

Abedi, J. (2002). Standardized achievement tests and English language learners: Psychometric issues. Educational Assessment, 8, 231–257.
Abedi, J., Hofstetter, C., Baker, E., & Lord, C. (2001). NAEP math performance test accommodations: Interactions with student language background (CSE Technical Report 536). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., & Leon, S. (1999). Impact of students’ language background on content-based performance: Analyses of extant data. Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Leon, S., & Mirocha, J. (2003). Impact of student language background on content-based performance: Analyses of extant data (CSE Technical Report 603). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.
Adelman, H. S., Lauber, B. A., Nelson, P., & Smith, D. C. (1989). Toward a procedure for minimizing and detecting false positive diagnoses of learning disability. Journal of Learning Disabilities, 22, 234–244.
American Psychiatric Association. (2013). Desk reference to the diagnostic criteria from DSM-5. Washington, DC: American Psychiatric Publishing.
APA (American Psychological Association). (2004). Code of fair testing practices in education. Washington, DC: Joint Committee on Testing Practices.
Babad, E. Y., Inbar, J., & Rosenthal, R. (1982). Pygmalion, Galatea, and the Golem: Investigations of biased and unbiased teachers. Journal of Educational Psychology, 74, 459–474.
Banks, K. (2006). A comprehensive framework for evaluating hypotheses about cultural bias in educational testing. Applied Measurement in Education, 19, 115–132.
Banks, K. (2012). Are inferential reading items more susceptible to cultural bias than literal reading items? Applied Measurement in Education, 25, 220–245.
Breaux, K. C. (2009). Wechsler individual achievement test: Technical manual (3rd ed.). San Antonio, TX: Pearson.
Breaux, K. C., Bray, M. A., Root, M. M., & Kaufman, A. S. (Eds.) (2017). Special issue on studies of students’ errors in reading, writing, math, and oral language. Journal of Psychoeducational Assessment, 35. https://doi.org/10.1177/0734282916669656
Brooks, B. L., Holdnack, J. A., & Iverson, G. L. (2011). Advanced clinical interpretation of the WAIS-IV and WMS-IV: Prevalence of low scores varies by level of intelligence and years of education. Assessment, 18, 156–167.
Brown, J. I., Fishco, V. V., & Hanna, G. (1993). Nelson-Denny reading test (forms G and H). Austin, TX: PRO-ED.
Coleman, C., Lindstrom, J., Nelson, J., Lindstrom, W., & Gregg, K. N. (2010). Passageless comprehension on the Nelson-Denny Reading Test: Well above chance for university students. Journal of Learning Disabilities, 43, 244–249.
Connolly, A. (2007). Key Math-3 Diagnostic Assessment. Austin, TX: Pearson.
Cruickshank, W. M. (1977). Least-restrictive placement: Administrative wishful thinking. Journal of Learning Disabilities, 10, 193–194.
CTB/McGraw-Hill. (1999). Teacher’s guide to Terra Nova: CTBS battery, survey, and plus editions, multiple assessments. Monterey, CA: Author.
Davis, L. B., & Fuchs, L. S. (1995). “Will CBM help me learn?”: Students’ perception of the benefits of curriculum-based measurement. Education and Treatment of Children, 18(1), 19–32.
Deeney, T. A., & Shim, M. K. (2016). Teachers’ and students’ views of reading fluency: Issues of consequential validity in adopting one-minute reading fluency assessments. Assessment for Effective Instruction, 41(2), 109126.Google Scholar
DeRight, J., & Carone, D. A. (2015). Assessment of effort in children: A systematic review. Child Neuropsychology, 21, 124.Google Scholar
Ford, J. W., Missall, K. N., Hosp, J. L., & Kuhle, J. L. (2017). Examining oral passage reading rate across three curriculum-based measurement tools for predicting grade-level proficiency. School Psychology Review, 46, 363–378.
Fuchs, L. S. (2016). Curriculum based measurement as the emerging alternative: Three decades later. Learning Disabilities Research and Practice, 32, 5–7.
Fuchs, L. S., & Fuchs, D. (2002). Curriculum-based measurement: Describing competence, enhancing outcomes, evaluating treatment effects, and identifying treatment nonresponders. Peabody Journal of Education, 77(2), 64–84.
Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
Gardner, E. (1989). Five common misuses of tests. ERIC Digest No. 108. Washington, DC: ERIC Clearinghouse on Tests Measurement and Evaluation.
Greiff, S., Wüstenberg, S., Holt, D. V., Goldhammer, F., & Funke, J. (2013). Computer-based assessment of complex problem solving: Concept, implementation, and application. Educational Technology Research and Development, 61, 407–421.
Hammill, D. D., & Larsen, S. C. (2009). Test of written language (4th ed.). Austin, TX: PRO-ED.
Harrison, A. G., & Edwards, M. J. (2010). Symptom exaggeration in post-secondary students: Preliminary base rates in a Canadian sample. Applied Neuropsychology, 17, 135–143.
Harrison, A. G., Edwards, M. J., Armstrong, I., & Parker, K. C. H. (2010). An investigation of methods to detect feigned reading disabilities. Archives of Clinical Neuropsychology, 25, 89–98.
Harrison, A. G., Edwards, M. J., & Parker, K. C. H. (2008). Identifying students feigning dyslexia: Preliminary findings and strategies for detection. Dyslexia, 14, 228–246.
Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.
Hinnant, J. B., O’Brien, M., & Ghazarian, S. R. (2009). The longitudinal relations of teacher expectations to achievement in the early school years. Journal of Educational Psychology, 101, 662–670.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2016). The ABCs of CBM: A practical guide to curriculum-based measurement (2nd ed.). New York: Guilford Press.
Hosp, J. L., & Suchey, N. (2014). Reading assessment: Reading fluency, reading fluently, and comprehension – Commentary on the special topic. School Psychology Review, 43, 59–68.
Huff, K. L., & Sireci, S. G. (2001). Validity issues in computer-based testing. Educational Measurement: Issues and Practice, 20, 16–25.
Jones, E. D., Southern, W. T., & Brigham, F. J. (1998). Curriculum-based assessment: Testing what is taught and teaching what is tested. Intervention in School and Clinic, 33, 239–249.
Joseph, L. M., Wargelin, L., & Ayoub, S. (2016). Preparing school psychologists to effectively provide services to students with dyslexia. Perspectives on Language and Literacy, 42(4), 15–23.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman test of educational achievement (3rd ed.). San Antonio, TX: Pearson.
Kaufman, A. S., Kaufman, N. L., & Breaux, K. C. (2014). Technical and interpretive manual. Kaufman Test of Educational Achievement – Third Edition (KTEA-3) Comprehensive Form. Bloomington, MN: NCS Pearson.
Kaufman, A. S., Raiford, S. E., & Coalson, D. L. (2016). Intelligent testing with the WISC-V. Hoboken, NJ: John Wiley & Sons.
Keenan, J. M., & Betjemann, R. S. (2006). Comprehending the Gray Oral Reading Test without reading it: Why comprehension tests should not include passage-independent items. Scientific Studies of Reading, 10, 363–380.
Keenan, J. M., Betjemann, R. S., & Olson, R. K. (2008). Reading comprehension tests vary in the skills that they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading, 12, 281–300.
Keenan, J. M., & Meenan, C. E. (2014). Test differences in diagnosing reading comprehension deficits. Journal of Learning Disabilities, 47, 125–135.
Kellow, J. T., & Jones, B. D. (2008). The effects of stereotypes on the achievement gap: Reexamining the academic performance of African American high school students. Journal of Black Psychology, 34(1), 94–120.
Kendeou, P., Papadopoulos, T. C., & Spanoudis, G. (2012). Processing demands of reading comprehension tests in young readers. Learning and Instruction, 22, 354–367.
Kieffer, M. J., Lesaux, N. K., Rivera, M., & Francis, D. J. (2009). Accommodations for English language learners taking large-scale assessments: A meta-analysis on effectiveness and validity. Review of Educational Research, 79, 1168–1201.
Kirkwood, M. W., Kirk, J. W., Blaha, R. Z., & Wilson, P. (2010). Noncredible effort during pediatric neuropsychological exam: A case series and literature review. Child Neuropsychology, 16, 604–618.
Leslie, L., & Caldwell, J. (2001). Qualitative reading inventory–3. New York: Addison Wesley Longman.
Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29, 4–16.
Lu, P. H., & Boone, K. B. (2002). Suspect cognitive symptoms in a 9-year-old child: Malingering by proxy? The Clinical Neuropsychologist, 16, 90–96.
Markwardt, F. C. (1997). Peabody individual achievement test – revised (normative update). Bloomington, MN: Pearson Assessments.
Martiniello, M. (2009). Linguistic complexity, schematic representations, and differential item functioning for English language learners in math tests. Educational Assessment, 14, 160–179.
McGrew, K. S., LaForte, E. M., & Schrank, F. A. (2014). Woodcock-Johnson IV: Technical manual [CD]. Itasca, IL: Houghton Mifflin Harcourt.
Molnar, M. (2017). Market is booming for digital formative assessments. Education Week, May 24. http://edweek.org/ew/articles/2017/05/24/market-is-booming-for-digital-formative-assessments.html
Monroe, M. (1932). Children who cannot read. Chicago, IL: University of Chicago Press.
NASP (National Association of School Psychologists). (2016). School psychologists’ involvement in assessment. Bethesda, MD: Author.
Nguyen, H. H. D., & Ryan, A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93, 1314–1334.
Rome, H. P., Swenson, W. M., Mataya, P., McCarthy, C. E., Pearson, J. S., Keating, F. R., & Hathaway, S. R. (1962). Symposium on automation techniques in personality assessment. Proceedings of the Staff Meetings of the Mayo Clinic, 37, 61–82.
Sattler, J. M. (2008). Assessment of children: Cognitive foundations. San Diego, CA: Author.
Schneider, J. W., Lichtenberger, E. O., Mather, N., & Kaufman, N. L. (2018). Essentials of assessment report writing. Hoboken, NJ: John Wiley & Sons.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014a). Woodcock-Johnson IV tests of achievement. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014b). Woodcock-Johnson IV tests of oral language. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014a). Woodcock-Johnson IV. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014b). Woodcock-Johnson IV tests of cognitive abilities. Itasca, IL: Houghton Mifflin Harcourt.
Shinn, M. R., Good, R. H., Knutson, N., & Tilly, D. W. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459–479.
Shute, V. J., Leighton, J. P., Jang, E. E., & Chu, M. W. (2016). Advances in the science of assessment. Educational Assessment, 21(1), 34–59.
Shute, V. J., & Rahimi, S. (2017). Review of computer‐based assessment for learning in elementary and secondary education. Journal of Computer Assisted Learning, 33(1), 1–19.
Singleton, C. H. (2001). Computer-based assessment in education. Educational and Child Psychology, 18(3), 58–74.
Spencer, S. J., Steele, C. M., & Quinn, D. M. (1999). Stereotype threat and women’s math performance. Journal of Experimental Social Psychology, 35, 4–28.
Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69, 797–811.
Sullivan, B. K., May, K., & Galbally, L. (2007). Symptom exaggeration by college adults in Attention-Deficit Hyperactivity Disorder and Learning Disorder assessments. Applied Neuropsychology, 14, 189–207.
Thurlow, M., Lazarus, S., & Christensen, L. (2013). Accommodations for assessment. In Lloyd, J., Landrum, T., Cook, B., & Tankersley, M. (Eds.), Research-based approaches for assessment (pp. 94–110). Upper Saddle River, NJ: Pearson.
Valencia, S. W., Smith, A. T., Reece, A. M., Li, M., Wixson, K. K., & Newman, H. (2010). Oral reading fluency assessment: Issues of construct, criterion, and consequential validity. Reading Research Quarterly, 45, 270–291.
Van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47, 497–527.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256. https://doi.org/10.1016/j.jsp.2006.11.004
Van Norman, E. R., Nelson, P. M., & Parker, D. C. (2018). A comparison of nonsense-word fluency and curriculum-based measurement of reading to measure response to phonics instruction. School Psychology Quarterly, 33, 573–581. https://doi.org/10.1037/spq0000237
Wechsler, D. (2009). Wechsler individual achievement test (3rd ed.). San Antonio, TX: Psychological Corporation.
Wechsler, D. (2014). Wechsler intelligence scale for children (5th ed.). San Antonio, TX: Psychological Corporation.
Wei, H., & Lin, J. (2015). Using out-of-level items in computerized adaptive testing. International Journal of Testing, 15, 50–70.
Wiederholt, J. L., & Bryant, B. R. (2001). Gray oral reading test (4th ed.). Austin, TX: PRO-ED.
Wiederholt, J. L., & Bryant, B. R. (2012). Gray oral reading test (5th ed.). Austin, TX: PRO-ED.
Willis, J. (2015). The historical role and best practice in identifying Specific Learning Disabilities. Paper presented at the New York Association of School Psychologists annual conference. Verona, NY, October.
Woodcock, R. W. (2011). Woodcock reading mastery test (3rd ed.). San Antonio, TX: Pearson.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock–Johnson III tests of achievement. Itasca, IL: Riverside.

References

Association for Assessment in Counseling. (2003). Standards for multicultural assessment. http://aac.ncat.edu/Resources/documents/STANDARDS%20FOR%20MULTICULTURAL%20ASSESSMENT%20FINAL.pdf
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191–215.
Betz, N. E., Borgen, F. H., & Harmon, L. (2005). Skills confidence inventory manual (revised ed.). Palo Alto, CA: Consulting Psychologists Press.
Betz, N. E., Borgen, F. H., Rottinghaus, P., Paulsen, A., Halper, C. R., & Harmon, L. W. (2003). The expanded Skills Confidence Inventory: Measuring basic dimensions of vocational activity. Journal of Vocational Behavior, 62, 76–100.
Betz, N. E., & Taylor, K. M. (2012). Manual for the Career Decision Self-Efficacy Scale and CDSE-Short Form. Menlo Park, CA: Mindgarden.
Bingham, R. P., & Ward, C. M. (1994). Career counseling with ethnic minority women. In Walsh, W. B. & Osipow, S. (Eds.), Career counseling with women (pp. 165–195). Hillsdale, NJ: Lawrence Erlbaum.
Blustein, D. L. (Ed.). (2013). The Oxford handbook of the psychology of working. New York: Oxford University Press.
Blustein, D. L. (2019). The importance of work in an age of uncertainty: The eroding work experience in America. New York: Oxford University Press.
Brown, S. D., & Hirschi, A. (2013). Personality, career development, and occupational attainment. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (2nd ed., pp. 299–328). New York: Wiley.
Chope, R. C. (2015). Card sorts, sentence completions, and other qualitative assessments. In Hartung, P. J., Savickas, M. L., & Walsh, W. B. (Eds.), APA handbook of career intervention, Vol. 2: Applications (pp. 71–84). Washington, DC: American Psychological Association. doi:10.1037/14439-006
Crites, J. O. (1973). The career maturity inventory. Monterey, CA: CTB/McGraw-Hill.
Dawis, R. V. (1992). The individual differences tradition in counseling psychology. Journal of Counseling Psychology, 39, 7–19.
Dawis, R. V., & Lofquist, L. H. (1984). A psychological theory of work adjustment. Minneapolis: University of Minnesota Press.
DeBell, C. (2006). What all applied psychologists need to know about the world of work. Professional Psychology: Research and Practice, 37, 325–333.
Donnay, D. A. C., & Borgen, F. H. (1999). The incremental validity of vocational self-efficacy: An examination of interest, self-efficacy, and occupation. Journal of Counseling Psychology, 46(4), 432–447.
Donnay, D. A. C., Morris, M., Schaubhut, N., & Thompson, R. (2005). Strong Interest Inventory manual: Research, development, and strategies for interpretation. Palo Alto, CA: Consulting Psychologists Press.
Duckworth, J. (1990). The counseling approach to the use of testing. The Counseling Psychologist, 18, 198–204.
Flores, L. Y., Berkel, L. A., Nilsson, J. E., Ojeda, L., Jordan, S. E., Lynn, G. L., & Leal, V. M. (2006). Racial/ethnic minority vocational research: A content and trend analysis across 36 years. The Career Development Quarterly, 55, 2–21.
Fouad, N. A., & Kantamneni, N. (2008). Contextual factors in vocational psychology: Intersections of individual, group, and societal dimensions. In Brown, S. D. & Lent, R. W. (Eds.), Handbook of counseling psychology (4th ed., pp. 408–425). Hoboken, NJ: John Wiley.
Fouad, N. A., & Mohler, C. E. (2004). Cultural validity of Holland’s theory and the Strong Interest Inventory for five racial/ethnic groups. Journal of Career Assessment, 12(4), 432–439. doi:10.1177/1069072704267736
Gay, E. G., Weiss, D. J., Hendel, D. D., Dawis, R. V., & Lofquist, L. H. (1971). Manual for the Minnesota Importance Questionnaire. http://vpr.psych.umn.edu/instruments/miq-minnesota-importance-questionnaire
Glavin, K. (2015). Measuring and assessing career maturity and adaptability. In Hartung, P. J., Savickas, M. L., & Walsh, W. B. (Eds.), APA handbook of career intervention, Vol. 2: Applications (pp. 183–192). Washington, DC: American Psychological Association.
Goldfinger, K. B. (2019). Psychological testing in everyday life: History, science, and practice. Thousand Oaks, CA: SAGE.
Hansen, J. C. (2005). Assessment of interests. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (pp. 281–304). New York: Wiley.
Hansen, J. C. (2013). Nature, importance, and assessment of interests. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (2nd ed., pp. 387–416). New York: Wiley.
Hansen, J. C., & Dik, B. J. (2005). Evidence of 12-year predictive and concurrent validity for SII Occupational Scale scores. Journal of Vocational Behavior, 67, 365–378.
Hansen, J. C., & Swanson, J. L. (1983). The effect of stability of interests on the predictive and concurrent validity of the SCII for college majors. Journal of Counseling Psychology, 30, 194–201.
Harmon, L. W., Hansen, J. C., Borgen, F. H., & Hammer, A. C. (2005). Strong interest inventory: Applications and technical guide. Palo Alto, CA: Consulting Psychologists Press.
Hartung, P. J. (2005). Integrated career assessment and counseling: Mindsets, models, and methods. In Walsh, W. B. & Savickas, M. L. (Eds.), Handbook of vocational psychology (3rd ed., pp. 371–395). Mahwah, NJ: Lawrence Erlbaum.
Haverkamp, B. E. (2013). Education and training in assessment for professional psychology: Engaging the “reluctant student.” In Geisinger, K. F. (Ed.), APA handbook of testing and assessment in psychology, Vol. 2: Testing and assessment in clinical and counseling psychology (pp. 63–82). Washington, DC: American Psychological Association.
Holland, J. L. (1959). A theory of vocational choice. Journal of Counseling Psychology, 6, 35–45.
Holland, J. L. (1985). The self-directed search professional manual. Odessa, FL: Psychological Assessment Resources.
Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.
Holland, J. L., Fritzsche, B. A., & Powell, A. B. (1997). The self-directed search technical manual. Odessa, FL: Psychological Assessment Resources.
Holland, J. L., & Messer, M. A. (2013). The self-directed search professional manual. Odessa, FL: Psychological Assessment Resources.
Jenkins, J. A. (2013). Strong Interest Inventory and Skills Confidence Inventory. In Wood, C. & Hayes, D. G. (Eds.), A counselor’s guide to career assessment instruments (6th ed., pp. 280–284). Broken Arrow, OK: National Career Development Association.
Juntunen, C. L. (2006). The psychology of working: The clinical context. Professional Psychology: Research and Practice, 37(4), 342–350.
Leuty, M. E., & Hansen, J. C. (2011). Evidence of construct validity for work values. Journal of Vocational Behavior, 79, 379–390.
Lewis, P., & Rivkin, D. (1999). Development of the O*NET interest profiler. www.onetcenter.org/dl_files/IP.pdf
McCloy, R., Waugh, G., Medsker, G., Wall, J., Rivkin, D., & Lewis, P. (1999). Development of the O*NET computerized Work Importance Profiler. www.onetcenter.org/dl_files/DevCWIP.pdf
McMahon, M., Watson, M., & Lee, M. C. Y. (2019). Qualitative career assessment: A review and reconsideration. Journal of Vocational Behavior, 110, 420–432. https://doi.org/10.1016/j.jvb.2018.03.009
Metz, A. J., & Jones, J. E. (2013). Ability and aptitude assessment in career counseling. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (2nd ed., pp. 449–476). New York: Wiley.
Nauta, M. (2010). The development, evolution, and status of Holland’s theory of vocational personalities: Reflections and future directions for counseling psychologists. Journal of Counseling Psychology, 57(1), 11–22.
NCDA (National Career Development Association). (2009). Multicultural career counseling minimum competencies. www.ncda.org/aws/NCDA/pt/sp/guidelines
Parsons, F. (1989). Choosing a vocation. Garrett Park, MD: Garrett Park Press. (Original work published 1909.)
Pope, M., Flores, L. Y., & Rottinghaus, P. J. (Eds.). (2014). The role of values in careers. Charlotte, NC: Information Age Publishing.
Porfeli, E. J., & Savickas, M. L. (2012). Career Adapt-Abilities Scale-USA form: Psychometric properties and relation to vocational identity. Journal of Vocational Behavior, 80, 748–753.
Prince, J. P., & Potoczniak, M. J. (2012). Using psychological assessment tools with lesbian, gay, bisexual, and transgender clients. In Dworkin, S. H., & Pope, M. (Eds.), Casebook for counseling lesbian, gay, bisexual, and transgender persons and their families. Alexandria, VA: American Counseling Association.
Randahl, G. J., Hansen, J. C., & Haverkamp, B. E. (1993). Instrumental behaviors following test administration and interpretation: Exploration validity of the Strong Interest Inventory. Journal of Counseling and Development, 71(4), 435–439.
Rossier, J. (2015). Personality assessment and career interventions. In Hartung, P. J., Savickas, M. L., & Walsh, W. B. (Eds.), APA handbook of career intervention, Vol. 1: Foundations (pp. 327–350). Washington, DC: American Psychological Association.
Rounds, J. B., Henley, G. A., Dawis, R. V., Lofquist, L. H., & Weiss, D. J. (1981). Manual for the Minnesota importance questionnaire. Minneapolis: University of Minnesota.
Rounds, J. B., & Jin, J. (2013). Nature, importance and assessment of needs and values. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (2nd ed., pp. 417–448). New York: Wiley.
Rounds, J. B., Mazzeo, S. E., Smith, T. J., Hubert, L., Lewis, P., & Rivkin, D. (1999). O*NET Computerized Interest Profiler: Reliability, validity, and comparability. www.onetcenter.org/dl_files/CIP_RVC.pdf
Rounds, J. B., Ming, C. W. J., Cao, M., Song, C., & Lewis, P. (2016). Development of an O*NET® Mini Interest Profiler (Mini-IP) for mobile devices: Psychometric characteristics. www.onetcenter.org/reports/Mini-IP.html
Rounds, J. B., Su, R., Lewis, P., & Rivkin, D. (2010). O*NET interest profiler short form psychometric characteristics: Summary. www.onetcenter.org/dl_files/IPSF_Psychometric.pdf
Sampson, J. P., Jr., McClain, M., Dozier, C., Carr, D. L., Lumsden, J. A., & Osborn, D. S. (2013). Computer-assisted career assessment. In Wood, C. & Hayes, D. G. (Eds.), A counselor’s guide to career assessment instruments (6th ed., pp. 33–47). Broken Arrow, OK: National Career Development Association.
Savickas, M. L. (1997). Career adaptability: An integrative construct for life-span, life-space theory. Career Development Quarterly, 45, 247–259.
Savickas, M. L. (2013). Career construction theory and practice. In Lent, R. W. & Brown, S. D. (Eds.), Career development and counseling: Putting theory and research to work (2nd ed., pp. 147–186). Hoboken, NJ: Wiley.
Savickas, M. L. (2018). Career counseling (2nd ed.). Washington, DC: American Psychological Association.
Savickas, M. L., & Porfeli, E. J. (2012). Career Adapt-Abilities Scale: Construction, reliability, and measurement equivalence across 13 countries. Journal of Vocational Behavior, 80, 661–673.
Savickas, M. L., Taber, B. J., & Spokane, A. R. (2002). Convergent and discriminant validity of five interest inventories. Journal of Vocational Behavior, 61, 139–184.
Schaubhut, N. A., & Thompson, R. C. (2016). Technical brief for the Strong Interest Inventory assessment: Using the Strong with LGBT populations. Palo Alto, CA: Consulting Psychologists Press.
Super, D. E. (1953). A theory of vocational development. American Psychologist, 8, 185–190.
Super, D. E. (1955). The dimensions and measurement of vocational maturity. Teachers College Record, 57, 157–163.
Super, D. E. (1990). A life-span, life-space approach to career development. In Brown, D. & Brooks, L. (Eds.), Career choice and development: Applying contemporary theories to practice (pp. 197–261). San Francisco, CA: Jossey-Bass.
Super, D. E., & Knasel, E. G. (1981). Career development and adulthood: Some theoretical problems and a possible solution. British Journal of Guidance and Counselling, 9, 194–201.
Swanson, J. L. (2012). Measurement and assessment. In Altmaier, E. M. & Hansen, J. C. (Eds.), The Oxford handbook of counseling psychology (pp. 208–236). New York: Oxford University Press.
Swanson, J. L. (2013). Assessment of career development and maturity. In Geisinger, K. F. (Ed.), APA handbook of testing and assessment in psychology, Vol. 2: Testing and assessment in clinical and counseling psychology (pp. 349–362). Washington, DC: American Psychological Association.
Swanson, J. L., & Fouad, N. A. (in press). Career theory and practice: Learning through case studies (4th ed.). Thousand Oaks, CA: Sage.
Swanson, J. L., & Schneider, M. (in press). Theory of Work Adjustment. In Brown, S. D. & Lent, R. W. (Eds.), Career development and counseling: Putting theory and research to work (3rd ed.). New York: Wiley.
Taylor, K. M., & Betz, N. E. (1983). Application of self-efficacy theory to the understanding and treatment of career indecision. Journal of Vocational Behavior, 22, 63–81.
Walsh, W. B., & Betz, N. E. (2001). Tests and assessment. Upper Saddle River, NJ: Prentice Hall.
Zytowski, D. G. (2015). Test interpretation: Talking with people about their test results. In Hartung, P. J., Savickas, M. L., & Walsh, W. B. (Eds.), APA handbook of career intervention, Vol. 2: Applications (pp. 3–9). Washington, DC: American Psychological Association.

References

American Academy of Clinical Neuropsychology. (2017). Adult neuropsychology. https://theaacn.org/adult-neuropsychology
APA (American Psychological Association). (2010). Ethical principles of psychologists and code of conduct. Washington, DC: Author.
Ashendorf, L., Vanderslice-Barr, J. L., & McCaffrey, R. J. (2009). Motor tests and cognition in healthy older adults. Applied Neuropsychology, 16, 171–179.
Baldo, J. V., Arevalo, A., Patterson, J. P., & Dronkers, N. F. (2013). Gray and white matter correlates of picture naming: Evidence from a voxel-based lesion analysis of the Boston Naming Test. Cortex, 49, 658–667.
Barrash, J., Stillman, A., Anderson, S. W., Uc, E. Y., Dawson, J. D., & Rizzo, M. (2010). Prediction of driving ability with neuropsychological testing: Demographic adjustments diminish accuracy. Journal of the International Neuropsychological Society, 16, 679–686.
Bauer, R. M., Iverson, G. L., Cernich, A. N., Binder, L. M., Ruff, R. M., & Nagle, R. I. (2012). Computerized neuropsychological assessment devices: Joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. Archives of Clinical Neuropsychology, 27, 362–373.
Benton, A. L., Hamsher, K. deS., & Sivan, A. B. (1994). Multilingual aphasia examination (3rd ed.). San Antonio, TX: Psychological Corporation.
Bilder, R. (2011). Neuropsychology 3.0: Evidence-based science and practice. Journal of the International Neuropsychological Society, 17, 7–13.
Binder, L. M. (1982). Constructional strategies on Complex Figure drawings after unilateral brain damage. Journal of Clinical Neuropsychology, 4, 51–58.
Binder, L. M., & Binder, A. L. (2011). Relative subtest scatter in the WAIS-IV standardization sample. The Clinical Neuropsychologist, 25, 62–71.
Binder, L. M., Iverson, G. L., & Brooks, B. L. (2009). To err is human: “Abnormal” neuropsychological scores and variability are common in healthy adults. Archives of Clinical Neuropsychology, 24, 31–46.
Bohnen, N. I., Kuwabara, H., Constantine, G. M., Mathis, C. A., & Moore, R. Y. (2007). Grooved Pegboard as a test of biomarker nigrostriatal denervation in Parkinson’s disease. Neuroscience Letters, 424, 185–189.
Bowman, M. L. (2002). The perfidy of percentiles. Archives of Clinical Neuropsychology, 17, 295–303.
Brooks, B. L., Holdnack, J. A., & Iverson, G. L. (2011). Advanced clinical interpretation of the WAIS-IV and the WMS-IV: Prevalence of low scores varies by level of intelligence and years of education. Assessment, 18, 156–167.
Brooks, B. L., Iverson, G. L., Holdnack, J. A., & Feldman, H. H. (2008). Potential for misclassification of mild cognitive impairment: A study of memory scores on the Wechsler Memory Scale-III in healthy older adults. Journal of the International Neuropsychological Society, 14, 463–478.
Bush, S., & Lees-Haley, P. R. (2005). Threats to the validity of forensic neuropsychology data: Ethical considerations. Journal of Forensic Psychology, 4, 45–66.
Casaletto, K. B., & Heaton, R. K. (2017). Neuropsychological assessment: Past and future. Journal of the International Neuropsychological Society, 23, 9–10.
Chelune, G. J. (2010). Evidence-based research and practice in clinical neuropsychology. The Clinical Neuropsychologist, 24, 454–467.
Constantinou, M., Ashendorf, L., & McCaffrey, R. J. (2005). Effects of a third party observer during neuropsychological assessment: When the observer is a video camera. Journal of Forensic Neuropsychology, 4, 39–47.
Crawford, J. R., Garthwaite, P. H., Morrice, N., & Duff, K. (2012). Some supplementary methods for the analysis of the RBANS. Psychological Assessment, 24, 365–374.
Dandachi-FitzGerald, B., Ponds, R. W. H. M., & Merten, T. (2013). Symptom validity and neuropsychological assessment: A survey of practices and beliefs of neuropsychologists in six European countries. Archives of Clinical Neuropsychology, 28, 771–783.
DeFillips, N.A., & McCampbell, E. (1997). Booklet category test (2nd ed.). Odessa, FL: Psychological Assessment Resources.Google Scholar
Dekker, M. C., Ziermans, T. B., Spruijt, A. M., & Swaab, H. (2017). Cognitive, parent, and teacher rating measures of executive functioning: Shared and unique influences on school achievement. Frontiers in Psychology, 8. doi:103389/fpsyg.2017.00048Google Scholar
Delis, D. C., Kaplan, E., & Kramer, J. L. (2001). The Delis-Kaplan executive function system examiner’s manual. San Antonio, TX: Psychological Corporation.Google Scholar
Delis, D. C, Kramer, J. H., Kaplan, E., & Ober, B. A. (2017) California verbal learning test (3rd ed.). San Antonio, TX: Psychological Corporation.Google Scholar
Donnell, A., Belanger, H., & Vanderploeg, R. (2011). Implications of psychometric measurement for neuropsychological interpretation. The Clinical Neuropsychologist, 25, 10971118.Google Scholar
Elbulok-Chacape, M. M., Rabin, L. A., Spadaccini, A. T., & Barr, W. B. (2014). Trends in the neuropsychological assessment of ethnic/racial minorities: A survey of clinical neuropsychologists in the US and Canada. Cultural Diversity and Ethnic Minority Psychology, 20, 353361.Google Scholar
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12, 189–198.
Gershon, R. C., Wagster, M. V., Hendrie, H. C., Fox, N. A., Cook, K. F., & Nowinski, C. J. (2013). NIH Toolbox for assessment of neurological and behavioral function. Neurology, 80, S2–S6.
Golden, C. J., & Freshwater, S. M. (2002). Stroop Color and Word Test: Revised examiner’s manual. Wood Dale, IL: Stoelting Co.
Goodglass, H., Kaplan, E., & Barresi, B. (2001). Boston Diagnostic Aphasia Examination (3rd ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Haaland, K. Y., & Delaney, H. D. (1981). Motor deficits after left or right hemisphere damage due to stroke or tumor. Neuropsychologia, 19, 17–27.
Hannay, H. J., Bieliauskas, L. A., Crosson, B. A., Hammeke, T. A., Hamsher, K. deS., & Koffler, S. (1998). Proceedings of the Houston Conference on specialty education and training in clinical neuropsychology. Archives of Clinical Neuropsychology, 13, 157–250.
Heaton, R. K., Chelune, G. J., Talley, J. L., Kay, G. G., & Curtis, G. (1993). Wisconsin Card Sorting Test (WCST) manual, revised and expanded. Odessa, FL: Psychological Assessment Resources.
Heaton, R. K., Miller, S. W., Taylor, M. J., & Grant, I. (2004). Revised comprehensive norms for an expanded Halstead-Reitan battery: Demographically adjusted neuropsychological norms for African-American and Caucasian adults. Lutz, FL: Psychological Assessment Resources.
Heilbronner, R. L., Sweet, J. J., Attix, D. K., Krull, K. R., Henry, G. K., & Hart, R. P. (2010). Official position of the American Academy of Clinical Neuropsychology on serial neuropsychological assessments: The utility and challenges of repeat test administrations in clinical and forensic contexts. The Clinical Neuropsychologist, 24, 1267–1278.
Hilsabeck, R. C. (2017). Psychometrics and statistics: Two pillars of neuropsychological practice. The Clinical Neuropsychologist, 31, 995–999.
Horwitz, J. E., & McCaffrey, R. J. (2008). Effects of a third party observer and anxiety on tests of executive function. Archives of Clinical Neuropsychology, 23, 409–417.
Huizenga, H. M., Agelink van Rentergem, J. A., Grasman, R. P. P. P., Muslimovic, D., & Schmand, B. (2016). Normative comparisons for large neuropsychological test batteries: User-friendly and sensitive solutions to minimize familywise false positives. Journal of Clinical and Experimental Neuropsychology, 38, 611–629.
Ivnik, R. J., Malec, J. F., Smith, G. E., Tangalos, E. G., & Petersen, R. C. (1996). Neuropsychological test norms above age 55: COWAT, BNT, MAE Token, WRAT-R Reading, AMNART, Stroop, TMT, and JLO. The Clinical Neuropsychologist, 10, 262–278.
Jagaroo, V., & Santangelo, S. L. (Eds.). (2016). Neurophenotypes: Advancing psychiatry and neuropsychology in the “OMICS” era. New York: Springer.
Kaplan, E., Goodglass, H., & Weintraub, S. (2001). The Boston naming test-II. Philadelphia: Lea & Febiger.
Kaufman, P. M. (2009). Protecting raw data and psychological tests from wrongful disclosure: A primer on the law and other persuasive strategies. The Clinical Neuropsychologist, 23, 1130–1159.
Kløve, H. (1963). Clinical neuropsychology. In Forster, F. M. (Ed.), The medical clinics of North America. New York: Saunders.
Kongs, S. K., Thompson, L. L., Iverson, G. L., & Heaton, R. K. (2000). Wisconsin Card Sorting Test-64 Card Version. Lutz, FL: Psychological Assessment Resources.
Kramer, J. H., Mungas, D., Possin, K. L., Rankin, K. P., Boxer, A. L., … Widmeyer, M. (2014). NIH EXAMINER: Conceptualization and development of an executive function battery. Journal of the International Neuropsychological Society, 20, 11–19.
Lange, R. T., & Lippa, S. M. (2017). Sensitivity and specificity should never be interpreted in isolation without consideration of other clinical utility metrics. The Clinical Neuropsychologist, 31, 1015–1028.
Lezak, M. D., Howieson, D. B., Bigler, E. D., & Tranel, D. (2012). Neuropsychological assessment (5th ed.). New York: Oxford University Press.
Lucas, J. A., Ivnik, R. J., Smith, G. E., Ferman, T. J., Willis, F. B., Peterson, R. C., & Graff-Radford, N. R. (2005). Mayo’s Older African Americans Normative Studies: Norms for Boston Naming Test, Controlled Oral Word Association, Category Fluency, Animal Naming, Token Test, WRAT-3 Reading, Trail Making Test, Stroop Test, and Judgment of Line Orientation. The Clinical Neuropsychologist, 19, 243–269.
Lucas, J. A., Ivnik, R. J., Willis, F. B., Ferman, T. J., Smith, G. E., Parfit, et al. (2005). Mayo’s older African Americans normative studies: Normative data for commonly used neuropsychological tests. The Clinical Neuropsychologist, 19, 162–183.
Manly, J. J., Jacobs, D. M., Sano, M., Bell, K., Merchant, C. A., Small, S. A., & Stern, Y. (1998). Cognitive test performance among nondemented elderly African Americans and Whites. Neurology, 50, 1238–1245.
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29, 741–776.
Messerli, P., Seron, X., & Tissot, R. (1979). Quelques aspects des troubles de la programmation dans le syndrome frontal [Some aspects of programming disorders in the frontal syndrome]. Archives Suisses de Neurologie, Neurochirurgie et Psychiatrie, 125, 23–35.
Meyers, J. E. (2017). Fixed battery. In Kreutzer, J., DeLuca, J., & Caplan, B. (Eds.), Encyclopedia of clinical neuropsychology (pp. 1–2). New York: Springer.
Meyers, J. E., & Meyers, K. (1995). The Meyers scoring system for the Rey complex figure and the recognition trial: Professional manual. Odessa, FL: Psychological Assessment Resources.
Mitrushina, M. N., Boone, K. B., Razani, J., & d’Elia, L. F. (2005). Handbook of normative data for neuropsychological assessment (2nd ed.). New York: Oxford University Press.
Miyake, A., Friedman, N. P., Emerson, M. J., Witzki, A. H., & Howerter, A. (2000). The unity and diversity of executive functions and their contributions to complex ‘frontal lobe’ tasks: A latent variable analysis. Cognitive Psychology, 41, 49–100.
Moering, R. G., Schinka, J. A., Mortimer, J. A., & Graves, A. B. (2004). Normative data for elderly African Americans for the Stroop Color and Word Test. Archives of Clinical Neuropsychology, 19, 61–71.
National Academy of Neuropsychology. (2003). Test security: An update. Official statement of the National Academy of Neuropsychology. Author. https://www.nanonline.org/docs/PAIC/PDFs/NANTestSecurityUpdate.pdf
Nuechterlein, K. H., Green, M. F., Kern, R. S., Baade, L. E., Barch, D. M., Cohen, J. D. et al. (2008). The MATRICS Consensus Cognitive Battery, part 1: Test selection, reliability, and validity. American Journal of Psychiatry, 165, 203–213.
Osterrieth, P. A. (1944). Le test de copie d’une figure complexe. Archives de Psychologie, 30, 206–356 [trans. J. Corwin and F. W. Bylsma (1993), The Clinical Neuropsychologist, 7, 9–15].
Otto, R. K., & Krauss, D. A. (2009). Contemplating the presence of third party observers and facilitators in psychological evaluations. Assessment, 16, 362–372.
Parsey, C. M., & Schmitter-Edgecombe, M. (2013). Applications of technology in neuropsychological assessment. The Clinical Neuropsychologist, 27, 1328–1361.
Parsons, T. D. (2015). Virtual reality for enhanced ecological validity and experimental control in the clinical, affective, and social neurosciences. Frontiers in Human Neuroscience, 9, 660.
Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsychologists in the United States and Canada over a 10-year period: A follow-up survey of INS and NAN members. Archives of Clinical Neuropsychology, 31, 206–230.
Rabin, L. A., Spadaccini, A. T., Brodale, D. L., Grant, K. S., Elbulok-Charcape, M. M., & Barr, W. B. (2014). Utilization rates of computerized tests and test batteries among clinical neuropsychologists in the United States and Canada. Professional Psychology: Research and Practice, 45, 368–377.
Reeves, D. L., Winter, K. P., Bleiberg, J., & Kane, R. L. (2007). ANAM Genogram: Historical perspectives, description, and current endeavors. Archives of Clinical Neuropsychology, 22, S15–S37.
Reitan, R. M. (1955). The relation of the Trail Making Test to organic brain damage. Journal of Consulting Psychology, 19, 393–394.
Reitan, R. M., & Wolfson, D. (1985). The Halstead-Reitan neuropsychological test battery: Theory and clinical interpretation. Tucson, AZ: Neuropsychology Press.
Reitan, R. M., & Wolfson, D. (1993). The Halstead-Reitan Neuropsychological Test Battery: Theory and clinical interpretation (2nd ed.). Tucson, AZ: Neuropsychology Press.
Rhodes, M. G. (2004). Age-related differences in performance on the Wisconsin Card Sorting Test: A meta-analytic review. Psychology and Aging, 19, 482–494.
Romero, H. R., Lageman, S. K., Kamath, V., Irani, F., Sim, A., Suarez, P. et al. (2009). Challenges in the neuropsychological assessment of ethnic minorities: Summit proceedings. The Clinical Neuropsychologist, 23, 761–779.
Sawrie, S. M., Chelune, G. J., Naugle, R. I., & Luders, H. O. (1996). Empirical methods for assessing meaningful change following epilepsy surgery. Journal of the International Neuropsychological Society, 2, 556–564.
SCN (Society for Clinical Neuropsychology Division 40 of the American Psychological Association). (2015). About the Society for Clinical Neuropsychology. www.scn40.org/about-scn.html
Silverberg, N. D., & Millis, S. R. (2009). Impairment versus deficiency in neuropsychological assessment: Implications for ecological validity. Journal of the International Neuropsychological Society, 15, 94–102.
Smith, G. E., Ivnik, R. J., & Lucas, J. (2008). Assessment techniques: Tests, test batteries, norms, and methodological approaches. In Morgan, J. E. & Ricker, J. H. (Eds.), Textbook of Clinical Neuropsychology (pp. 38–57). New York: Taylor & Francis.
Steinberg, B. A., Bieliauskas, L. A., Smith, G. E., & Ivnik, R. J. (2005). Mayo Older Americans Normative Studies: Age- and IQ-adjusted norms for the Trail-Making Test, the Stroop Test, and MAE Controlled Oral Word Association Test. The Clinical Neuropsychologist, 19, 329–377.
Storms, G., Saerens, J., & De Deyn, P. P. (2004). Normative data for the Boston Naming Test in native Dutch-speaking Belgian children and the relation with intelligence. Brain and Language, 91, 274–281.
Strauss, E., Sherman, E. M. S., & Spreen, O. (2006). A compendium of neuropsychological tests (3rd ed.). New York: Oxford University Press.
Suhr, J. A. (2015). Empirically based assessment: A problem-solving approach. New York: Guilford Press.
Sweet, J. J., & Breting, L. G. (2012). Affidavit regarding opposition to the presence of third party observers and recording of neuropsychological and psychological assessments performed in the state of Illinois. https://theaacn.org/wp-content/uploads/2015/10/third-party-observer-affidavit-completed-2013-all-abcn-illinois-practicing-psychologists.pdf
Sweet, J. J., Benson, L. M., Nelson, N. W., & Moberg, P. J. (2015). The American Academy of Clinical Neuropsychology, National Academy of Neuropsychology, and Society for Clinical Neuropsychology (APA, Division 40) 2014 TCN professional practice and ‘salary survey’: Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical Neuropsychologist, 29, 1069–1162.
Tombaugh, T. N., & McIntyre, N. J. (1992). The Mini-Mental State Examination: A comprehensive review. Journal of the American Geriatrics Society, 40, 922–935.
Wechsler, D. (2008). Wechsler Memory Scale – Fourth Edition (WMS-IV): Technical and interpretive manual. San Antonio, TX: Pearson.
Yantz, C. L., & McCaffrey, R. J. (2005). Effects of a supervisor’s observation on memory test performance of the examinee: Third party observer effect confirmed. Journal of Forensic Neuropsychology, 4, 27–38.

References

Adler, T. (1990). Does the “new” MMPI beat the “classic”? APA Monitor, April, pp. 18–19.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: American Psychiatric Association.
Anderson, J. L., Sellbom, M., Ayearst, L., Quilty, L. C., Chmielewski, M., & Bagby, R. M. (2015). Associations between DSM-5 Section III personality traits and the Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scales in a psychiatric patient sample. Psychological Assessment, 27, 811–815.
Anderson, J. L., Sellbom, M., Bagby, R. M., Quilty, L. C., Veltri, C. O. C., Markon, K. E., & Krueger, R. F. (2013). On the convergence between PSY-5 domains and PID-5 domains and facets: Implications for assessment of DSM-5 personality traits. Assessment, 20, 286–294.
Anderson, J. L., Sellbom, M., Pymont, C., Smid, W., De Saeger, H., & Kamphuis, J. H. (2015). Measurement of DSM-5 Section II personality disorder constructs using the MMPI-2-RF in clinical and forensic samples. Psychological Assessment, 27, 786–800.
Anestis, J. C., Finn, J. A., Gottfried, E. D., Arbisi, P. A., & Joiner, T. E. (2015). Reading the road signs: The utility of the MMPI-2 Restructured Form Validity Scales in prediction of premature termination. Assessment, 22, 279–288.
Anestis, J. C., Gottfried, E. D., & Joiner, T. E. (2015). The utility of MMPI-2-RF substantive scales in prediction of negative treatment outcomes in a community mental health center. Assessment, 22, 23–35.
Arbisi, P. A., Polusny, M. A., Erbes, C. R., Thuras, P., & Reddy, M. K. (2011). The Minnesota Multiphasic Personality Inventory 2 Restructured Form in National Guard soldiers screening positive for Posttraumatic Stress Disorder and mild traumatic brain injury. Psychological Assessment, 23, 203–214.
Archer, R. P., Maruish, M., Imhof, E. A., & Piotrowski, C. (1991). Psychological test usage with adolescent clients: 1990 survey findings. Professional Psychology: Research and Practice, 22, 247–252.
Archer, E. M., Hagan, L. D., Mason, J., Handel, R. W., & Archer, R. P. (2012). MMPI-2-RF characteristics of custody evaluation litigants. Assessment, 19, 14–20.
Archer, R. P., Handel, R. W., Ben-Porath, Y. S., & Tellegen, A. (2016). Minnesota Multiphasic Personality Inventory – Adolescent Restructured Form (MMPI-A-RF): Administration, scoring, interpretation, and technical manual. Minneapolis: University of Minnesota Press.
Avdeyeva, T. V., Tellegen, A., & Ben-Porath, Y. S. (2011). Empirical correlates of low scores on MMPI-2/MMPI-2-RF Restructured Clinical Scales in a sample of university students. Assessment, 19, 388–393.
Ayearst, L. E., Sellbom, M., Trobst, K. K., & Bagby, R. M. (2013). Evaluating the interpersonal content of the MMPI-2-RF Interpersonal Scales. Journal of Personality Assessment, 95, 187–196.
Ben-Porath, Y. S. (2012). Interpreting the MMPI-2-RF. Minneapolis: University of Minnesota Press.
Ben-Porath, Y. S., & Forbey, J. D. (2003). Non-gendered norms for the MMPI-2. Minneapolis: University of Minnesota Press.
Ben-Porath, Y. S., & Tellegen, A. (2008/2011). Minnesota Multiphasic Personality Inventory-2- Restructured Form (MMPI-2-RF): Manual for administration, scoring, and interpretation. Minneapolis: University of Minnesota Press.
Block, A. R., & Ben-Porath, Y. S. (2018). MMPI-2-RF (Minnesota Multiphasic Personality Inventory-2 Restructured Form): User’s guide for the Spine Surgery Candidate and Spinal Cord Stimulator Candidate Interpretive Reports. Minneapolis: University of Minnesota Press.
Block, A. R., Ben-Porath, Y. S., & Marek, R. J. (2013). Psychological risk factors for poor outcome of spine surgery and spinal cord stimulator implant: A review of the literature and their assessment with the MMPI-2-RF. The Clinical Neuropsychologist, 27, 81–107.
Block, A. R., Marek, R. J., Ben-Porath, Y. S., & Ohnmeiss, D. D. (2014). Associations between MMPI-2-RF scores, workers’ compensation status, and spine surgery outcome. Journal of Applied Biobehavioral Research, 19, 248–267.
Brown, T. A., & Sellbom, M. (in press). The utility of the MMPI-2-RF validity scales in detecting underreporting. Journal of Personality Assessment.
Butcher, J. N. (1972). Objective personality assessment: Changing perspectives. New York: Academic Press.
Butcher, J. N., Dahlstrom, W. G., Graham, J. R., Tellegen, A., & Kaemmer, B. (1989). Minnesota Multiphasic Personality Inventory-2 (MMPI-2): Manual for administration and scoring. Minneapolis: University of Minnesota Press.
Butcher, J. N., Graham, J. R., Ben-Porath, Y. S., Tellegen, A., Dahlstrom, W. G., & Kaemmer, B. (2001). MMPI-2: Manual for administration and scoring (rev. ed.). Minneapolis: University of Minnesota Press.
Butcher, J. N., Graham, J. R., Williams, C. L., & Ben-Porath, Y. S. (1990). Development and use of the MMPI-2 Content Scales. Minneapolis: University of Minnesota Press.
Butcher, J. N., Williams, C. L., Graham, J. R., Archer, R. P., Tellegen, A., Ben-Porath, Y. S., & Kaemmer, B. (1992). Minnesota Multiphasic Personality Inventory (MMPI-A): Manual for administration, scoring and interpretation. Minneapolis: University of Minnesota Press.
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in professional psychology. Professional Psychology: Research and Practice, 31, 141–154.
Capwell, D. F. (1945). Personality patterns of adolescent girls. II. Delinquents and non-delinquents. Journal of Applied Psychology, 29, 284–297.
Choi, J. Y. (2017). Posttraumatic stress symptoms and dissociation between childhood trauma and two different types of psychosis-like experience. Child Abuse and Neglect, 72, 404–410.
Corey, D. M., & Ben-Porath, Y. S. (2014). MMPI-2-RF (Minnesota Multiphasic Personality Inventory-2-Restructured Form): User’s guide for the Police Candidate Interpretive Report. Minneapolis, MN: University of Minnesota Press.
Corey, D. M., & Ben-Porath, Y. S. (2018). Assessing police and other public safety personnel using the MMPI-2-RF. Minneapolis, MN: University of Minnesota Press.
Corey, D. M., Sellbom, M., & Ben-Porath, Y. S. (2018). Risks associated with overcontrolled behavior in police officer recruits. Psychological Assessment, 30, 1691–1702.
Crighton, A. H., Tarescavage, A. M., Gervais, R. O., & Ben-Porath, Y. S. (2017). The generalizability of over-reporting across a battery of self-report measures: An investigation with the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) and the Personality Assessment Inventory (PAI) in a civil disability sample. Assessment, 24, 555–574.
Dahlstrom, W. G., Welsh, G. S., & Dahlstrom, L. E. (1972). An MMPI handbook, Vol. 1: Clinical interpretation (rev. ed.). Minneapolis: University of Minnesota Press.
Detrick, P., & Chibnall, J. T. (2014). Underreporting on the MMPI-2-RF in a high demand police officer selection context: An illustration. Psychological Assessment, 26, 1044–1049.
Forbey, J. D., & Ben-Porath, Y. S. (1998). A critical item set for the MMPI-A. Minneapolis: University of Minnesota Press.
Gilberstadt, H., & Duker, J. (1965). A handbook for clinical and actuarial MMPI interpretation. Philadelphia: Saunders.
Glassmire, D. M., Jhawar, A., Burchett, D., & Tarescavage, A. M. (2016). Evaluating item endorsement rates for the MMPI-2-RF F-r and Fp-r scales across ethnic, gender, and diagnostic groups with a forensic inpatient unit. Psychological Assessment, 29, 500–508.
Grossi, L. M., Green, D., Belfi, B., McGrath, R. E., Griswald, H., & Schreiber, J. (2015). Identifying aggression in forensic inpatients using the MMPI-2-RF: An examination of MMPI-2-RF scale scores and estimated psychopathy indices. International Journal of Forensic Mental Health, 14, 231–244.
Handel, R. W., Ben-Porath, Y. S., Tellegen, A., & Archer, R. P. (2010). Psychometric functioning of the MMPI-2-RF VRIN-r and TRIN-r scales with varying degrees of randomness, acquiescence, and counter-acquiescence. Psychological Assessment, 22, 87–95.
Harkness, A. R., Finn, J. A., McNulty, J. L., & Shields, S. M. (2012). The Personality Psychopathology Five (PSY–5): Recent constructive replication and assessment literature review. Psychological Assessment, 24, 432–443.
Harkness, A. R., & McNulty, J. L. (1994). The Personality Psychopathology Five (PSY-5): Issues from the pages of a diagnostic manual instead of a dictionary. In Strack, S. & Lorr, M. (Eds.), Differentiating normal and abnormal personality (pp. 291–315). New York: Springer.
Harkness, A. R., McNulty, J. L., & Ben-Porath, Y. S. (1995). The Personality Psychopathology Five (PSY-5): Constructs and MMPI-2 scales. Psychological Assessment, 7, 104–114.
Hathaway, S. R. (1960). Foreword. In Dahlstrom, W. G. & Welsh, G. S. (Eds.), An MMPI handbook: A guide to use in clinical practice and research (pp. vii–xi). Minneapolis: University of Minnesota Press.
Hathaway, S. R. (1972). Where have we gone wrong? The mystery of the missing progress. In Butcher, J. N. (Ed.), Objective personality assessment: Changing perspectives (pp. 21–43). Oxford: Academic Press.
Hathaway, S. R., & McKinley, J. C. (1940). A multiphasic personality schedule (Minnesota): I. Construction of the schedule. Journal of Psychology, 10, 249–254.
Hathaway, S. R., & McKinley, J. C. (1942). A multiphasic personality schedule (Minnesota): III. The measurement of symptomatic depression. Journal of Psychology, 14, 73–84.
Hathaway, S. R., & McKinley, J. C. (1943). The Minnesota Multiphasic Personality Inventory. Minneapolis: University of Minnesota Press.
Hathaway, S. R., & Monachesi, E. D. (1951). The prediction of juvenile delinquency using the Minnesota Multiphasic Personality Inventory. American Journal of Psychiatry, 108, 469–473.
Hoelzle, J. B., & Meyer, G. J. (2008). The factor structure of the MMPI-2 Restructured Clinical (RC) Scales. Journal of Personality Assessment, 90, 443–455.
Ingram, P. B., & Ternes, M. S. (2016). The detection of content-based invalid responding: A meta-analysis of the MMPI-2-Restructured Form’s (MMPI-2-RF) over-reporting validity scales. The Clinical Neuropsychologist, 30, 473–496.
Kauffman, C. M., Stolberg, R., & Madero, J. (2015). An examination of the MMPI-2-RF (Restructured Form) with the MMPI-2 and MCMI-III in child custody litigants. Journal of Child Custody, 12, 129–151.
Kim, S., Goodman, G. M., Toruno, J. A., Sherry, A. R., & Kim, H. K. (2015). The cross-cultural validity of the MMPI-2-RF Higher-Order scales in a sample of North Korean female refugees. Assessment, 22, 640–649.
Koffel, E., Kramer, M. D., Arbisi, P. A., Erbes, C. R., Kaler, M., & Polusny, M. A. (2016). Personality traits and combat exposure as predictors of psychopathology over time. Psychological Medicine, 46, 209–220.
Kotov, R., Krueger, R. F., Watson, D., Achenbach, T. M., Althoff, R. R., Bagby, R. M., et al. (2017). The Hierarchical Taxonomy of Psychopathology (HiTOP): A dimensional alternative to traditional nosologies. Journal of Abnormal Psychology, 126, 454–477.
Laurinaityte, I., Laurinavicius, A., Ustinaviciute, L., Wygant, D. B., & Sellbom, M. (2017). Utility of the MMPI-2 Restructured Form (MMPI-2-RF) in a sample of Lithuanian male offenders. Law and Human Behavior, 41, 494–505.
Lee, T. T. C., Graham, J. R., & Arbisi, P. A. (2018). The utility of MMPI-2-RF scale scores in differential diagnosis of Schizophrenia and Major Depressive Disorder. Journal of Personality Assessment, 100, 305–312.
Lee, T. T. C., Sellbom, M., & Hopwood, C. J. (2017). Contemporary psychopathology assessment: Mapping major personality inventories onto empirical models of psychopathology. In Bowden, S. C. (Ed.), Neuropsychological assessment in the age of evidence-based practice: Diagnostic and treatment evaluations (pp. 64–94). New York: Oxford University Press.
Locke, D. E. C., Kirlin, K. A., Thomas, M. L., Osborne, D., Hurst, D. F., Drazkowski, J. F., et al. (2010). The Minnesota Multiphasic Personality Inventory–Restructured Form in the epilepsy monitoring unit. Epilepsy and Behavior, 17, 252–258.
Locke, D. E. C., Kirlin, K. A., Wershaba, R., Osborne, D., Drazkowski, J. F., Sirven, J. I., & Noe, K. H. (2011). Randomized comparison of the Personality Assessment Inventory and the Minnesota Multiphasic Personality Inventory-2, in the epilepsy monitoring unit. Epilepsy and Behavior, 21, 397–401.
Marek, R. J., Ben-Porath, Y. S., Epker, J. T., Kreymar, J. K., & Block, A. R. (2018). Reliability and validity of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in spine surgery and spinal cord stimulator samples. Journal of Personality Assessment.
Marek, R. J., Ben-Porath, Y. S., Merrell, J., Ashton, K., & Heinberg, L. J. (2014). Predicting one and three month post-operative somatic concerns, psychological distress, and maladaptive eating behavior in bariatric surgery candidates with the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF). Obesity Surgery, 24, 631–639.
Marek, R. J., Ben-Porath, Y. S., Sellbom, M., McNulty, J. L., & Heinberg, L. J. (2015). Validity of Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) scores as a function of gender, ethnicity, and age of bariatric surgery candidates. Surgery for Obesity and Related Diseases, 11, 627–636.
Marek, R. J., Ben-Porath, Y. S., Van Dulmen, M., Ashton, K., & Heinberg, L. J. (2017). Using the pre-surgical psychological evaluation to predict 5-year weight-loss outcomes in bariatric surgery patients. Surgery for Obesity and Related Diseases, 13, 514–521.
Marek, R. J., Ben-Porath, Y. S., Windover, A. K., Tarescavage, A. M., Merrell, J., Ashton, K., Lavery, M., & Heinberg, L. J. (2013). Assessing psychosocial functioning of bariatric surgery candidates with the Minnesota Multiphasic Personality Inventory-2-Restructured Form. Obesity Surgery, 23, 1864–1873.
Marek, R. J., Block, A. R., & Ben-Porath, Y. S. (2015). The Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF): Incremental validity in predicting early post-operative outcomes in spine surgery candidates. Psychological Assessment, 27, 114–124.
Marek, R. J., Tarescavage, A. M., Ben-Porath, Y. S., Ashton, K., Heinberg, L. J., & Rish, J. M. (2017). Associations between psychological test results and failure to proceed with bariatric surgery. Surgery for Obesity and Related Diseases, 13, 507–513.
Marek, R. J., Tarescavage, A. M., Ben-Porath, Y. S., Ashton, K., Rish, J. M., & Heinberg, L. J. (2015). Using presurgical psychological testing to predict 1-year appointment adherence and weight loss in bariatric surgery candidates: Predictive validity and methodological considerations. Surgery for Obesity and Related Diseases, 11, 1171–1181.
Marks, P. A., & Seeman, W. (1963). The actuarial description of abnormal personality: An atlas for use with the MMPI. Baltimore, MD: Williams & Wilkins.
Martinez, U., Fernandez del Rio, E., Lopez-Duran, A., & Becona, E. (2017). The utility of the MMPI-2-RF to predict the outcome of a smoking cessation treatment. Personality and Individual Differences, 106, 172–177.
Martinez, U., Fernandez del Rio, E., Lopez-Duran, A., Martinez-Vispo, C., & Becona, E. (2018). Types of smokers who seek smoking cessation treatment according to psychopathology. Journal of Dual Diagnosis, 14, 50–59.
McNulty, J. L., & Overstreet, S. R. (2014). Viewing the MMPI-2-RF structure through the Personality Psychopathology Five (PSY-5) lens. Journal of Personality Assessment, 96, 151–157.
Meehl, P. E. (1945). The dynamics of “structured” personality tests. Journal of Clinical Psychology, 1, 296–303.
Meehl, P. E. (1956). Wanted – a good cook-book. American Psychologist, 11(6), 263–272.
Meehl, P. E. (1972). Reactions, reflections, projections. In Butcher, J. N. (Ed.), Objective personality assessment: Changing perspectives (pp. 131–189). Oxford: Academic Press.
Meyers, J. E., Miller, R. M., & Tuita, A. R. R. (2014). Using pattern analysis matching to differentiate TBI and PTSD in a military sample. Applied Neuropsychology: Adult, 21, 60–68.
Mihura, J. L., Roy, M., & Graceffo, R. A. (2017). Psychological assessment training in clinical psychology doctoral programs. Journal of Personality Assessment, 99, 153–164.
Moultrie, J. K., & Engel, R. R. (2017). Empirical correlates for the Minnesota Multiphasic Personality Inventory-2-Restructured Form in a German inpatient sample. Psychological Assessment, 29, 1273–1289.
Myers, L., Lancman, M., Laban-Grant, O., Matzner, B., & Lancman, M. (2012). Psychogenic non-epileptic seizures: Predisposing factors to diminished quality of life. Epilepsy and Behavior, 25, 358–362.
Myers, L., Matzner, B., Lancman, M., Perrine, K., & Lancman, M. (2013). Prevalence of alexithymia in patients with psychogenic non-epileptic seizures and epileptic seizures and predictors in psychogenic non-epileptic seizures. Epilepsy and Behavior, 26, 153–157.
Neal, T. M., & Grisso, T. (2014). Assessment practices and expert judgment methods in forensic psychology and psychiatry: An international snapshot. Criminal Justice and Behavior, 41, 1406–1421.
Pinsoneault, T. B., & Ezzo, F. R. (2012). A comparison of MMPI-2-RF profiles between child maltreatment and non-maltreatment custody cases. Journal of Forensic Psychology Practice, 12, 227–237.
Resendes, J., & Lecci, L. (2012). Comparing the MMPI-2 scale scores of parents involved in parental competency and child custody assessments. Psychological Assessment, 24, 1054–1059.
Sellbom, M. (2016). Elucidating the validity of the externalizing spectrum of psychopathology in correctional, forensic, and community samples. Journal of Abnormal Psychology, 125, 1027–1038.
Sellbom, M. (2017a). Mapping the MMPI-2-RF Specific Problems scales onto extant psychopathology structures. Journal of Personality Assessment, 99, 341–350.
Sellbom, M. (2017b). Using the MMPI-2-RF to characterize defendants evaluated for competency to stand trial and criminal responsibility. International Journal of Forensic Mental Health, 16, 304–312.
Sellbom, M. (2019). The MMPI-2-Restructured Form (MMPI-2-RF): Assessment of personality and psychopathology in the twenty-first century. Annual Review of Clinical Psychology, 15, 149–177.
Sellbom, M., Anderson, J. L., & Bagby, R. M. (2013). Assessing DSM-5 Section III personality traits and disorders with the MMPI-2-RF. Assessment, 20, 709–722.
Sellbom, M., & Bagby, R. M. (2008). Validity of the MMPI-2-RF (Restructured Form) L-r and K-r scales in detecting underreporting in clinical and nonclinical samples. Psychological Assessment, 20, 370–376.
Sellbom, M., Bagby, R. M., Kushner, S., Quilty, L. C., & Ayearst, L. E. (2012). Diagnostic construct validity of MMPI-2 Restructured Form (MMPI-2-RF) scale scores. Assessment, 19(2), 176–186.
Sellbom, M., & Ben-Porath, Y. S. (2005). Mapping the MMPI-2 Restructured Clinical Scales onto normal personality traits: Evidence of construct validity. Journal of Personality Assessment, 85, 179–187.
Sellbom, M., Ben-Porath, Y. S., & Bagby, R. M. (2008a). On the hierarchical structure of mood and anxiety disorders: Confirmatory evidence and elaboration of a model of temperament markers. Journal of Abnormal Psychology, 117, 576–590.
Sellbom, M., Ben-Porath, Y. S., & Bagby, R. M. (2008b). Personality and psychopathology: Mapping the MMPI-2 Restructured Clinical (RC) Scales onto the Five Factor Model of personality. Journal of Personality Disorders, 22, 291–312.
Sellbom, M., Ben-Porath, Y. S., Baum, L. J., Erez, E., & Gregory, C. (2008). Predictive validity of the MMPI-2 Restructured Clinical (RC) Scales in a batterers’ intervention program. Journal of Personality Assessment, 90, 129–135.
Sellbom, M., Fischler, G. L., & Ben-Porath, Y. S. (2007). Identifying MMPI-2 predictors of police officer integrity and misconduct. Criminal Justice and Behavior, 34, 985–1004.
Sellbom, M., Lee, T. T. C., Ben-Porath, Y. S., Arbisi, P. A., & Gervais, R. O. (2012). Differentiating PTSD symptomatology with the MMPI-2-RF (Restructured Form) in a forensic disability sample. Psychiatry Research, 197, 172–179.
Sellbom, M., Smid, W., De Saeger, H., Smit, N., & Kamphuis, J. H. (2014). Mapping the Personality Psychopathology Five domains onto DSM-IV personality disorders in Dutch clinical and forensic samples: Implications for the DSM-5. Journal of Personality Assessment, 96, 185–191.
Sellbom, M., & Smith, A. (2017). Assessment of DSM-5 Section II personality disorders with the MMPI-2-RF in a nonclinical sample. Journal of Personality Assessment, 99, 384–397.
Sharf, A. J., Rogers, R., Williams, M. M., & Henry, S. A. (2017). The effectiveness of the MMPI-2-RF in detecting feigned mental disorders and cognitive deficits: A meta-analysis. Journal of Psychopathology and Behavioral Assessment, 39, 441–455.
Tarescavage, A. M., & Ben-Porath, Y. S. (2017). Examination of the feasibility and utility of Flexible and Conditional Administration (FCA) of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Psychological Assessment, 29, 1337–1348.
Tarescavage, A. M., Brewster, J., Corey, D. M., & Ben-Porath, Y. S. (2015). Use of pre-hire MMPI-2-RF police candidate scores to predict supervisor ratings of post-hire performance. Assessment, 22, 411–428.
Tarescavage, A. M., Cappo, B. M., & Ben-Porath, Y. S. (2018). Assessment of sex offenders with the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Sexual Abuse: A Journal of Research and Treatment, 30, 413–437.
Tarescavage, A. M., Corey, D. M., & Ben-Porath, Y. S. (2015). Minnesota Multiphasic Personality Inventory Restructured Form (MMPI-2-RF) predictors of police officer problem behavior. Assessment, 22, 116–132.
Tarescavage, A. M., Corey, D. M., & Ben-Porath, Y. S. (2016). A prorating method for estimating MMPI-2-RF scores from MMPI responses: Examination of score fidelity and illustration of empirical utility in the PERSEREC Police Integrity Study sample. Assessment, 23, 173–190.
Tarescavage, A. M., Corey, D. M., Gupton, H. M., & Ben-Porath, Y. S. (2015). Criterion validity and clinical utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in assessments of police officer candidates. Journal of Personality Assessment, 97, 382–394.
Tarescavage, A. M., Finn, J. A., Marek, R. J., Ben-Porath, Y. S., & van Dulmen, M. H. M. (2015). Premature termination from psychotherapy and internalizing psychopathology: The role of demoralization. Journal of Affective Disorders, 174, 549–555.
Tarescavage, A. M., Fischler, G. L., Cappo, B., Hill, D. O., Corey, D. M., & Ben-Porath, Y. S. (2015). Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) predictors of police officer problem behavior and collateral self-report test scores. Psychological Assessment, 27, 125–137.
Tarescavage, A. M., Glassmire, D. M., & Burchett, D. (2016). Introduction of a conceptual model for integrating the MMPI-2-RF into HCR-20V3 violence risk assessments and associations between the MMPI-2-RF and institutional violence. Law and Human Behavior, 40, 626–637.
Tarescavage, A. M., Luna-Jones, L., & Ben-Porath, Y. S. (2014). Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) predictors of violating probation after felonious crimes. Psychological Assessment, 26, 1375–1380.
Tarescavage, A. M., Scheman, J., & Ben-Porath, Y. S. (2015). Reliability and validity of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in evaluations of chronic low back pain patients. Psychological Assessment, 27, 433–446.
Tarescavage, A. M., Scheman, J., & Ben-Porath, Y. S. (2018). Prospective comparison of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and MMPI-2 Restructured Form (MMPI-2-RF) in predicting treatment outcomes among patients with chronic low back pain. Journal of Clinical Psychology in Medical Settings, 25, 66–79.
Tellegen, A. (1985). Structures of mood and personality and their relevance to assessing anxiety, with an emphasis on self-report. In Tuma, A. H., & Maser, J. D. (Eds.), Anxiety and the anxiety disorders (pp. 681–706). Hillsdale, NJ: Lawrence Erlbaum Associates.
Tellegen, A., & Ben-Porath, Y. S. (2008/2011). MMPI-2-RF (Minnesota Multiphasic Personality Inventory-2-Restructured Form): Technical manual. Minneapolis: University of Minnesota Press.
Tellegen, A., Ben-Porath, Y. S., McNulty, J. L., Arbisi, P. A., Graham, J. R., & Kaemmer, B. (2003). MMPI-2 Restructured Clinical (RC) Scales: Development, validation, and interpretation. Minneapolis: University of Minnesota Press.
Tellegen, A., Ben-Porath, Y. S., & Sellbom, M. (2009). Construct validity of the MMPI-2 Restructured Clinical (RC) Scales: Reply to Rouse, Greene, Butcher, Nichols, & Williams. Journal of Personality Assessment, 91, 211–221.
Van der Heijden, P. T., Rossi, G. M., Van der Veld, M. M., Derksen, J. J. L., & Egger, J. I. M. (2013). Personality and psychopathology: Higher order relations between the Five Factor Model of personality and the MMPI-2 Restructured Form. Journal of Research in Personality, 47, 572–579.
Watson, C., Quilty, L. C., & Bagby, R. M. (2011). Differentiating bipolar disorder from major depressive disorder using the MMPI-2-RF: A receiver operating characteristics (ROC) analysis. Journal of Psychopathology and Behavioral Assessment, 33, 368–374.
Whitman, M. R., Tarescavage, A. M., Glassmire, D. M., Burchett, D., & Sellbom, M. (2019). Examination of differential validity of MMPI-2-RF scores by gender and ethnicity in predicting future suicidal and violent behaviors in a forensic sample. Psychological Assessment, 31, 404–409.
Wiggins, J. S. (1966). Substantive dimensions of self-report in the MMPI item pool. Psychological Monographs, 80.
Wolf, E. J., Miller, M. W., Orazem, R. J., Weierich, M. R., Castillo, D. T., Milford, J., et al. (2008). The MMPI-2 Restructured Clinical Scales in the assessment of posttraumatic stress disorder and comorbid disorders. Psychological Assessment, 20, 327–340.
Zahn, N., Sellbom, M., Pymont, C., & Schenk, P. W. (2017). Associations between MMPI-2-RF scale scores and self-reported personality disorder criteria in a private practice sample. Journal of Psychopathology and Behavioral Assessment, 39, 723–741.

References

Alterman, A. I., Zaballero, A. R., Lin, M. M., Siddiqui, N., Brown, L. S., Rutherford, M. J., & McDermott, P. A. (1995). Personality Assessment Inventory (PAI) scores of lower-socioeconomic African American and Latino methadone maintenance patients. Assessment, 2, 91–100.
Ambroz, A. (2005). Psychiatric disorders in disabled chronic low back pain workers’ compensation claimants: Utility of the Personality Assessment Inventory. Pain Medicine, 6, 190.
Ansell, E. B., Kurtz, J. E., DeMoor, R. M., & Markey, P. M. (2011). Validity of the PAI interpersonal scales for measuring the dimensions of the interpersonal circumplex. Journal of Personality Assessment, 93, 33–39.
Archer, R. P., Buffington-Vollum, J. K., Stredny, R. V., & Handel, R. W. (2006). A survey of psychological test use patterns among forensic psychologists. Journal of Personality Assessment, 87, 84–94.
Baer, R. A., & Wetter, M. W. (1997). Effects of information about validity scales on underreporting of symptoms on the Personality Assessment Inventory. Journal of Personality Assessment, 68, 402–413.
Bagby, R. M., Nicholson, R. A., Bacchiochi, J. R., Ryder, A. G., & Bury, A. S. (2002). The predictive capacity of the MMPI-2 and PAI validity scales and indexes to detect coached and uncoached feigning. Journal of Personality Assessment, 78, 69–86.
Bartoi, M. G., Kinder, B. N., & Tomianovic, D. (2000). Interaction effects of emotional status and sexual abuse on adult sexuality. Journal of Sex and Marital Therapy, 26, 1–23.
Blais, M. A., Baity, M. R., & Hopwood, C. J. (Eds.). (2010). Clinical applications of the Personality Assessment Inventory. New York: Routledge.
Blanchard, D. D., McGrath, R. E., Pogge, D. L., & Khadivi, A. (2003). A comparison of the PAI and MMPI-2 as predictors of faking bad in college students. Journal of Personality Assessment, 80, 197–205.
Boccaccini, M. T., Murrie, D. C., Hawes, S. W., Simpler, A., & Johnson, J. (2010). Predicting recidivism with the Personality Assessment Inventory in a sample of sex offenders screened for civil commitment as sexually violent predators. Psychological Assessment, 22, 142–148.
Boyle, G. J., & Lennon, T. (1994). Examination of the reliability and validity of the Personality Assessment Inventory. Journal of Psychopathology and Behavioral Assessment, 16, 173–187.
Briere, J., Weathers, F. W., & Runtz, M. (2005). Is dissociation a multidimensional construct? Data from the Multiscale Dissociation Inventory. Journal of Traumatic Stress, 18, 221–231.
Calhoun, P. S., Collie, C. F., Clancy, C. P., Braxton, L. E., & Beckham, J. C. (2010). Use of the PAI in assessment of post-traumatic stress disorder among help-seeking veterans. In Blais, M. A., Baity, M. R., & Hopwood, C. J. (Eds.), Clinical applications of the Personality Assessment Inventory (pp. 93–112). New York: Routledge.
Caperton, J. D., Edens, J. F., & Johnson, J. K. (2004). Predicting sex offender institutional adjustment and treatment compliance using the Personality Assessment Inventory. Psychological Assessment, 16, 187–191.
Cashel, M. L., Rogers, R., Sewell, K., & Martin-Cannici, C. (1995). The Personality Assessment Inventory and the detection of defensiveness. Assessment, 2, 333–342.
Chang, J., & Smith, S. R. (2015). An exploration of how Asian Americans respond on the Personality Assessment Inventory. Asian American Journal of Psychology, 6, 25–30.
Charnas, J. W., Hilsenroth, M. J., Zodan, J., & Blais, M. A. (2010). Should I stay or should I go? Personality Assessment Inventory and Rorschach indices of early withdrawal from psychotherapy. Psychotherapy: Theory, Research, Practice, Training, 47, 484–499.
Cheng, M. K., Frank, J. B., & Hopwood, C. J. (2010). Assessment of motor vehicle accident claimants with the PAI. In Blais, M. A., Baity, M. R., & Hopwood, C. J. (Eds.), Clinical applications of the Personality Assessment Inventory (pp. 177–194). New York: Routledge.
Clark, M. E., Gironda, R. J., & Young, R. W. (2003). Detection of back random responding: Effectiveness of MMPI-2 and Personality Assessment Inventory validity indices. Psychological Assessment, 15, 223–234.
Clark, T. S., Oslund, S. R., & Hopwood, C. J. (2010). PAI assessment in medical settings. In Blais, M. A., Baity, M. R., & Hopwood, C. J. (Eds.), Clinical applications of the Personality Assessment Inventory (pp. 149–162). New York: Routledge.
Clarkin, J. F., & Levy, K. N. (2004). The influence of client variables on psychotherapy. In Lambert, M. J. (Ed.), Handbook of psychotherapy and behavior change (5th ed., pp. 194–226). New York: John Wiley & Sons.
Combs, D. R., & Penn, D. L. (2004). The role of subclinical paranoia on social perception and behavior. Schizophrenia Research, 69, 93–104.
Constantino, M. J., Castonguay, L. G., & Schut, A. J. (2002). The working alliance: A flagship for the “scientist-practitioner” model in psychotherapy. In Tyron, G. S. (Ed.), Counseling based on process research: Applying what we know (pp. 81–131). Boston, MA: Allyn & Bacon.
Corsica, J. A., Azarbad, L., McGill, K., Wool, L., & Hood, M. (2010). The Personality Assessment Inventory: Clinical utility, psychometric properties, and normative data for bariatric surgery candidates. Obesity Surgery, 20, 722–731.
Costa, P. T., & McCrae, R. R. (1992). Normal personality assessment in clinical practice: The NEO Personality Inventory. Psychological Assessment, 4, 5–13.
DeCoster-Martin, E., Weiss, W. U., Davis, R. D., & Rostow, C. D. (2004). Compulsive traits and police officer performance. Journal of Police and Criminal Psychology, 19, 64–71.
DeShong, H. L., & Kurtz, J. E. (2013). Four factors of impulsivity differentiate antisocial and borderline personality disorders. Journal of Personality Disorders, 27, 144–156.
Douglas, K. S., Hart, S. D., & Kropp, P. R. (2001). Validity of the Personality Assessment Inventory for forensic assessments. International Journal of Offender Therapy and Comparative Criminology, 45, 183–197.
Edens, J. F., Hart, S. D., Johnson, D. W., Johnson, J. K., & Olver, M. E. (2000). Use of the Personality Assessment Inventory to assess psychopathy in offender populations. Psychological Assessment, 12, 132–139.
Edens, J. F., Poythress, N. G., & Watkins-Clay, M. M. (2007). Detection of malingering in psychiatric unit and general population prison inmates: A comparison of the PAI, SIMS, and SIRS. Journal of Personality Assessment, 88, 33–42.
Edens, J. F., & Ruiz, M. A. (2005). PAI interpretive report for correctional settings (PAI-CS). Odessa, FL: Psychological Assessment Resources.
Edens, J. F., & Ruiz, M. A. (2006). On the validity of validity scales: The importance of defensive responding in the prediction of institutional misconduct. Psychological Assessment, 18, 220–224.
Estrada, A. R., & Smith, S. R. (2017). An exploration of Latina/o respondent scores on the Personality Assessment Inventory. Current Psychology, 38, 782–791.
Fals-Stewart, W. (1996). The ability of individuals with psychoactive substance use disorders to escape detection by the Personality Assessment Inventory. Psychological Assessment, 8, 60–68.
Fernandez, K., Boccaccini, M. T., & Noland, R. M. (2008). Detecting over- and underreporting of psychopathology with the Spanish-language Personality Assessment Inventory: Findings from a simulation study with bilingual speakers. Psychological Assessment, 20, 189–194.
Gardner, B. O., Boccaccini, M. T., Bitting, B. S., & Edens, J. F. (2015). Personality Assessment Inventory scores as predictors of misconduct, recidivism, and violence: A meta-analytic review. Psychological Assessment, 27, 534–544.
Gay, N. W., & Combs, D. R. (2005). Social behaviors in persons with and without persecutory delusions. Schizophrenia Research, 80, 361–362.
Groves, J. A., & Engel, R. R. (2007). The German adaptation and standardization of the Personality Assessment Inventory (PAI). Journal of Personality Assessment, 88, 49–56.
Hare, R. D. (1991). The Hare Psychopathy Checklist – Revised. Toronto, ON: Multi-Health Systems.
Harley, R. M., Baity, M. R., Blais, M. A., & Jacobo, M. C. (2007). Use of dialectical behavior therapy skills training for borderline personality disorder in a naturalistic setting. Psychotherapy Research, 17, 351–358.
Hart, S. D., Cox, D. N., & Hare, R. D. (1995). Manual for the Psychopathy Checklist – Screening Version (PCL:SV). Unpublished manuscript, University of British Columbia.
Hawes, S. W., & Boccaccini, M. T. (2009). Detection of overreporting of psychopathology on the Personality Assessment Inventory: A meta-analytic review. Psychological Assessment, 21, 112–124.
Hodgins, D. C., Schopflocher, D. P., Martin, C. R., el-Guebaly, N., Casey, D. M., Currie, S. R., … & Williams, R. J. (2012). Disordered gambling among higher-frequency gamblers: Who is at risk? Psychological Medicine, 42, 2433–2444.
Hopwood, C. J., Ambwani, S., & Morey, L. C. (2007). Predicting non-mutual therapy termination with the Personality Assessment Inventory. Psychotherapy Research, 17, 706–712.
Hopwood, C. J., Creech, S., Clark, T. S., Meagher, M. W., & Morey, L. C. (2008). Predicting the completion of an integrative and intensive outpatient chronic pain treatment. Journal of Personality Assessment, 90, 76–80.
Hopwood, C. J., Flato, C. G., Ambwani, S., Garland, B. H., & Morey, L. C. (2009). A comparison of Latino and Anglo socially desirable responding. Journal of Clinical Psychology, 65, 769–780.
Hopwood, C. J., & Morey, L. C. (2007). Psychological conflict in borderline personality as represented by inconsistent self-report item responding. Journal of Social and Clinical Psychology, 26, 1065–1075.
Hopwood, C. J., Morey, L. C., Rogers, R., & Sewell, K. W. (2007). Malingering on the PAI: The detection of feigned disorders. Journal of Personality Assessment, 88, 43–48.
Hopwood, C. J., Orlando, M. J., & Clark, T. S. (2010). The detection of malingered pain-related disability with the Personality Assessment Inventory. Rehabilitation Psychology, 55, 307–310.
Hovey, J. D., & Magaña, C. G. (2002). Psychosocial predictors of anxiety among immigrant Mexican migrant farmworkers: Implications for prevention and treatment. Cultural Diversity and Ethnic Minority Psychology, 8, 274–289.
Jackson, D. N. (1970). A sequential system for personality scale development. In Spielberger, C. D. (Ed.), Current topics in clinical and community psychology, vol. 2 (pp. 62–97). New York: Academic Press.
Karlin, B. E., Creech, S. K., Grimes, J. S., Clark, T. S., Meagher, M. W., & Morey, L. C. (2005). The Personality Assessment Inventory with chronic pain patients: Psychometric properties and clinical utility. Journal of Clinical Psychology, 61, 1571–1585.
Keeley, R., Smith, M., & Miller, J. (2000). Somatoform symptoms and treatment nonadherence in depressed family medicine outpatients. Archives of Family Medicine, 9, 46–54.
Keiski, M. A., Shore, D. L., & Hamilton, J. M. (2003). CVLT-II performance in depressed versus nondepressed TBI subjects. The Clinical Neuropsychologist, 17, 107.
Kerr, P. L., & Muehlenkamp, J. J. (2010). Features of psychopathology in self-injuring female college students. Journal of Mental Health Counseling, 32, 290–308.
Kiesler, D. (1996). Contemporary interpersonal theory and research: Personality, psychopathology, and psychotherapy. New York: Wiley.
Killgore, W. D., Sonis, L. A., Rosso, I. M., & Rauch, S. L. (2016). Emotional intelligence partially mediates the association between anxiety sensitivity and anxiety symptoms. Psychological Reports, 118, 23–40.
Klonsky, E. D. (2004). Performance of Personality Assessment Inventory and Rorschach indices of schizophrenia in a public psychiatric hospital. Psychological Services, 1, 107–110.
Kurtz, J. E., Henk, C. M., Bupp, L. L., & Dresler, C. M. (2015). The validity of a regression-based procedure for detecting concealed psychopathology in structured personality assessment. Psychological Assessment, 27, 392–402.
Lally, S. J. (2003). What tests are acceptable for use in forensic evaluations? A survey of experts. Professional Psychology: Research and Practice, 34, 491–498.
Liljequist, L., Kinder, B. N., & Schinka, J. A. (1998). An investigation of malingering posttraumatic stress disorder on the Personality Assessment Inventory. Journal of Personality Assessment, 71, 322–336.
Locke, D. E. C., Kirlin, K. A., Wershba, R., Osborne, D., Drazkowski, J. F., Sirven, J. I., & Noe, K. H. (2011). Randomized comparison of the Personality Assessment Inventory and the Minnesota Multiphasic Personality Inventory-2 in the epilepsy monitoring unit. Epilepsy and Behavior, 21, 397–401.
Loevinger, J. (1957). Objective tests as instruments of psychological theory. Psychological Reports, 3, 635–694.
Loving, J. L., & Lee, A. J. (2006). Use of the Personality Assessment Inventory in parenting capacity evaluations. Paper presented at the Society of Personality Assessment Annual Conference, San Diego, CA, March 22–26.
Lowmaster, S. E., & Morey, L. C. (2012). Predicting law enforcement officer job performance with the Personality Assessment Inventory. Journal of Personality Assessment, 94, 254–261.
Lyrakos, D. G. (2011). The development of the Greek Personality Assessment Inventory. Psychology, 2, 797–803.
Matlasz, T. M., Brylski, J. L., Leidenfrost, C. M., Scalco, M., Sinclair, S. J., Schoelerman, R. M., … & Antonius, D. (2017). Cognitive status and profile validity on the Personality Assessment Inventory (PAI) in offenders with serious mental illness. International Journal of Law and Psychiatry, 50, 38–44.
McCrae, R. R., Costa, P. T., & Martin, T. A. (2005). The NEO-PI-3: A more readable revised NEO Personality Inventory. Journal of Personality Assessment, 84, 261–270.
McLellan, A. T., Kushner, H., Metzger, D., Peters, R., Smith, I., Grissom, G., … & Argeriou, M. (1992). The fifth edition of the Addiction Severity Index. Journal of Substance Abuse Treatment, 9, 199–213.
McDevitt-Murphy, M., Weathers, F. W., Adkins, J. W., & Daniels, J. B. (2005). Use of the Personality Assessment Inventory in assessment of posttraumatic stress disorder in women. Journal of Psychopathology and Behavioral Assessment, 27, 57–65.
Moadel, D., Doucet, G. E., Pustina, D., Rider, R., Taylor, N., Barnett, P., … Tracy, J. L. (2015). Emotional/psychiatric symptom change and amygdala volume after anterior temporal lobectomy. JHN Journal, 10, 12–14.
Mogge, N. L., LePage, J. S., Bell, T., & Ragatz, L. (2010). The Negative Distortion Scale: A new PAI validity scale. The Journal of Forensic Psychiatry and Psychology, 21, 77–90.
Morey, L. C. (1991). Personality Assessment Inventory professional manual. Odessa, FL: Psychological Assessment Resources.
Morey, L. C. (1996). An interpretive guide to the Personality Assessment Inventory. Odessa, FL: Psychological Assessment Resources.
Morey, L. C. (2000). PAI software portfolio manual. Odessa, FL: Psychological Assessment Resources.
Morey, L. C. (2003). Essentials of PAI assessment. New York: John Wiley.
Morey, L. C. (2007a). Personality Assessment Inventory professional manual (2nd ed.). Lutz, FL: Psychological Assessment Resources.
Morey, L. C. (2007b). Personality Assessment Inventory – Adolescent (PAI-A). Lutz, FL: Psychological Assessment Resources.
Morey, L. C., & Hopwood, C. J. (2004). Efficiency of a strategy for detecting back random responding on the Personality Assessment Inventory. Psychological Assessment, 16, 197–200.
Morey, L. C., & Hopwood, C. J. (2007). Casebook for the Personality Assessment Inventory: A structural summary approach. Lutz, FL: Psychological Assessment Resources.
Morey, L. C., & Lanier, V. W. (1998). Operating characteristics for six response distortion indicators for the Personality Assessment Inventory. Assessment, 5, 203–214.
Morey, L. C., Lowmaster, S. E., Coldren, R. L., Kelly, M. P., Parish, R. V., & Russell, M. L. (2011). Personality Assessment Inventory profiles of deployed combat troops: An empirical investigation of normative performance. Psychological Assessment, 23, 456–462.
Morey, L. C., Warner, M. B., & Hopwood, C. J. (2006). The Personality Assessment Inventory: Issues in legal and forensic settings. In Goldstein, A. (Ed.), Forensic psychology: Advanced topics for forensic mental experts and attorneys (pp. 97–126). Hoboken, NJ: John Wiley & Sons.
Osborne, D. (1994). Use of the Personality Assessment Inventory with a medical population. Paper presented at the meetings of the Rocky Mountain Psychological Association, Denver, CO.
Parker, J. D., Daleiden, E. L., & Simpson, C. A. (1999). Personality Assessment Inventory substance-use scales: Convergent and discriminant relations with the Addiction Severity Index in a residential chemical dependence treatment setting. Psychological Assessment, 11, 507–513.
Patry, M. W., & Magaletta, P. R. (2015). Measuring suicidality using the Personality Assessment Inventory: A convergent validity study with federal inmates. Assessment, 22, 36–45.
Peebles, J., & Moore, R. J. (1998). Detecting socially desirable responding with the Personality Assessment Inventory: The Positive Impression Management Scale and the Defensiveness Index. Journal of Clinical Psychology, 54, 621–628.
Percosky, A. B., Boccaccini, M. T., Bitting, B. S., & Hamilton, P. M. (2013). Personality Assessment Inventory scores as predictors of treatment compliance and misconduct among sex offenders participating in community-based treatment. Journal of Forensic Psychology Practice, 13, 192–203.
Pincus, A. L. (2005). A contemporary integrative theory of personality disorders. In Lenzenweger, M. F., & Clarkin, J. F. (Eds.), Major theories of personality disorder (pp. 282–331). New York: Guilford Press.
Purdom, C. L., Kirlin, K. A., Hoerth, M. T., Noe, K. H., Drazkowski, J. F., Sirven, J. I., & Locke, D. E. (2012). The influence of impression management scales on the Personality Assessment Inventory in the epilepsy monitoring unit. Epilepsy and Behavior, 25, 534–538.
Reidy, T. J., Sorensen, J. R., & Davidson, M. (2016). Testing the predictive validity of the Personality Assessment Inventory (PAI) in relation to inmate misconduct and violence. Psychological Assessment, 28, 871–884.
Roberts, M. D., Thompson, J. A., & Johnson, M. (2000). PAI law enforcement, corrections, and public safety selection report module. Odessa, FL: Psychological Assessment Resources.
Rogers, R., Flores, J., Ustad, K., & Sewell, K. W. (1995). Initial validation of the Personality Assessment Inventory-Spanish Version with clients from Mexican American communities. Journal of Personality Assessment, 64, 340–348.
Rogers, R., Gillard, N. D., Wooley, C. N., & Kelsey, K. R. (2013). Cross-validation of the PAI Negative Distortion Scale for feigned mental disorders: A research report. Assessment, 20, 36–42.
Rogers, R., Gillard, N. D., Wooley, C. N., & Ross, C. A. (2012). The detection of feigned disabilities: The effectiveness of the Personality Assessment Inventory in a traumatized inpatient sample. Assessment, 19, 77–88.
Rogers, R., Ornduff, S. R., & Sewell, K. (1993). Feigning specific disorders: A study of the Personality Assessment Inventory (PAI). Journal of Personality Assessment, 60, 554–560.
Rogers, R., Sewell, K. W., Morey, L. C., & Ustad, K. L. (1996). Detection of feigned mental disorders on the Personality Assessment Inventory: A discriminant analysis. Journal of Personality Assessment, 67, 629–640.
Ruiz, M. A., Cox, J., Magyar, M. S., & Edens, J. F. (2014). Predictive validity of the Personality Assessment Inventory (PAI) for identifying criminal reoffending following completion of an in-jail addiction treatment program. Psychological Assessment, 26, 673–678.
Ruiz, M. A., Dickinson, K. A., & Pincus, A. L. (2002). Concurrent validity of the Personality Assessment Inventory Alcohol Problems (ALC) Scale in a college student sample. Assessment, 9, 261–270.
Shorey, R. C., Gawrysiak, M. J., Anderson, S., & Stuart, G. L. (2015). Dispositional mindfulness, spirituality, and substance use in predicting depressive symptoms in a treatment-seeking sample. Journal of Clinical Psychology, 71, 334–345.
Siefert, C. J., Kehl-Fie, K., Blais, M. A., & Chriki, L. (2007). Detecting back irrelevant responding on the Personality Assessment Inventory in a psychiatric inpatient setting. Psychological Assessment, 19, 469–473.
Siefert, C. J., Sinclair, S. J., Kehl-Fie, K. A., & Blais, M. A. (2009). An item-level psychometric analysis of the Personality Assessment Inventory: Clinical scales in a psychiatric inpatient unit. Assessment, 16, 373–383.
Sims, J. A., Thomas, K. M., Hopwood, C. J., Chen, S. H., & Pascale, C. (2013). Psychometric properties and norms for the Personality Assessment Inventory in egg donors and gestational carriers. Journal of Personality Assessment, 95, 495–499.
Sinclair, S. J., Bello, I., Nyer, M., Slavin-Mulford, J., Stein, M. B., Renna, M., … & Blais, M. A. (2012). The Suicide (SPI) and Violence Potential Indices (VPI) from the Personality Assessment Inventory: A preliminary exploration of validity in an outpatient psychiatric sample. Journal of Psychopathology and Behavioral Assessment, 34, 423–431.
Stedman, J. M., McGeary, C. A., & Essery, J. (2018). Current patterns of training in personality assessment during internship. Journal of Clinical Psychology, 74, 398–406.
Stein, M. B., Pinsker-Aspen, J., & Hilsenroth, M. J. (2007). Borderline pathology and the Personality Assessment Inventory (PAI): An evaluation of criterion and concurrent validity. Journal of Personality Assessment, 88, 81–89.
Tasca, G. A., Wood, J., Demidenko, N., & Bissada, H. (2002). Using the PAI with an eating disordered population: Scale characteristics, factor structure and differences among diagnostic groups. Journal of Personality Assessment, 79, 337–356.
Thomas, K. M., Hopwood, C. J., Orlando, M. J., Weathers, F. W., & McDevitt-Murphy, M. E. (2012). Detecting feigned PTSD using the Personality Assessment Inventory. Psychological Injury and Law, 5, 192–201.
Tkachenko, O., Olson, E. A., Weber, M., Preer, L. A., Gogel, H., & Killgore, W. D. S. (2014). Sleep difficulties are associated with increased symptoms of psychopathology. Experimental Brain Research, 232, 1567–1574.
Tombaugh, T. N. (1996). Test of Memory Malingering: TOMM. North Tonawanda, NY: Multi-Health Systems.
Tracey, T. J. (1993). An interpersonal stage model of therapeutic process. Journal of Counseling Psychology, 40, 396–409.
Wagner, M. T., Wymer, J. H., Topping, K. B., & Pritchard, P. B. (2005). Use of the Personality Assessment Inventory as an efficacious and cost-effective diagnostic tool for nonepileptic seizures. Epilepsy and Behavior, 7, 301–304.
Wang, E. W., Rogers, R., Giles, C. L., Diamond, P. M., Herrington-Wang, L. E., & Taylor, E. R. (1997). A pilot study of the Personality Assessment Inventory (PAI) in corrections: Assessment of malingering, suicide risk, and aggression in male inmates. Behavioral Sciences and the Law, 15, 469–482.
Weiss, P. A. (2010). Use of the PAI in personnel selection. In Blais, M. A., Baity, M. R., & Hopwood, C. J. (Eds.), Clinical applications of the Personality Assessment Inventory (pp. 163–176). New York: Routledge.
Whiteside, D., Clinton, C., Diamonti, C., Stroemel, J., White, C., Zimberoff, A., & Waters, D. (2010). Relationship between suboptimal cognitive effort and the clinical scales of the Personality Assessment Inventory. The Clinical Neuropsychologist, 24, 315–325.
Whiteside, D. M., Galbreath, J., Brown, M., & Turnbull, J. (2012). Differential response patterns on the Personality Assessment Inventory (PAI) in compensation-seeking and non-compensation-seeking mild traumatic brain injury patients. Journal of Clinical and Experimental Neuropsychology, 34, 172–182.
Woods, D. W., Wetterneck, C. T., & Flessner, C. A. (2006). A controlled evaluation of acceptance and commitment therapy plus habit reversal for trichotillomania. Behaviour Research and Therapy, 44, 639–656.
Wooley, C. N., & Rogers, R. (2015). The effectiveness of the Personality Assessment Inventory with feigned PTSD: An initial investigation of Resnick’s model of malingering. Assessment, 22, 449–458.

References

Ackerman, S. J., Hilsenroth, M. J., Baity, M. R., & Blagys, M. D. (2000). Interaction of therapeutic process and alliance during psychological assessment. Journal of Personality Assessment, 75(1), 82–109.
American Psychiatric Association. (1980). Diagnostic and statistical manual of mental disorders (3rd ed.). Washington, DC: Author.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: Author.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Ben-Porath, Y., & Tellegen, A. (2008). MMPI-2-RF manual for administration, scoring and interpretation. Minneapolis: University of Minnesota Press.
Choca, J. P. (2004). Interpretive guide to the Millon Clinical Multiaxial Inventory (3rd ed.). Washington, DC: American Psychological Association.
Choca, J. P., & Grossman, S. (2015). Evolution of the Millon Clinical Multiaxial Inventory. Journal of Personality Assessment, 97, 541–549.
Craig, R. J. (1999). Overview and status of the Millon Clinical Multiaxial Inventory. Journal of Personality Assessment, 72, 390–406.
Derogatis, L. (1993). Brief Symptom Inventory (BSI) manual. Minneapolis, MN: National Computer Systems.
Freeman, R. C., Lewis, Y. P., & Colon, H. M. (2002). Instrumentation, data collection, and analysis issues. In Freeman, R. C., Lewis, Y. P., & Colon, H. M. (Eds.), Handbook for conducting drug abuse research with Hispanic populations (pp. 167–188). Westport, CT: Praeger.
Grossman, S. D. (2004). Facets of personality: A proposal for the development of MCMI-III content scales (Doctoral dissertation, Carlos Albizu University, 2004). Dissertation Abstracts International, 65, 5401.
Grossman, S. D. (2015). Millon’s evolutionary model of personality assessment: A case for categorical/dimensional prototypes. Journal of Personality Assessment, 97, 436–445.
Grossman, S., & Amendolace, B. (2017). Essentials of MCMI-IV assessment. Hoboken, NJ: Wiley.
Hoyle, R. H. (1991). Evaluating measurement models in clinical research: Covariance structure analysis of latent variable models of self-conception. Journal of Consulting and Clinical Psychology, 59, 67–76.
Jamison, K. R. (2005). Exuberance: The passion for life. New York: Knopf.
Klein, D. N., Schwartz, J. E., Rose, S., & Leader, J. B. (2000). Five-year course and outcome of dysthymic disorder: A prospective, naturalistic follow-up study. American Journal of Psychiatry, 157, 931–939.
Kraepelin, E. (1921). Manic-depressive insanity and paranoia. Edinburgh: Livingstone.
Millon, T. (1969). Modern psychopathology. Philadelphia, PA: Saunders.
Millon, T. (1977). Millon Clinical Multiaxial Inventory. Minneapolis, MN: National Computer Systems.
Millon, T. (1990). Toward a new personology: An evolutionary model. New York: Wiley.
Millon, T. (1999). Personality-guided therapy. New York: Wiley.
Millon, T. (2002). A blessed and charmed personal odyssey. Journal of Personality Assessment, 79, 171–194.
Millon, T. (2011). Disorders of personality: Introducing a DSM-ICD spectrum from normal to abnormal. Hoboken, NJ: Wiley.
Millon, T., & Davis, R. D. (1996). Disorders of personality: DSM-IV and beyond. New York: Wiley.
Millon, T., Davis, R., Millon, C., & Grossman, S. (2009). Millon Clinical Multiaxial Inventory-III manual (4th ed.). Minneapolis, MN: NCS Pearson Assessments.
Millon, T., & Grossman, S. D. (2007a). Resolving difficult clinical syndromes: A personalized psychotherapy approach. Hoboken, NJ: Wiley.
Millon, T., & Grossman, S. D. (2007b). Overcoming resistant personality disorders: A personalized psychotherapy approach. Hoboken, NJ: Wiley.
Millon, T., & Grossman, S. D. (2007c). Moderating severe personality disorders: A personalized psychotherapy approach. Hoboken, NJ: Wiley.
Millon, T., Grossman, S., & Millon, C. (2015). Millon Clinical Multiaxial Inventory-IV manual. Minneapolis, MN: Pearson Assessments.
Millon, T., Grossman, S., Millon, C., Meagher, S., & Ramnath, R. (2004). Personality disorders in modern life. Hoboken, NJ: Wiley.
Piotrowski, C., & Keller, J. W. (1989). Psychological testing in outpatient mental health facilities: A national study. Professional Psychology: Research and Practice, 20, 423–425.
Piotrowski, C., & Lubin, B. (1989). Assessment practices of Division 38 practitioners. Health Psychologist, 11, 1.
Piotrowski, C., & Lubin, B. (1990). Assessment practices of health psychologists: Survey of APA Division 38 clinicians. Professional Psychology: Research and Practice, 21, 99–106.
Retzlaff, P. D. (1995). Clinical application of the MCMI-III. In Retzlaff, P. D. (Ed.), Tactical psychotherapy of the personality disorders: An MCMI-III approach (pp. 1–23). Needham, MA: Allyn & Bacon.
Rossi, G., Van den Brande, L., Tobac, A., Sloore, H., & Hauben, C. (2003). Convergent validity of the MCMI-III personality disorder scales and the MMPI-2 scales. Journal of Personality Disorders, 17, 330–340.
Somwaru, D. P., & Ben-Porath, Y. S. (1995). Development and reliability of MMPI-2 based personality disorder scales. Paper presented at the 30th Annual Workshop and Symposium on Recent Developments in Use of the MMPI-2 & MMPI-A, St. Petersburg Beach, FL.
Suarez-Morales, L., & Beitra, D. (2013). Assessing substance-related disorders in Hispanic clients. In Benuto, L. T. (Ed.), Guide to psychological assessment with Hispanics (pp. 163–181). New York: Springer.
Wetzler, S. (1990). The Millon Clinical Multiaxial Inventory (MCMI): A review. Journal of Personality Assessment, 55, 445–464.

References

Achenbach, T. M., Krukowski, R. A., Dumenci, L., & Ivanova, M. Y. (2005). Assessment of adult psychopathology: Meta-analyses and implications of cross-informant correlations. Psychological Bulletin, 131(3), 361–382.
Ali, G.-C., Ryan, G., & De Silva, M. J. (2016). Validated screening tools for common mental disorders in low and middle income countries: A systematic review. PLoS ONE, 11(6), e0156939. https://doi.org/10.1371/journal.pone.0156939
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
Bardhoshi, G., Duncan, K., & Erford, B. T. (2016). Psychometric meta-analysis of the English version of the Beck Anxiety Inventory. Journal of Counseling and Development, 94(3), 356–373. https://doi.org/10.1002/jcad.12090
Barnes, L. L. B., Harp, D., & Jung, W. S. (2002). Reliability generalization of scores on the Spielberger State-Trait Anxiety Inventory. Educational and Psychological Measurement, 62(4), 603–618. https://doi.org/10.1177/0013164402062004005
Batterham, P. J. (2014). Recruitment of mental health survey participants using Internet advertising: Content, characteristics and cost effectiveness. International Journal of Methods in Psychiatric Research, 23(2), 184–191. https://doi.org/10.1002/mpr.1421
Batterham, P. J., Brewer, J. L., Tjhin, A., Sunderland, M., Carragher, N., & Calear, A. L. (2015). Systematic item selection process applied to developing item pools for assessing multiple mental health problems. Journal of Clinical Epidemiology, 68(8), 913–919. https://doi.org/10.1016/j.jclinepi.2015.03.022
Batterham, P. J., Ftanou, M., Pirkis, J., Brewer, J. L., Mackinnon, A. J., Beautrais, A., … Christensen, H. (2015). A systematic review and evaluation of measures for suicidal ideation and behaviors in population-based research. Psychological Assessment, 27(2), 501–512. https://doi.org/10.1037/pas0000053
Batterham, P. J., Sunderland, M., Carragher, N., & Calear, A. L. (2016). Development and community-based validation of eight item banks to assess mental health. Psychiatry Research, 243, 452–463. https://doi.org/10.1016/j.psychres.2016.07.011
Beard, C., & Björgvinsson, T. (2014). Beyond generalized anxiety disorder: Psychometric properties of the GAD-7 in a heterogeneous psychiatric sample. Journal of Anxiety Disorders, 28(6), 547–552. https://doi.org/10.1016/j.janxdis.2014.06.002
Beck, A. T., Epstein, N., Brown, G., & Steer, R. A. (1988). An inventory for measuring clinical anxiety: Psychometric properties. Journal of Consulting and Clinical Psychology, 56, 893–897.
Beck, A. T., Steer, R. A., Ball, R., & Ranieri, W. F. (1996). Comparison of Beck Depression Inventories-IA and -II in psychiatric outpatients. Journal of Personality Assessment, 67(3), 588–597. https://doi.org/10.1207/s15327752jpa6703_13
Beesdo-Baum, K., Jenjahn, E., Höfler, M., Lueken, U., Becker, E. S., & Hoyer, J. (2012). Avoidance, safety behavior, and reassurance seeking in generalized anxiety disorder. Depression and Anxiety, 29(11), 948–957. https://doi.org/10.1002/da.21955
Beesdo-Baum, K., Klotsche, J., Knappe, S., Craske, M. G., Lebeau, R. T., Hoyer, J., … Wittchen, H. U. (2012). Psychometric properties of the dimensional anxiety scales for DSM-V in an unselected sample of German treatment seeking patients. Depression and Anxiety, 29(12), 1014–1024. https://doi.org/10.1002/da.21994
Björgvinsson, T., Kertz, S. J., Bigda-Peyton, J. S., McCoy, K. L., & Aderka, I. M. (2013). Psychometric properties of the CES-D-10 in a psychiatric sample. Assessment, 20(4), 429–436. https://doi.org/10.1177/1073191113481998
Bouchard, S., Pelletier, M.-H., Gauthier, J. G., Côté, G., & Laberge, B. (1997). The assessment of panic using self-report: A comprehensive survey of validated instruments. Journal of Anxiety Disorders, 11(1), 89–111. https://doi.org/10.1016/S0887-6185(96)00037-0
Brown, T. A. (2003). Confirmatory factor analysis of the Penn State Worry Questionnaire: Multiple factors or method effects? Behaviour Research and Therapy, 41, 1411–1426. https://doi.org/10.1016/S0005-7967(03)00059-7
Byrne, B. M., Stewart, S. M., Kennard, B. D., & Lee, P. W. H. (2007). The Beck Depression Inventory-II: Testing for measurement equivalence and factor mean differences across Hong Kong and American adolescents. International Journal of Testing, 7(3), 293–309. https://doi.org/10.1080/15305050701438058
Caci, H., Baylé, F. J., Dossios, C., Robert, P., & Boyer, P. (2003). The Spielberger trait anxiety inventory measures more than anxiety. European Psychiatry, 18(8), 394–400. https://doi.org/10.1016/j.eurpsy.2003.05.003
Canel-Çınarbaş, D., Cui, Y., & Lauridsen, E. (2011). Cross-cultural validation of the Beck Depression Inventory–II across U.S. and Turkish samples. Measurement and Evaluation in Counseling and Development, 44(2), 77–91. https://doi.org/10.1177/0748175611400289
Carleton, R. N., Thibodeau, M. A., Teale, M. J. N., Welch, P. G., Abrams, M. P., Robinson, T., & Asmundson, G. J. G. (2013). The Center for Epidemiologic Studies Depression Scale: A review with a theoretical and empirical examination of item content and factor structure. PLoS ONE, 8(3), e58067. https://doi.org/10.1371/journal.pone.0058067
Caspi, A., Houts, R. M., Belsky, D. W., Goldman-Mellor, S. J., Harrington, H., Israel, S., … Moffitt, T. E. (2014). The p factor: One general psychopathology factor in the structure of psychiatric disorders? Clinical Psychological Science: A Journal of the Association for Psychological Science, 2(2), 119–137. https://doi.org/10.1177/2167702613497473
Cella, D., Gershon, R., Lai, J.-S., & Choi, S. (2007). The future of outcomes measurement: Item banking, tailored short-forms, and computerized adaptive assessment. Quality of Life Research, 16(S1), 133–141. https://doi.org/10.1007/s11136-007-9204-6
Chan, D. (2009). So why ask me? Are self-report data really that bad? In Lance, C. E. & Vandenberg, R. J. (Eds.), Statistical and methodological myths and urban legends: Doctrine, verity and fable in organizational and social sciences (pp. 309–336). New York: Routledge.
Chilcot, J., Rayner, L., Lee, W., Price, A., Goodwin, L., Monroe, B., … Hotopf, M. (2013). The factor structure of the PHQ-9 in palliative care. Journal of Psychosomatic Research, 75(1), 60–64. https://doi.org/10.1016/j.jpsychores.2012.12.012
Choi, S. W., Reise, S. P., Pilkonis, P. A., Hays, R. D., & Cella, D. (2010). Efficiency of static and computer adaptive short forms compared to full-length measures of depressive symptoms. Quality of Life Research, 19(1), 125–136. https://doi.org/10.1007/s11136-009-9560-5
Choi, S. W., Schalet, B., Cook, K. F., & Cella, D. (2014). Establishing a common metric for depressive symptoms: Linking the BDI-II, CES-D, and PHQ-9 to PROMIS Depression. Psychological Assessment, 26(2), 513–527. https://doi.org/10.1037/a0035768
Conijn, J. M., van der Ark, L. A., & Spinhoven, P. (2017). Satisficing in mental health care patients: The effect of cognitive symptoms on self-report data quality. Assessment. https://doi.org/10.1177/1073191117714557
Creamer, M., Foran, J., & Bell, R. (1995). The Beck Anxiety Inventory in a non-clinical sample. Behaviour Research and Therapy, 33(4), 477–485.
Crockett, L. J., Randall, B. A., Shen, Y.-L., Russell, S. T., & Driscoll, A. K. (2005). Measurement equivalence of the Center for Epidemiological Studies Depression Scale for Latino and Anglo adolescents: A national study. Journal of Consulting and Clinical Psychology, 73(1), 47–58. https://doi.org/10.1037/0022-006X.73.1.47
Croudace, T. J., & Böhnke, J. R. (2014). Item bank measurement of depression: Will one dimension work? Journal of Clinical Epidemiology, 67, 4–6. https://doi.org/10.1016/j.jclinepi.2013.08.002
Cunningham, J. A., Godinho, A., & Kushnir, V. (2017). Can Amazon’s Mechanical Turk be used to recruit participants for internet intervention trials? A pilot study involving a randomized controlled trial of a brief online intervention for hazardous alcohol use. Internet Interventions, 10, 12–16. https://doi.org/10.1016/j.invent.2017.08.005
Curran, P. J., Hussong, A. M., Cai, L., Huang, W., Chassin, L., Sher, K. J., & Zucker, R. A. (2008). Pooling data from multiple longitudinal studies: The role of item response theory in integrative data analysis. Developmental Psychology, 44(2), 365–380. https://doi.org/10.1037/0012-1649.44.2.365
De Beurs, D. P., de Vries, A. L., de Groot, M. H., de Keijser, J., & Kerkhof, A. J. (2014). Applying computer adaptive testing to optimize online assessment of suicidal behavior: A simulation study. Journal of Medical Internet Research, 16(9), e207. https://doi.org/10.2196/jmir.3511
Del Vecchio, N., Elwy, A. R., Smith, E., Bottonari, K. A., & Eisen, S. V. (2011). Enhancing self-report assessment of PTSD: Development of an item bank. Journal of Traumatic Stress, 24(2), 191–199. https://doi.org/10.1002/jts.20611
Demirchyan, A., Petrosyan, V., & Thompson, M. E. (2011). Psychometric value of the Center for Epidemiologic Studies Depression (CES-D) scale for screening of depressive symptoms in Armenian population. Journal of Affective Disorders, 133(3), 489–498. https://doi.org/10.1016/j.jad.2011.04.042
Dere, J., Watters, C. A., Yu, S. C.-M., Bagby, R. M., Ryder, A. G., & Harkness, K. L. (2015). Cross-cultural examination of measurement invariance of the Beck Depression Inventory–II. Psychological Assessment, 27(1), 68–81. https://doi.org/10.1037/pas0000026
Devine, J., Fliege, H., Kocalevent, R., Mierke, A., Klapp, B. F., & Rose, M. (2016). Evaluation of Computerized Adaptive Tests (CATs) for longitudinal monitoring of depression, anxiety, and stress reactions. Journal of Affective Disorders, 190, 846–853. https://doi.org/10.1016/j.jad.2014.10.063
DeWalt, D. A., Rothrock, N., Yount, S., Stone, A. A., & PROMIS Cooperative Group. (2007). Evaluation of item candidates: The PROMIS qualitative item review. Medical Care, 45(5 Suppl. 1), S12–S21. https://doi.org/10.1097/01.mlr.0000254567.79743.e2
Dorans, N. J. (2007). Linking scores from multiple health outcome instruments. Quality of Life Research, 16(S1), 85–94. https://doi.org/10.1007/s11136-006-9155-3
Dowling, N. M., Bolt, D. M., Deng, S., & Li, C. (2016). Measurement and control of bias in patient reported outcomes using multidimensional item response theory. BMC Medical Research Methodology, 16(1), 63. https://doi.org/10.1186/s12874-016-0161-z
Dozois, D. J. A., Dobson, K. S., & Ahnberg, J. L. (1998). A psychometric evaluation of the Beck Depression Inventory-II. Psychological Assessment, 10(2), 83–89. https://doi.org/10.1037/1040-3590.10.2.83
Eaton, W. W., Smith, C., Ybarra, M., Muntaner, C., & Tien, A. (2004). Center for Epidemiologic Studies Depression Scale: Review and revision (CESD and CESD-R). In Maruish, M. E. (Ed.), The use of psychological testing for treatment planning and outcomes assessment: Instruments for adults (pp. 363–377). Mahwah, NJ: Lawrence Erlbaum Associates.
Eisen, S. V., Schultz, M. R., Ni, P., Haley, S. M., Smith, E. G., Spiro, A., … Jette, A. M. (2016). Development and validation of a computerized-adaptive test for PTSD (P-CAT). Psychiatric Services, 67(10), 1116–1123. https://doi.org/10.1176/appi.ps.201500382
El-Den, S., Chen, T. F., Gan, Y.-L., Wong, E., & O’Reilly, C. L. (2018). The psychometric properties of depression screening tools in primary healthcare settings: A systematic review. Journal of Affective Disorders, 225, 503–522. https://doi.org/10.1016/j.jad.2017.08.060
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Flens, G., Smits, N., Carlier, I., van Hemert, A. M., & de Beurs, E. (2016). Simulating computer adaptive testing with the Mood and Anxiety Symptom Questionnaire. Psychological Assessment, 28(8), 953–962. https://doi.org/10.1037/pas0000240
Fliege, H., Becker, J., Walter, O. B., Bjorner, J. B., Klapp, B. F., & Rose, M. (2005). Development of a computer-adaptive test for depression (D-CAT). Quality of Life Research, 14(10), 2277–2291. https://doi.org/10.1007/s11136-005-6651-9
Forkmann, T., Boecker, M., Norra, C., Eberle, N., Kircher, T., Schauerte, P., … Wirtz, M. (2009). Development of an item bank for the assessment of depression in persons with mental illnesses and physical diseases using Rasch analysis. Rehabilitation Psychology, 54(2), 186–197. https://doi.org/10.1037/a0015612
Fresco, D. M., Heimberg, R. G., Mennin, D. S., & Turk, C. L. (2002). Confirmatory factor analysis of the Penn State Worry Questionnaire. Behaviour Research and Therapy, 40, 313–323.
Fresco, D. M., Mennin, D. S., Heimberg, R. G., & Turk, C. L. (2003). Using the Penn State Worry Questionnaire to identify individuals with generalized anxiety disorder: A receiver operating characteristic analysis. Journal of Behavior Therapy and Experimental Psychiatry, 34(3–4), 283–291. https://doi.org/10.1016/j.jbtep.2003.09.001
Fried, E. I. (2017). The 52 symptoms of major depression: Lack of content overlap among seven common depression scales. Journal of Affective Disorders, 208, 191–197. https://doi.org/10.1016/j.jad.2016.10.019
Fried, E. I., & Nesse, R. M. (2015). Depression sum-scores don’t add up: Why analyzing specific depression symptoms is essential. BMC Medicine, 13(1), 72. https://doi.org/10.1186/s12916-015-0325-4
Fydrich, T., Dowdall, D., & Chambless, D. L. (1992). Reliability and validity of the Beck Anxiety Inventory. Journal of Anxiety Disorders, 6(1), 55–61. https://doi.org/10.1016/0887-6185(92)90026-4
Ghassemzadeh, H., Mojtabai, R., Karamghadiri, N., & Ebrahimkhani, N. (2005). Psychometric properties of a Persian-language version of the Beck Depression Inventory – second edition: BDI-II-PERSIAN. Depression and Anxiety, 21(4), 185–192. https://doi.org/10.1002/da.20070
Gibbons, C., & Skevington, S. M. (2018). Adjusting for cross-cultural differences in computer-adaptive tests of quality of life. Quality of Life Research, 27, 1027–1039. https://doi.org/10.1007/s11136-017-1738-7
Gibbons, L. E., Feldman, B. J., Crane, H. M., Mugavero, M., Willig, J. H., Patrick, D., … Crane, P. K. (2011). Migrating from a legacy fixed-format measure to CAT administration: Calibrating the PHQ-9 to the PROMIS depression measures. Quality of Life Research, 20(9), 1349–1357. https://doi.org/10.1007/s11136-011-9882-y
Gibbons, R., Bock, R. D., Hedeker, D., Weiss, D. J., Segawa, E., Bhaumik, D. K., … Stover, A. (2007). Full-information item bifactor analysis of graded response data. Applied Psychological Measurement, 31(1), 4–19. https://doi.org/10.1177/0146621606289485
Gibbons, R., Kupfer, D., Frank, E., Moore, T., Beiser, D. G., & Boudreaux, E. D. (2017). Development of a Computerized Adaptive Test Suicide Scale: The CAT-SS. The Journal of Clinical Psychiatry, 78, 1376–1382. https://doi.org/10.4088/JCP.16m10922
Gibbons, R., Perraillon, M. C., & Kim, J. B. (2014). Item response theory approaches to harmonization and research synthesis. Health Services and Outcomes Research Methodology, 14(4), 213–231. https://doi.org/10.1007/s10742-014-0125-x
Gibbons, R., Weiss, D. J., Frank, E., & Kupfer, D. (2016). Computerized adaptive diagnosis and testing of mental health disorders. Annual Review of Clinical Psychology, 12(1), 83–104. https://doi.org/10.1146/annurev-clinpsy-021815-093634
Gibbons, R., Weiss, D. J., Kupfer, D. J., Frank, E., Fagiolini, A., Grochocinski, V. J., … Immekus, J. C. (2008). Using computerized adaptive testing to reduce the burden of mental health assessment. Psychiatric Services, 59(4), 361–368. https://doi.org/10.1176/appi.ps.59.4.361
Gibbons, R., Weiss, D. J., Pilkonis, P. A., Frank, E., Moore, T., Kim, J. B., & Kupfer, D. J. (2012). The CAT-DI: Development of a computerized adaptive test for depression. Archives of General Psychiatry, 69(11), 1104–1112. https://doi.org/10.1001/archgenpsychiatry.2012.14
Gibbons, R., Weiss, D., Pilkonis, P., Frank, E., Moore, T., Kim, J., & Kupfer, D. (2014). Development of the CAT-ANX: A computerized adaptive test for anxiety. American Journal of Psychiatry, 171(2), 187–194.
Granillo, M. T. (2012). Structure and function of the Patient Health Questionnaire-9 among Latina and non-Latina white female college students. Journal of the Society for Social Work and Research, 3(2), 80–93. https://doi.org/10.5243/jsswr.2012.6
Guo, B., Kaylor-Hughes, C., Garland, A., Nixon, N., Sweeney, T., Simpson, S., … Morriss, R. (2017). Factor structure and longitudinal measurement invariance of PHQ-9 for specialist mental health care patients with persistent major depressive disorder: Exploratory Structural Equation Modelling. Journal of Affective Disorders, 219, 1–8. https://doi.org/10.1016/j.jad.2017.05.020
Guo, T., Xiang, Y.-T., Xiao, L., Hu, C.-Q., Chiu, H. F. K., Ungvari, G. S., … Wang, G. (2015). Measurement-based care versus standard care for major depression: A randomized controlled trial with blind raters. American Journal of Psychiatry, 172(10), 1004–1013. https://doi.org/10.1176/appi.ajp.2015.14050652
Hamamura, T., Heine, S. J., & Paulhus, D. L. (2008). Cultural differences in response styles: The role of dialectical thinking. Personality and Individual Differences, 44(4), 932–942. https://doi.org/10.1016/j.paid.2007.10.034
Haroz, E. E., Ritchey, M., Bass, J. K., Kohrt, B. A., Augustinavicius, J., Michalopoulos, L., … Bolton, P. (2017). How is depression experienced around the world? A systematic review of qualitative literature. Social Science and Medicine, 183, 151–162. https://doi.org/10.1016/j.socscimed.2016.12.030
Hazlett-Stevens, H., Ullman, J. B., & Craske, M. G. (2004). Factor structure of the Penn State Worry Questionnaire. Assessment, 11(4), 361–370. https://doi.org/10.1177/1073191104269872
Hollifield, M., Warner, T. D., Lian, N., Krakow, B., Jenkins, J. H., Kesler, J., … Westermeyer, J. (2002). Measuring trauma and health status in refugees. JAMA, 288(5), 611–621. https://doi.org/10.1001/jama.288.5.611
Huang, F. Y., Chung, H., Kroenke, K., Delucchi, K. L., & Spitzer, R. L. (2006). Using the patient health questionnaire-9 to measure depression among racially and ethnically diverse primary care patients. Journal of General Internal Medicine, 21(6), 547–552. https://doi.org/10.1111/j.1525-1497.2006.00409.x
Iwata, N., & Buka, S. (2002). Race/ethnicity and depressive symptoms: A cross-cultural/ethnic comparison among university students in East Asia, North and South America. Social Science & Medicine, 55(12), 2243–2252. https://doi.org/10.1016/S0277-9536(02)00003-5
Jin, K.-Y., & Wang, W.-C. (2014). Generalized IRT models for extreme response style. Educational and Psychological Measurement, 74(1), 116–138. https://doi.org/10.1177/0013164413498876
Kabacoff, R. I., Segal, D. L., Hersen, M., & Van Hasselt, V. B. (1997). Psychometric properties and diagnostic utility of the Beck Anxiety Inventory and the State-Trait Anxiety Inventory with older adult psychiatric outpatients. Journal of Anxiety Disorders, 11(1), 33–47. https://doi.org/10.1016/S0887-6185(96)00033-3
Knappe, S., Klotsche, J., Heyde, F., Hiob, S., Siegert, J., Hoyer, J., … Beesdo-Baum, K. (2014). Test–retest reliability and sensitivity to change of the dimensional anxiety scales for DSM-5. CNS Spectrums, 19(3), 256–267. https://doi.org/10.1017/S1092852913000710
Kojima, M., Furukawa, T. A., Takahashi, H., Kawai, M., Nagaya, T., & Tokudome, S. (2002). Cross-cultural validation of the Beck Depression Inventory-II in Japan. Psychiatry Research, 110(3), 291–299. https://doi.org/10.1016/S0165-1781(02)00106-3
Kosinski, M., Matz, S. C., Gosling, S. D., Popov, V., & Stillwell, D. (2015). Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. American Psychologist, 70(6), 543–556. https://doi.org/10.1037/a0039210
Kotov, R., Krueger, R. F., Watson, D., Achenbach, T. M., Althoff, R. R., Bagby, R. M., … Zimmerman, M. (2017). The Hierarchical Taxonomy of Psychopathology (HiTOP): A dimensional alternative to traditional nosologies. Journal of Abnormal Psychology, 126(4), 454–477. https://doi.org/10.1037/abn0000258
Kroenke, K., Spitzer, R. L., & Williams, J. B. W. (2001). The PHQ-9: Validity of a brief depression severity measure. Journal of General Internal Medicine, 16(9), 606–613.
Krueger, R. F., Markon, K. E., Patrick, C. J., Benning, S. D., & Kramer, M. D. (2007). Linking antisocial behaviour, substance use, and personality: An integrative quantitative model of the adult externalizing spectrum. Journal of Abnormal Psychology, 116(4), 645–666. https://doi.org/10.1037/0021-843X.116.4.645
Lai, J.-S., Cella, D., Choi, S., Junghaenel, D. U., Christodoulou, C., Gershon, R., & Stone, A. (2011). How item banks and their application can influence measurement practice in rehabilitation medicine: A PROMIS fatigue item bank example. Archives of Physical Medicine and Rehabilitation, 92(10), S20–S27. https://doi.org/10.1016/j.apmr.2010.08.033
Latimer, S., Meade, T., & Tennant, A. (2014). Development of item bank to measure deliberate self-harm behaviours: Facilitating tailored scales and computer adaptive testing for specific research and clinical purposes. Psychiatry Research, 217(3), 240–247. https://doi.org/10.1016/j.psychres.2014.03.015
LeBeau, R. T., Mesri, B., & Craske, M. G. (2016). The DSM-5 social anxiety disorder severity scale: Evidence of validity and reliability in a clinical sample. Psychiatry Research, 244, 94–96. https://doi.org/10.1016/j.psychres.2016.07.024
Lee, J. J., Kim, K. W., Kim, T. H., Park, J. H., Lee, S. B., Park, J. W., … Steffens, D. C. (2011). Cross-cultural considerations in administering the Center for Epidemiologic Studies Depression Scale. Gerontology, 57(5), 455–461. https://doi.org/10.1159/000318030
Liegl, G., Wahl, I., Berghofer, A., Nolte, S., Pieh, C., Rose, M., & Fischer, F. (2016). Using Patient Health Questionnaire-9 item parameters of a common metric resulted in similar depression scores compared to independent item response theory model reestimation. Journal of Clinical Epidemiology, 71, 25–34. https://doi.org/10.1016/j.jclinepi.2015.10.006
Löwe, B., Decker, O., Müller, S., Brähler, E., Schellberg, D., Herzog, W., & Herzberg, P. Y. (2008). Validation and standardization of the Generalized Anxiety Disorder Screener (GAD-7) in the general population. Medical Care, 46(3), 266–274. https://doi.org/10.1097/MLR.0b013e318160d093
Mahoney, A., Hobbs, M. J., Newby, J. M., Williams, A. D., & Andrews, G. (2018). Psychometric properties of the Worry Behaviors Inventory: Replication and extension in a large clinical and community sample. Behavioural and Cognitive Psychotherapy, 46(1), 84–100. https://doi.org/10.1017/S1352465817000455
Mahoney, A., Hobbs, M. J., Newby, J. M., Williams, A. D., Sunderland, M., & Andrews, G. (2016). The Worry Behaviors Inventory: Assessing the behavioral avoidance associated with generalized anxiety disorder. Journal of Affective Disorders, 203, 256–264. https://doi.org/10.1016/j.jad.2016.06.020
Manea, L., Gilbody, S., & McMillan, D. (2012). Optimal cut-off score for diagnosing depression with the Patient Health Questionnaire (PHQ-9): A meta-analysis. CMAJ, 184(3), E191–E196. https://doi.org/10.1503/cmaj.110829
McElroy, E., Casey, P., Adamson, G., Filippopoulos, P., & Shevlin, M. (2018). A comprehensive analysis of the factor structure of the Beck Depression Inventory-II in a sample of outpatients with adjustment disorder and depressive episode. Irish Journal of Psychological Medicine, 35, 5361. https://doi.org/10.1017/ipm.2017.52Google Scholar
McGlinchey, J. B., Zimmerman, M., Young, D., & Chelminski, I. (2006). Diagnosing major depressive disorder VIII: Are some symptoms better than others? The Journal of Nervous and Mental Disease, 194(10), 785790. https://doi.org/10.1097/01.nmd.0000240222.75201.aaGoogle Scholar
Meyer, T. J., Miller, M. L., Metzger, R. L., & Borkovec, T. D. (1990). Development and validation of the Penn State Worry Questionnaire. Behaviour Research and Therapy, 28, 487495.Google Scholar
Möller, E. L., & Bögels, S. M. (2016). The DSM-5 Dimensional Anxiety Scales in a Dutch non-clinical sample: Psychometric properties including the adult separation anxiety disorder scale. International Journal of Methods in Psychiatric Research, 25(3), 232239. https://doi.org/10.1002/mpr.1515Google Scholar
Moriarty, A. S., Gilbody, S., McMillan, D., & Manea, L. (2015). Screening and case finding for major depressive disorder using the Patient Health Questionnaire (PHQ-9): A meta-analysis. General Hospital Psychiatry, 37(6), 567576. https://doi.org/10.1016/j.genhosppsych.2015.06.012Google Scholar
Morin, C. M., Landreville, P., Colecchi, C., McDonald, K., Stone, J., & Ling, W. (1999). The Beck Anxiety Inventory: Psychometric properties with older adults. Journal of Clinical Geropsychology, 5(1), 1929. https://doi.org/10.1023/A:1022986728576Google Scholar
Muntingh, A. D. T., van der Feltz-Cornelis, C. M., van Marwijk, H. W. J., Spinhoven, P., Penninx, B. W. J. H., & van Balkom, A. J. L. M. (2011). Is the Beck Anxiety Inventory a good tool to assess the severity of anxiety? A primary care study in the Netherlands Study of Depression and Anxiety (NESDA). BMC Family Practice, 12, 66. https://doi.org/10.1186/1471–2296–12–66Google Scholar
Muraki, E. (1992). A generalized partial credit model: Application of an EM algorithm. Applied Psychological Measurement, 16(2), 159176.Google Scholar
Nuevo, R., Dunn, G., Dowrick, C., Vázquez-Barquero, J. L., Casey, P., Dalgard, O. S., … Ayuso-Mateos, J. L. (2009). Cross-cultural equivalence of the Beck Depression Inventory: A five-country analysis from the ODIN study. Journal of Affective Disorders, 114(1–3), 156162. https://doi.org/10.1016/J.JAD.2008.06.021Google Scholar
Osman, A., Kopper, B. A., Barrios, F. X., Osman, J. R., & Wade, T. (1997). The Beck Anxiety Inventory: Reexamination of factor structure and psychometric properties. Journal of Clinical Psychology, 53(1), 714.Google Scholar
Overduin, M. K., & Furnham, A. (2012). Assessing obsessive-compulsive disorder (OCD): A review of self-report measures. Journal of Obsessive-Compulsive and Related Disorders, 1(4), 312324. https://doi.org/10.1016/j.jocrd.2012.08.001Google Scholar
Parkerson, H. A., Thibodeau, M. A., Brandt, C. P., Zvolensky, M. J., & Asmundson, G. J. G. (2015). Cultural-based biases of the GAD-7. Journal of Anxiety Disorders, 31, 3842. https://doi.org/10.1016/J.JANXDIS.2015.01.005Google Scholar
Paulhus, D. L. (2002). Socially desirable responding: the evolution of a construct. In Braun, H. I., Jackson, D. N., & Wiley, D. E. (Eds.), The role of constructs in psychological and education measurement (pp. 4969). Mahwah, NJ: Lawrence Erlbaum Associates.Google Scholar
Paulhus, D. L., & Vazire, S. (2007). The self-report method. In Robins, R. W., Fraley, C., & Krueger, R. F. (Eds.), Handbook of research methods in personality psychology (pp. 224239). New York: Guilford Press.Google Scholar
Pedrelli, P., Blais, M. A., Alpert, J. E., Shelton, R. C., Walker, R. S. W., & Fava, M. (2014). Reliability and validity of the Symptoms of Depression Questionnaire (SDQ). CNS Spectrums, 19(6), 535–46. https://doi.org/10.1017/S1092852914000406Google Scholar
Pettersson, A., Boström, K. B., Gustavsson, P., & Ekselius, L. (2015). Which instruments to support diagnosis of depression have sufficient accuracy? A systematic review. Nordic Journal of Psychiatry, 69(7), 497508. https://doi.org/10.3109/08039488.2015.1008568Google Scholar
Pilkonis, P. A., Choi, S. W., Reise, S. P., Stover, A. M., Riley, W. T., Cella, D., & Group, P. C. (2011). Item banks for measuring emotional distress from the Patient-Reported Outcomes Measurement Information System (PROMIS®): Depression, anxiety, and anger. Assessment, 18(3), 263283. https://doi.org/10.1177/1073191111411667Google Scholar
Plieninger, H. (2017). Mountain or molehill? A simulation study on the impact of response styles. Educational and Psychological Measurement, 77(1), 3253. https://doi.org/10.1177/0013164416636655Google Scholar
Posner, S. F., Stewart, A. L., Marín, G., & Pérez-Stable, J., E. (2001). Factor variability of the Center for Epidemiological Studies Depression Scale (CES-D) among urban Latinos. Ethnicity and Health, 6(2), 137144. https://doi.org/10.1080/13557850120068469Google Scholar
Radloff, L. S. (1977). The CES-D Scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385401.Google Scholar
Richardson, E. J., & Richards, J. S. (2008). Factor structure of the PHQ-9 screen for depression across time since injury among persons with spinal cord injury. Rehabilitation Psychology, 53(2), 243249. https://doi.org/10.1037/0090–5550.53.2.243Google Scholar
Richardson, L. P., McCauley, E., Grossman, D. C., McCarty, C. A., Richards, J., Russo, J. E., … Katon, W. (2010). Evaluation of the Patient Health Questionnaire-9 item for detecting major depression among adolescents. Pediatrics, 126(6), 1117–23. https://doi.org/10.1542/peds.2010–0852Google Scholar
Rose, M., Bjorner, J. B., Fischer, F., Anatchkova, M., Gandek, B., Klapp, B. F., & Ware, J. E. (2012). Computerized adaptive testing: Ready for ambulatory monitoring? Psychosomatic Medicine, 74(4), 338348. https://doi.org/10.1097/PSY.0b013e3182547392Google Scholar
Samejima, F. (1997). Graded response model. In Handbook of modern item response theory (pp. 85100). New York: Springer. https://doi.org/10.1007/978–1-4757–2691-6_5Google Scholar
Santor, D. A., & Coyne, J. C. (1997). Shortening the CES-D to improve its ability to detect cases of depression. Psychological Assessment, 9(3), 233243. https://doi.org/10.1037/1040–3590.9.3.233Google Scholar
Schalet, B. D., Cook, K. F., Choi, S. W., & Cella, D. (2014). Establishing a common metric for self-reported anxiety: Linking the MASQ, PANAS, and GAD-7 to PROMIS Anxiety. Journal of Anxiety Disorders, 28(1), 8896. https://doi.org/10.1016/j.janxdis.2013.11.006Google Scholar
Scott, K., & Lewis, C. C. (2015). Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice, 22(1), 4959. https://doi.org/10.1016/j.cbpra.2014.01.010Google Scholar
Sijbrandij, M., Reitsma, J. B., Roberts, N. P., Engelhard, I. M., Olff, M., Sonneveld, L. P., & Bisson, J. I. (2013). Self-report screening instruments for post-traumatic stress disorder (PTSD) in survivors of traumatic experiences. In Sijbrandij, M. (Ed.), Cochrane database of systematic reviews. Chichester: John Wiley & Sons. https://doi.org/10.1002/14651858.CD010575Google Scholar
Smits, N., Cuijpers, P., & van Straten, A. (2011). Applying computerized adaptive testing to the CES-D scale: A simulation study. Psychiatry Research, 188(1), 147155. https://doi.org/10.1016/j.psychres.2010.12.001Google Scholar
Spielberger, C. D., Gorsuch, R. L., & Lushene, R. E. (1970). Manual for the State-Trait Anxiety Inventory. Palo Alto, CA: Consulting Psychologists Press.Google Scholar
Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Lowe, B. (2006). A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166, 10921097.Google Scholar
Streiner, D. L., & Norman, G. R. (2008). Biases in responding. In Streiner, D. L. & Norman, G. R. (Eds.), Health Measurement Scales: A practical guide to their development and use. Oxford: Oxford University Press.Google Scholar
Subica, A. M., Fowler, J. C., Elhai, J. D., Frueh, B. C., Sharp, C., Kelly, E. L., & Allen, J. G. (2014). Factor structure and diagnostic validity of the Beck Depression Inventory–II with adult clinical inpatients: Comparison to a gold-standard diagnostic interview. Psychological Assessment, 26(4), 11061115. https://doi.org/10.1037/a0036998Google Scholar
Sunderland, M., Batterham, P. J., Calear, A. L., & Carragher, N. (2017). The development and validation of static and adaptive screeners to measure the severity of panic disorder, social anxiety disorder, and obsessive compulsive disorder. International Journal of Methods in Psychiatric Research, 26(4), e1561. https://doi.org/10.1002/mpr.1561Google Scholar
Sunderland, M., Slade, T., Krueger, R. F., Markon, K. E., Patrick, C. J., & Kramer, M. D. (2017). Efficiently measuring dimensions of the externalizing spectrum model: Development of the Externalizing Spectrum Inventory-Computerized Adaptive Test (ESI-CAT). Psychological Assessment, 29(7), 868880. https://doi.org/10.1037/pas0000384Google Scholar
Takayanagi, Y., Spira, A. P., Roth, K. B., Gallo, J. J., Eaton, W. W., & Mojtabai, R. (2014). Accuracy of reports of lifetime mental and physical disorders: Results from the Baltimore Epidemiological Catchment Area study. JAMA Psychiatry, 71(3), 273–80. https://doi.org/10.1001/jamapsychiatry.2013.3579Google Scholar
Thornton, L., Batterham, P. J., Fassnacht, D. B., Kay-Lambkin, F., Calear, A. L., & Hunt, S. (2016). Recruiting for health, medical or psychosocial research using Facebook: Systematic review. Internet Interventions, 4(1), 7281. https://doi.org/10.1016/j.invent.2016.02.001Google Scholar
Uher, R., Perlis, R. H., Placentino, A., Dernovšek, M. Z., Henigsberg, N., Mors, O., … Farmer, A. (2012). Self-report and clinician-rated measures of depression severity: Can one replace the other? Depression and Anxiety, 29(12), 1043–9. https://doi.org/10.1002/da.21993Google Scholar
van Ballegooijen, W., Riper, H., Cuijpers, P., van Oppen, P., & Smit, J. H. (2016). Validation of online psychometric instruments for common mental health disorders: A systematic review. BMC Psychiatry, 16(1), 45. https://doi.org/10.1186/s12888-016–0735–7Google Scholar
Vaughn-Coaxum, R. A., Mair, P., & Weisz, J. R. (2016). Racial/ethnic differences in youth depression indicators. Clinical Psychological Science, 4(2), 239253. https://doi.org/10.1177/2167702615591768Google Scholar
Venables, N. C., Yancey, J. R., Kramer, M. D., Hicks, B. M., Krueger, R. F., Iacono, W. G., … Patrick, C. J. (2018). Psychoneurometric assessment of dispositional liabilities for suicidal behavior: Phenotypic and etiological associations. Psychological Medicine, 48(3), 463472. https://doi.org/10.1017/S0033291717001830Google Scholar
Vigneau, F., & Cormier, S. (2008). The factor structure of the State-Trait Anxiety Inventory: An alternative view. Journal of Personality Assessment, 90(3), 280285. https://doi.org/10.1080/00223890701885027Google Scholar
Wahl, I., Lowe, B., Bjorner, J. B., Fischer, F., Langs, G., Voderholzer, U., … Rose, M. (2014). Standardization of depression measurement: A common metric was developed for 11 self-report depression measures. Journal of Clinical Epidemiology, 67(1), 7386. https://doi.org/10.1016/j.jclinepi.2013.04.019Google Scholar
Walter, O. B., Becker, J., Bjorner, J. B., Fliege, H., Klapp, B. F., & Rose, M. (2007). Development and evaluation of a computer adaptive test for “Anxiety” (Anxiety-CAT). Quality of Life Research, 16, 143155. https://doi.org/10.1007/s11136-007–9191–7Google Scholar
Wang, Y.-P., Gorenstein, C., Wang, Y.-P., & Gorenstein, C. (2013). Psychometric properties of the Beck Depression Inventory-II: A comprehensive review. Revista Brasileira de Psiquiatria, 35(4), 416431. https://doi.org/10.1590/1516–4446–2012–1048Google Scholar
Wiebe, J. S., & Penley, J. A. (2005). A psychometric comparison of the Beck Depression Inventory-II in English and Spanish. Psychological Assessment, 17(4), 481485. https://doi.org/10.1037/1040–3590.17.4.481Google Scholar
Wong, Q. J. J., Gregory, B., & McLellan, L. F. (2016). A review of scales to measure social anxiety disorder in clinical and epidemiological studies. Current Psychiatry Reports, 18(4), 38. https://doi.org/10.1007/s11920-016–0677–2Google Scholar
Yancey, J. R., Venables, N. C., & Patrick, C. J. (2016). Psychoneurometric operationalization of threat sensitivity: Relations with clinical symptom and physiological response criteria. Psychophysiology, 53(3), 393405. https://doi.org/10.1111/psyp.12512Google Scholar
Zimmerman, M., Martinez, J. H., Friedman, M., Boerescu, D. A., Attiullah, N., & Toba, C. (2012). How can we use depression severity to guide treatment selection when measures of depression categorize patients differently? The Journal of Clinical Psychiatry, 73(10), 12871291. https://doi.org/10.4088/JCP.12m07775Google Scholar
Zimmerman, M., Walsh, E., Friedman, M., Boerescu, D. A., & Attiullah, N. (2018). Are self-report scales as effective as clinician rating scales in measuring treatment response in routine clinical practice? Journal of Affective Disorders, 225, 449452. https://doi.org/10.1016/j.jad.2017.08.024Google Scholar

