
Categories, Constructs, and the Assessment of Personality Pathology: Commentary on Categorical Assessment of Personality Disorders

Robert F. Bornstein

The past decade has been a time of great controversy in the study of personality pathology. Although recent editions of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association [APA], 2013) and International Classification of Diseases (ICD-10; World Health Organization [WHO], 2004) conceptualized personality disorders (PDs) categorically, both manuals are moving toward a system wherein personality pathology will be described via scores on an array of trait dimensions. Both manuals will likely incorporate a five-dimensional model, although the traits that make up these dimensions differ in ICD-11 and DSM-5.1. In ICD-11, personality functioning is captured by scores on dissociality, detachment, negative affectivity, anankastia (obsessiveness), and disinhibition (see Tyrer, Reed, & Crawford, 2015).

Based on initial work examining the Alternative Model for Personality Disorders (AMPD) in DSM-5 it appears that DSM-5.1 will employ a slightly different set of core traits: negative affectivity, detachment, disinhibition, antagonism, and psychoticism. In line with the structure of the Five-Factor Model (FFM), the five AMPD trait domains comprise 25 narrower facets (e.g., the domain of disinhibition comprises the facets of irresponsibility, impulsivity, distractibility, risk taking, and rigid perfectionism).

These shifts are substantial, but the concepts that underlie them are not new: there has long been disagreement regarding whether personality pathology is best conceptualized categorically or dimensionally (see Bornstein, 2019, for an historical overview). The tone of this debate has changed, however, becoming more polarized in recent years. Some researchers argue that findings supporting dimensional conceptualizations of personality pathology are so compelling it is time to jettison PD categories altogether and move to a trait-based diagnostic framework (Hopwood et al., 2018; Widiger, Gore, Crego, Rojas, & Oltmanns, 2017). Others maintain that categorical and dimensional PD models both have certain advantages, and that integrative frameworks combining features of both perspectives hold great promise in conceptualizing and diagnosing PDs (Herpertz et al., 2017; Silk, 2015). The dimensional-categorical debate continues, and however the diagnostic manuals evolve it is unlikely that extant PD categories will disappear from the clinical literature anytime soon.

With this ongoing controversy as context, Flory (this volume) provides a sophisticated, balanced, and compelling review of evidence bearing on the reliability and validity of diagnostic interviews for PDs – a contrast to many of the divisive writings that have appeared in this area in recent years. Flory emphasizes three strategies for evaluating the validity of categorical diagnoses assessed via structured interview: (1) comparison of two interviews designed to assess the same PD construct; (2) correlations between interview ratings and patient self-reports; and (3) sensitivity and specificity rates (i.e., the proportions of true cases and non-cases that an interview correctly classifies) associated with different diagnostic interviews. As Flory notes, each of these validation strategies yields important information, but none can provide definitive evidence that one interview is clearly superior to others. Clinical consensus among several expert raters using Longitudinal Expert All Data (LEAD) procedures is the closest thing to a “gold standard” against which results from PD interviews and questionnaires can be compared (see Aboraya, France, Young, Curci, & LePage, 2005).
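For readers who want the formal definitions behind the third of these indices (a standard statistical aside, not drawn from Flory's chapter), sensitivity and specificity can be expressed in terms of the four possible diagnostic outcomes:

$$\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP},$$

where TP and FN denote true positives (cases the interview correctly identifies) and false negatives (cases it misses), and TN and FP denote true negatives (non-cases correctly ruled out) and false positives (non-cases incorrectly diagnosed). An interview with high sensitivity therefore produces few false negatives, and an interview with high specificity produces few false positives.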

Measures, Models, and Conceptual Frameworks

Beyond summarizing evidence regarding the reliability and validity of widely used interview measures, Flory’s review illuminates a number of broader issues that are central to the evaluation of contemporary PD frameworks but often go unnoticed. Two of these issues stand out: (a) distinguishing categories and constructs from the measures used to quantify them; and (b) distinguishing overarching conceptual frameworks from narrower assessment rubrics.

Distinguishing Categories and Constructs from the Measures Used to Quantify Them

Beginning with the seminal work of Cronbach and Meehl (1955), psychometricians have grappled with the paradoxical reality that it is not possible to disentangle completely the evidence bearing on a psychological construct from the validity of measures used to quantify that construct. Cronbach and Meehl’s cogent observations regarding construct validity apply to a broad array of psychological concepts (see Messick, 1995); in the present context these observations suggest that it is important to distinguish the utility of a diagnostic category (and the construct underlying that category) from the validity of measures that assess that category and its underlying construct. Although the PD interviews reviewed by Flory (this volume) vary with regard to the strength of their reliability and validity evidence, the psychometric soundness (or lack of soundness) of these measures does not allow definitive conclusions to be drawn regarding the conceptual rigor or clinical utility of the PD categories they assess.

Distinguishing Overarching Conceptual Frameworks from Narrower Assessment Rubrics

Flory (this volume) notes that one obstacle to progress in recent years has been an unintended conflation of model and measure: most PD interviews assess constructs derived from the categorical perspective, whereas dimensional PD assessments are typically based on patient self-reports. A different sort of conflation has characterized the ongoing dimensional-categorical debate, with critics on both sides citing flaws in a specific assessment rubric to dismiss the overarching perspective from which that rubric was derived. Thus, many critiques of the categorical perspective on PDs are in fact critiques of particular instantiations of the categorical perspective (typically DSM or ICD categories); flaws in DSM or ICD diagnostic criteria are then used to argue against categorical PD models in toto (see Verheul, 2005). Similarly, limitations in a particular dimensional framework (most often the FFM) are sometimes used to argue against the utility of dimensional PD models in general (Kernberg & Caligor, 2005). Just as it is important to distinguish PD categories and constructs from the measures used to assess them, it is important to distinguish the clinical utility of a specific categorical or dimensional framework (e.g., DSM or FFM) from the categorical and dimensional perspectives, broadly construed.

Beyond Self-Report in the Study of Personality Pathology

People have limited introspective access to their own internal states. Moreover, they are notoriously poor judges of their behavioral predispositions and cannot predict accurately how they will respond in various contexts and settings (see Dunning, Heath, & Suls, 2018; Wilson, 2009). Across a broad array of domains in psychology the correlation (r) between self-report and expressed behavior ranges from about .2 to .3 (Meyer et al., 2001). Similar modest correlations are obtained when self-reports and peer reports of personality pathology are compared (Oltmanns & Turkheimer, 2009). These findings have noteworthy implications for conceptualizing and assessing PDs.

Flory (this volume) correctly notes that meta-analytic findings from O’Connor (2002) have been accepted as evidence that “the latent structure of personality is similar in clinical and non-clinical samples.” Flory’s interpretation of these results is in line with that of most clinicians and clinical researchers. It is important to keep in mind, however, that 34 of the 37 personality and psychopathology measures in O’Connor’s meta-analysis were questionnaires. This illustrates a widespread problem in the contemporary PD literature: researchers often interpret findings regarding self-reported behavior as if these results reflected actual behavior. A more accurate summary of O’Connor’s meta-analytic results might be that the latent structure of self-reported personality is similar in clinical and non-clinical samples. The degree to which parallel patterns would emerge when personality is assessed using other methods is open to question.

People’s unavoidable introspective limitations – limitations which are magnified in many forms of personality pathology – constrain researchers’ ability to validate PD interview data with information derived from questionnaires or other interviews. As a result, a different assessment strategy is needed. A complete understanding of personality pathology – at the level of the individual patient as well as at the broader level of conceptual framework – requires that patient self-reports be complemented with evidence from other sources (e.g., expressed behavior, informant reports, performance-based test data). When clinical constructs are assessed with multiple methods, the different methods typically yield divergent scores (see Hopwood & Bornstein, 2014, for examples); such multi-method test score divergences are both empirically informative and clinically meaningful (Bornstein, 2011, 2017).

These limitations inherent in patients’ self-reports do not mean that self-report PD data are irrelevant. On the contrary, self-reports are an important aspect of PD assessment, providing crucial perspective regarding how patients with various forms of personality pathology perceive and present themselves. That being said, rigorous validation of PD measures, categories, and diagnostic rubrics requires assessing PD-related behavior in vivo, rather than relying exclusively on questionnaire and interview data. Recent studies using ambulatory assessment techniques have employed this approach successfully. For example, Roche and colleagues (Roche, Jacobson, & Pincus, 2016) assessed links between personality impairment and interpersonal functioning in college students over 14 days, finding that changes in behavior were triggered by cognitive and affective dynamics (e.g., negative emotions, cognitive distortions) that would be expected to impact interpersonal functioning as conceptualized by the AMPD. Approaching this issue from a categorical rather than dimensional perspective, Hepp and colleagues (Hepp, Lane, Wycoff, Carpenter, & Trull, 2018) found that across 21 days of behavior sampling, negative interpersonal events (e.g., rejection, conflict) triggered theoretically related affective responses (e.g., hostility) more strongly in patients with borderline PD than in non-borderline controls.

Along somewhat different lines, to demonstrate convincingly that a personality trait or underlying dynamic plays a role in a particular form of personality pathology, researchers must use experimental manipulations to prime key traits or hypothesized PD dynamics. Following these manipulations, researchers can assess the impact of the primes on responding. For example, to demonstrate that antagonism plays a causal role in borderline pathology (see APA, 2013, pp. 766–767), lexical primes can be used to activate antagonism in participants who show high versus low levels of borderline features. Exposure to these primes should lead to increased emotional dysregulation and decreased capacity for mentalization in borderline PD participants, but not in participants with low levels of borderline pathology, nor in those with PDs theoretically unrelated to antagonism (e.g., avoidant, histrionic). Similarly, lexical primes can be used to activate helplessness schemas in dependent and nondependent participants, allowing the differential impact of these primes to be assessed. Using this procedure, Bornstein and colleagues (Bornstein, Ng, Gallagher, Kloss, & Regier, 2005) demonstrated that a perception of oneself as powerless and ineffectual is central to the dynamics of dependent PD.

Toward an Integrative Perspective on Personality Pathology

The increasing polarization that has characterized the categorical-dimensional PD debate in recent years obscures the fact that these two perspectives have more in common than is usually acknowledged. After all, clinicians cannot render categorical PD diagnoses without first making a series of dimensional symptom ratings (e.g., determining whether a patient’s grandiosity is severe enough to be clinically significant). Similarly, to employ dimensional PD ratings effectively in clinical settings, clinicians must apply severity thresholds that distinguish normal from pathological functioning.

In pointing toward future research in this area Flory offers a clear and substantive recommendation to help bridge the gap between the categorical and dimensional perspectives on PDs, proposing that, “Research groups who study ‘discrete’ group-based interview-assessed categories are encouraged to also report dimensional scores for descriptive purposes. These dimensional scores will greatly enhance the ability to compare results across studies and research groups” (p. 362 in the previous chapter). This is an excellent suggestion; in recent years there have been several proposals for integrating categorical and dimensional PD data in ways that accentuate the strengths of each (e.g., Helzer, Kraemer, & Krueger, 2006; Hopwood et al., 2011). The future of PD diagnosis may lie in integrative models which combine dimensional and categorical data to facilitate clinical decision-making and treatment planning.

References

Aboraya, A., France, C., Young, J., Curci, K., & LePage, J. (2005). The validity of psychiatric diagnosis revisited. Psychiatry, 9, 48–55.

American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.

Bornstein, R. F. (2011). Toward a process-focused model of test score validity: Improving psychological assessment in science and practice. Psychological Assessment, 23, 532–544.

Bornstein, R. F. (2017). Evidence based psychological assessment. Journal of Personality Assessment, 99, 435–445.

Bornstein, R. F. (2019). The trait-type dialectic: Construct validity, clinical utility, and the diagnostic process. Personality Disorders: Theory, Research, and Treatment, 10, 199–209.

Bornstein, R. F., Ng, H. M., Gallagher, H. A., Kloss, D. M., & Regier, N. G. (2005). Contrasting effects of self-schema priming on lexical decisions and Interpersonal Stroop Task performance: Evidence for a cognitive/interactionist model of interpersonal dependency. Journal of Personality, 73, 731–761.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52, 281–302.

Dunning, D., Heath, C., & Suls, J. M. (2018). Reflections on self-reflection: Contemplating flawed self-judgments in the clinic, classroom, and office cubicle. Perspectives on Psychological Science, 13, 185–189.

Helzer, J. E., Kraemer, H. C., & Krueger, R. F. (2006). The feasibility and need for dimensional psychiatric diagnoses. Psychological Medicine, 36, 1671–1680.

Hepp, J., Lane, S. P., Wycoff, A. M., Carpenter, R. W., & Trull, T. J. (2018). Interpersonal stressors and negative affect in individuals with borderline personality disorder and community adults in daily life: A replication and extension. Journal of Abnormal Psychology, 127, 183–189.

Herpertz, S. C., Huprich, S. K., Bohus, M., Chanen, A., Goodman, M., Mehlum, L., … Sharp, C. (2017). The challenge of transforming the diagnostic system of personality disorders. Journal of Personality Disorders, 31, 577–589.

Hopwood, C. J., & Bornstein, R. F. (Eds.) (2014). Multimethod Clinical Assessment. New York: Guilford Press.

Hopwood, C. J., Kotov, R., Krueger, R. F., Watson, D., Widiger, T. A., Althoff, R. R., … Zimmermann, J. (2018). The time has come for dimensional personality disorder diagnosis. Personality and Mental Health, 12, 82–86.

Hopwood, C. J., Malone, J. C., Ansell, E. B., Sanislow, C. A., Grilo, C. M., Pinto, A., … Morey, L. C. (2011). Personality assessment in DSM-5: Empirical support for rating severity, style, and traits. Journal of Personality Disorders, 25, 305–320.

Kernberg, O. F., & Caligor, E. (2005). A psychoanalytic theory of personality disorders. In J. F. Clarkin & M. F. Lenzenweger (Eds.), Major Theories of Personality Disorder (2nd ed., pp. 114–156). New York: Guilford Press.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as inquiry into score meaning. American Psychologist, 50, 741–749.

Meyer, G. J., Finn, S. E., Eyde, L. D., Kay, G. G., Moreland, K. L., Dies, R. R., … Reed, G. M. (2001). Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist, 56, 128–165.

O’Connor, B. P. (2002). The search for dimensional structure differences between normality and abnormality: A statistical review of published data on personality and psychopathology. Journal of Personality and Social Psychology, 83, 962–982.

Oltmanns, T. F., & Turkheimer, E. (2009). Person perception and personality pathology. Current Directions in Psychological Science, 18, 32–36.

Roche, M. J., Jacobson, N. C., & Pincus, A. L. (2016). Using repeated daily assessments to uncover oscillating patterns and temporally-dynamic triggers in structures of psychopathology: Applications to the DSM-5 alternative model of personality disorders. Journal of Abnormal Psychology, 125, 1090–1102.

Silk, K. R. (2015). The value of retaining personality disorder diagnoses. In S. K. Huprich (Ed.), Personality Disorders: Toward Theoretical and Empirical Integration (pp. 23–41). Washington, DC: American Psychological Association.

Tyrer, P., Reed, G. M., & Crawford, M. J. (2015). Classification, assessment, prevalence, and effect of personality disorder. Lancet, 385, 717–726.

Verheul, R. (2005). Clinical utility of dimensional models for personality pathology. Journal of Personality Disorders, 19, 283–302.

Widiger, T. A., Gore, W. L., Crego, C., Rojas, S. L., & Oltmanns, J. R. (2017). Five factor model and personality disorder. In T. A. Widiger (Ed.), The Oxford Handbook of the Five-Factor Model (pp. 449–478). New York: Oxford University Press.

Wilson, T. D. (2009). Know thyself. Perspectives on Psychological Science, 4, 384–389.

World Health Organization. (2004). International Classification of Diseases (ICD-10). Geneva: WHO.
