COLUMNS: LAW AND PSYCHIATRY

Risk Assessment, Prediction, and Foreseeability

REID, WILLIAM H., MD, MPH

Journal of Psychiatric Practice®: January 2003 - Volume 9 - Issue 1 - p 82-86

Risk assessment is a hot topic in both clinical and forensic psychiatry. It is usually framed in terms of prediction (e.g., predicting suicide, predicting dangerousness, predicting relapse). This month, in spite of nearly axiomatic views that psychiatrists and other clinicians can’t predict dangerousness or suicide, I’ll briefly discuss some ways in which we can assess risk and some common misunderstandings about our “predictions.”

No, I’m not going to say we know which patients will meet (or cause) tragedy and when they’ll do it. The subtle wording shift in the preceding paragraph—from “predict” to “assess risk”—is the key. Clinician-readers know that we assess risk in many different situations, from admission evaluations to commitment opinions to decisions about patient passes and discharge.

Sometimes, of course, risk assessment is very hard. It is unreasonable to expect a clinician or clinical team to come up with the right answer every time. Sometimes there is no right answer at all. We’ll focus on the idea that risk assessment is logical and can often be done well.

Experienced Clinicians Are Fairly Good Assessors

One commonly hears or reads, sometimes from reputable sources, that psychiatrists and psychologists are poor predictors of risk, and that “studies show” they predict no more accurately than some laypersons. Such statements are at best incomplete and often either misconstrue the concept of risk assessment or are simply not true, depending on the situation and parameters described. The clinical and statistical literature strongly suggests that experienced clinicians are fairly good at assessing risk of suicide and violence.1,2

Groups at Risk vs. Predicting Individual Behavior

Clinical risk assessment does not imply predicting specific acts or outcomes—that’s a different, much harder, task—but rather it involves trying to place the patient with an appropriate group that has greater or lesser total risk. The risk group may be broad (such as people with major depressive disorder) or narrow (such as people at imminent risk of suicide). Risk assessment (as contrasted with prediction) is not as good as having a crystal ball for individual patient behavior, but it is very useful.

A man with severe delusional disorder had been hospitalized against his will for several years, based on two episodes of assault before admission, severe delusions of persecution, mood instability and irritability, and repeated written threats to kill a number of people. Although he had not committed a violent act in the hospital for many months, his psychiatric symptoms had not changed significantly during his stay. He refused treatment, but was not sufficiently ill to have his refusal overridden by an order to force medications (which might not have been effective in any event).

At his commitment hearing, held before a jury, his attorney asked the forensic experts testifying for the State whether or not they could really “predict” that he would harm others as a result of his mental disorder, much less do so at some particular time in the near future. They answered that they had not attempted to predict any particular act of violence, but rather had come to the opinion that he was part of a group of patients who, as a group, could be expected to be associated with far more violence than the general population, and substantially more than most other patients.

The question before the doctors—and the jury—was not whether or not the patient was clearly and convincingly* going to commit some specific harm as a result of his illness, but whether or not he was accurately viewed as part of a group of people whose members were at markedly increased risk for such behavior. In his case, factors such as his past behavior and threats, their seriousness, his diagnosis, his refusal of treatment, the absence of symptom change, the high probability that his disorder and symptoms would not change within the foreseeable future, and the likelihood that he would not participate in outpatient treatment or monitoring all militated in favor of considering him sufficiently dangerous to remain in the hospital. The jury voted to continue his commitment.

Doctors place people in risk groups all the time, and we do a pretty good job so long as we choose the breadth of the risk category wisely. Internists do it based on things like cholesterol and blood pressure and routinely use those assessments to recommend or provide care and treatment. Some risks suggest simple, general action (e.g., notifying the patient of abnormal lipid levels and making recommendations). Others, such as acute chest pain with certain ECG changes, mandate rapid, often intrusive intervention. One doesn’t expect a physician to predict a specific stroke or myocardial infarction, but a doctor who offers nothing to a patient with obvious risk is likely to be practicing outside the standard of care.

The parallels with psychiatric and psychological practice are obvious. If a prematurely discharged suicidal patient kills himself on the way home from the hospital, the words “we can’t predict suicide” ring hollow.

The “No-Better-Than-a-Coin-Toss” Fallacy

A number of older papers and reports assert that psychiatrists and psychologists are no better than laypeople, or even the toss of a coin, at predicting suicide or other violence.3 The reports often say something like “Only 48% of the people predicted to be violent actually assaulted anyone in the year following discharge.” Of course, such a finding does not really imply that the doctors did no better than chance, but it makes a good “sound bite.” Many people, including a few clinicians, take it to mean that we might as well flip coins when dealing with potentially dangerous or suicidal patients.

Here’s the fallacy: Let’s say one assesses 1,000 random patients for discharge. Most, by far, will not be significantly violent during a given period (say, 1 year). If each is viewed separately, it is indeed futile to try to “predict dangerousness” for every person in the group. But narrowing the field would be very useful.

Among other things, we can:

  • fairly reliably classify patients by general diagnosis and severity of illness (e.g., presence of psychosis, delusions of persecution, severe depression, or unstable mood)
  • associate some of those disorders with traits of concern, such as instability, treatment refractoriness, unpredictability, or particular behaviors
  • search their past histories for evidence of significant treatment or violence
  • consider our—and/or the treatment team’s—personal experience with each patient
  • assess patients’ responsiveness to treatment
  • assess psychological factors that are sometimes associated with self-harm, aggression, impulsiveness, assaultiveness, and the like
  • evaluate patients’ post-discharge living conditions and circumstances, including presence and availability of family or other caregivers
  • estimate many patients’ responses to, or behaviors in, important post-discharge situations (such as intoxication or marital disputes)
  • estimate whether or not patients are likely to participate in follow-up care, and whether or not intensive monitoring or care (such as by an ACT team) is available
  • consider (but not overuse) factors such as age and gender.

When experienced clinicians carry out these assessments and considerations, with adequate information, it becomes fairly straightforward to match the patient with an appropriate risk group. The narrower the group, the more useful it is for making decisions about individual patients.

Note that we have not predicted suicide or violence, but we have done something almost as useful: we have highlighted a group that deserves important attention. When clinicians can do that, but neglect to do so without good reason, they are likely to be practicing outside the standard of care.

But what about that pesky “coin toss”? Only half the patients named actually went on to commit mayhem or suicide.

Look at it this way: It is good practice indeed when one can move at-risk patients from a large, heterogeneous group (for which the chance of mayhem or suicide is, say, 5%, and where the risks are obscured by the “noise” of extraneous information and larger numbers) to a much smaller but more worrisome group in which the chance of such problems is, say, 48%. Once that is done, treatment and protective options become much clearer than before.
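
To see why a 48% “hit rate” in a selected group beats chance rather than matching it, here is a minimal back-of-the-envelope sketch in Python. All numbers are assumptions for illustration: the 5% base rate and 48% figures echo the paragraph above, and the cohort size is hypothetical.

    # Illustrative numbers only: they echo the 5% and 48% figures in the
    # paragraph above; the cohort size is hypothetical.
    n_patients = 1000
    base_rate = 0.05                              # assumed one-year base rate of violence
    expected_violent = n_patients * base_rate     # ~50 of 1,000 patients
    ppv_clinician = 0.48                          # share of flagged patients who become violent

    # A coin toss (or any random selection) cannot beat the base rate:
    # about 5% of any randomly chosen subgroup will be violent.
    ppv_coin = base_rate
    lift = ppv_clinician / ppv_coin               # improvement over chance

    print(f"Expected violent patients: {expected_violent:.0f} of {n_patients}")
    print(f"Chance hit rate: {ppv_coin:.0%}; clinician hit rate: {ppv_clinician:.0%}")
    print(f"Lift over chance: {lift:.1f}x")       # 9.6x, hardly a coin toss

Far from matching a coin toss, a 48% hit rate against a 5% base rate is nearly a tenfold improvement over chance.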

There is one more statistical point related to the fallacy. As Norko and Baranoski pointed out in a recent presentation,4 once someone has been placed in a high-risk group, his or her risk usually decreases. Suicide and other tragedies are often prevented by the closer monitoring, more intensive treatment, and greater attention such patients and groups receive. Many patients who appear to have been “false positives” would have experienced bad outcomes had they not been recognized.

“Risk Factor” Checklists and Actuarial Instruments

I am concerned about the overuse, and sometimes inappropriate use, of checklists and actuarial instruments to try to predict occurrences like suicide, violence, or criminal recidivism. One should avoid relying completely on them and understand their shortcomings.

Checklists concern me because their positive attributes (such as making sure that staff think about risk and certain items correlated with it) sometimes fail to overcome their negative ones. To the extent that they are used as reminders or memory joggers for experienced evaluators, they’re fine. It may also be good for less experienced screeners† to keep checklists handy for reporting information to their supervisors. It is usually foolish, however, to make a simple checklist one’s sole basis for important decisions about safety and therapeutics. Using such checklists as a hospital’s or clinic’s primary means of documenting suicide or other risk assessment may soothe some hospital attorneys, but these lists are not sufficient in and of themselves.

Why the rant about checklists? Isn’t it true that they are designed to be part of a balanced system of care and protection?

Sure, just as it’s true that Cap’n Crunch cereal can be part of a balanced breakfast. But if that’s all we feed the kids (and it’s tempting for busy moms), we’re asking for trouble.

Here are several potential shortcomings.

  • Accepting a false sense of security. Too many care staff and treatment teams (and a few psychiatrists) believe a low checklist score means they don’t have to worry. The checklist is not a substitute for clinical thought, review, interaction, and corroboration.
  • Using undertrained monitoring staff. Facilities and treatment teams may assume that the checklist can be administered by relatively unsophisticated employees, and that the written items decrease or eliminate the need for assessments by more senior staff (a poor way to cut staffing budgets).
  • Confusing clarity with validity. Checklists usually have well-defined parameters that appear to lead to unambiguous results, often in just a few typed lines on a page. Real symptoms, signs, feelings, and impulses aren’t so easy to understand.
  • Relying on other people’s work instead of following up with one’s own. Busy doctors and other decision-making clinicians sometimes rely solely on what they hear in treatment team meetings or read on brief checklists. Such communications are an important part of overall care and extend the doctor’s eyes and ears, but we must be careful about the validity and completeness of the information we use to make important decisions.
  • Relying on patients’ statements. This is a big one. I am continually amazed at the number of staff and clinicians whose safety decisions rely heavily on patients’ own statements that they’re not suicidal (or homicidal). Suicidal patients do not always tell nurses or clinicians the truth about their future plans. There are usually many other sources of information available for clinical decision making; use them.
  • Asking the patient a few short questions rather than communicating and corroborating with other history and observations. Risk assessment is not a 3-minute exercise.
  • Using the list once a day and assuming that’s enough. I recently reviewed a case in which a suicidality checklist was completed early each morning. The patient said he was not planning to kill himself, although the chart indicated that his mood and impulsivity fluctuated substantially from day to day, and sometimes from hour to hour. A few hours later, group leaders documented signs of suicide risk. Nevertheless, the early-morning checklist result was used more than 12 hours later as the basis for allowing him to work unaccompanied in a kitchen, where he drank a large quantity of cleaning solution and almost died.

“Actuarial” instruments use historical data alone to place patients or inmates into groups of greater or lesser concern. They don’t require a patient interview and usually depend largely on “static” characteristics (see below) of the person being assessed. Many have been validated on large groups; others have not. It is important to know whether or not a particular one (e.g., the VRAG, Static-99, or RRASOR) has been adequately validated on patients or inmates similar to the person being evaluated. Although such assessments are popular because of their simplicity, low cost, and concrete results, overreliance on them is common and recent studies have questioned their accuracy, particularly in correctional settings.5

Actuarial instruments convey a superficial impression of science and objectivity, which is welcome as we wrestle with clinical nuance and emotion (and particularly as clinicians and the institutions in which they work strive to reduce their liability). Factor weighting and formulas make the results seem legitimate when they may or may not be. Brevity and simplicity make some instruments so cheap and easy to use that they are routinely included in evaluations whether or not their results are valid. Once such a result becomes part of a patient’s or inmate’s official record, it is likely to influence future evaluators for a long time.

Static and Dynamic Factors. Actuarial instruments, and some checklists, rely heavily on assumptions that certain items in the patient’s history do not change (i.e., are “static”), that those items are relevant to the behavior being predicted or risk being assessed, and that current or future factors (such as treatment or supervision) will not affect risk. One’s gender and prior job history, for example, are immutable. Other items that are fairly indelible, yet have great effect on the results of many actuarial assessments, include such things as history of past violence or arrests and socioeconomic status. Although these are often statistically associated with accurate group predictions of assault or recidivism, static factors alone (as expressed in the instruments now available) should not be one’s primary basis for individual prediction.

A 35-year-old man had a history of sudden angry outbursts which had led to several serious assaults, arrests, and involuntary hospitalizations. Diagnosed with schizophrenia and schizoaffective disorder at various times over the years, he was noted to have a normal gross neurological exam and electroencephalogram (EEG).

He was eventually evaluated by a behavioral neurologist who performed a more specific kind of electroencephalography, found a subtle focal abnormality, and diagnosed an ictal instability which was successfully treated with a new medication. His EEG reverted to normal and he was discharged. He has had no more violent episodes during several years of follow-up.

Had the patient in the above vignette been assessed using solely static information, he would have found it very difficult to secure discharge. His violent history, repeated episodes, and the unpredictability of his episodes, coupled with his past diagnosis and gender, placed him in a very high violence risk category on any of a number of actuarial instruments, when in fact his risk was markedly decreased by dynamic factors such as a fresh clinical assessment, new findings, and a more appropriate treatment.
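
To make the static-factor limitation concrete, here is a purely hypothetical sketch in Python. The items, weights, and the patient’s numbers are all invented for illustration; they are not the scoring scheme of the VRAG, Static-99, RRASOR, or any other real instrument.

    # Hypothetical static-factor scorer: invented items and weights, NOT
    # those of any real actuarial instrument. It only illustrates that a
    # score built entirely from unchangeable history can never change.
    def static_risk_score(prior_assaults: int, prior_arrests: int,
                          male: bool, age_at_first_incident: int) -> int:
        score = 2 * prior_assaults                   # history is fixed...
        score += prior_arrests
        score += 2 if male else 0
        score += 1 if age_at_first_incident < 25 else 0
        return score                                 # ...so the score is fixed too

    # A patient like the one in the vignette (illustrative values) scores
    # identically before and after new findings and successful treatment,
    # even though his actual (dynamic) risk has fallen sharply.
    before = static_risk_score(prior_assaults=3, prior_arrests=4,
                               male=True, age_at_first_incident=22)
    after = static_risk_score(prior_assaults=3, prior_arrests=4,
                              male=True, age_at_first_incident=22)
    assert before == after                           # static in, static out

Because every input is historical, only an assessment that admits dynamic factors can register the kind of change the vignette describes.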

Foreseeability and Predictability

Negligence lawsuits, such as those alleging malpractice, often hinge on whether or not some damage (such as a suicide) was foreseeable. Colleagues often ask how anyone could be expected to predict a specific event as complicated as suicide, especially when it occurs days or weeks after the clinician saw the patient.

That’s not how the law defines “foreseeable” in most malpractice contexts. It doesn’t refer to predicting a specific act at a specific time, but rather to whether or not the doctor reasonably recognized, and adequately dealt with, a particular level of danger.

Moreover, we must deal with patients’ unpredictability. Unpredictability can be very dangerous. Only a fool would leave a small child alone with dangerous things, even if the chance of serious injury or death were relatively small. When seriously ill patients are unpredictable, clinicians and hospitals should be very cautious. When the environment into which they are placed is dangerous or unpredictable as well, that caution should increase.

The Last Word

Psychiatrists and other clinicians perform risk assessments in many clinical settings. The point is to use the right terms and goals (e.g., risk assessment rather than specific prediction) and to do it well.

FOOTNOTES

*“Clear and convincing” evidence is the burden of proof required of the State in almost all civil commitment proceedings.

†Note the important difference between first-line screeners, who take information and have a threshold for reporting it to others, and triage clinicians, who make rapid decisions about condition and referral for emergency care (and thus should be among the most experienced clinicians available).

REFERENCES

1. Haim R, Rabinowitz J, Lereya J, et al. Predictions made by psychiatrists and psychiatric nurses of violence by patients. Psychiatr Serv 2002; 53:622–4.
2. Hoptman MJ, Yates KF, Patalinjug MB, et al. Clinical prediction of assaultive behavior among male psychiatric patients at a maximum-security forensic facility. Psychiatr Serv 1999; 50:1461–6.
3. Faust D, Ziskin J. The expert witness in psychology and psychiatry. Science 1988; 241:31–5.
4. Norko M, Baranoski M. Understanding risk assessment. Course, Annual Meeting of the American Academy of Psychiatry and the Law, Newport Beach, CA, October, 2002.
5. Barbaree HE, Seto MC, Langton CM, et al. Evaluating the predictive accuracy of six risk assessment instruments for adult sex offenders. Crim Justice Behav 2001; 28:490–521.