Differentiating Standardized Clinical Assessment and Management Plans From Clinical Practice Guidelines

Farias, Michael MD, MS, MBA; Friedman, Kevin G. MD; Lock, James E. MD; Newburger, Jane W. MD, MPH; Rathod, Rahul H. MD

doi: 10.1097/ACM.0000000000000783
Letters to the Editor


Disclosures: None reported.


To the Editor:

We would like to thank Sox and Stewart1 for their extensive commentary on our Standardized Clinical Assessment and Management Plan (SCAMP) initiative,2 an effort that now extends to more than 60 SCAMPs touching nearly all aspects of medical care across several dozen network institutions. Their commentary focuses on the similarities between clinical practice guidelines (CPGs) and SCAMPs, contending that the major difference is that SCAMPs encourage deviations from the plan and thus use insights from variation to improve care. In one sense, Sox and Stewart are exactly right: SCAMPs are, at the outset, no different from CPGs in that they rely on established literature and “expert” opinion to create a standardized care algorithm. The major advance of SCAMPs is not, in fact, the encouragement of deviations as a tool for learning. Rather, it is the focused prospective collection of relevant clinical data, using targeted data statements that attempt to predict how the SCAMP will affect an episode of care. This collection of a limited data set, based on known uncertainties in an episode of care, is fundamentally Bayesian in nature. Because the data collection (including, but by no means limited to, deviation data) is tightly focused, the data can be collected and analyzed in a time frame unprecedented in medical care.

Until the first data are analyzed, a SCAMP is a CPG. After the clinicians receive their first analysis, everything changes: The CPG becomes a SCAMP. Clinicians learn from the deviations as well as the targeted information collected and can improve the SCAMP using persuasive data (not, as is the case for CPGs, expert opinion or “conclusive” data). This process continues and even accelerates, and clinicians are invariably surprised by prior clinical beliefs that are shown to be flawed based on real data from their own patients.

SCAMPs are becoming increasingly popular among thoughtful academic clinical leaders who practice medicine on a day-to-day basis. The innovation responsible for this acceptance goes beyond the assessment of deviations; it is the use of targeted data statements to direct data collection and analysis, predict what happens in real-life medicine, and permit a continuous-improvement process. The focus of the commentary by Sox and Stewart indicates that we have not done very well in communicating why SCAMPs work where other efforts have failed. SCAMPs provide a framework that facilitates the collection and analysis of targeted relevant clinical data. A well-designed SCAMP (note that not all SCAMPs have been well designed, although with six years and 50,000 patients’ worth of experience, we are getting better) will result in persuasive data that not only promote rapid improvement in care but also generate excitement among front-line clinicians eager to learn more. This excitement is a fundamental requirement if we are to change our health care system for the better.

Michael Farias, MD, MS, MBA

Pediatric cardiology fellow, Department of Cardiology, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts.

Kevin G. Friedman, MD

Staff cardiologist, Department of Cardiology, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts.

James E. Lock, MD

Cardiologist-in-chief and professor of pediatrics, Department of Cardiology, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts.

Jane W. Newburger, MD, MPH

Associate cardiologist-in-chief and professor of pediatrics, Department of Cardiology, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts.

Rahul H. Rathod, MD

Staff cardiologist, Department of Cardiology, Boston Children’s Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts; Rahul.Rathod@childrens.harvard.edu.


References

1. Sox HC, Stewart WF. Algorithms, clinical practice guidelines, and standardized clinical assessment and management plans: Evidence-based patient management standards in evolution. Acad Med. 2015;90:129–132.
2. Farias M, Friedman KG, Lock JE, Newburger JW, Rathod RH. Gathering and learning from relevant clinical data: A new framework. Acad Med. 2015;90:143–148.
© 2015 by the Association of American Medical Colleges