HEALTH INFORMATION TECHNOLOGY (HIT) can help close the gap between guideline-recommended and actual care (Bodenheimer & Grumbach, 2003; Institute of Medicine, 2001). Despite incentives for HIT adoption under the Health Information Technology for Economic and Clinical Health (HITECH) Act in the United States, small- to medium-sized independent practices are lagging behind in its adoption, including the use of electronic health records (EHRs) (Torda et al., 2010). There are barriers to the adoption and meaningful use of EHRs, including cost, time, perceived lack of usefulness, data transition, facility location, and implementation issues (Kruse et al., 2016). In contrast, facilitators for EHR adoption include efficiency, quality, data access, perceived value, ability to transfer information, and incentives (Kruse et al., 2016). The combination of clinical decision support (CDS) systems and EHRs may facilitate HIT adoption and improve documentation and quality of care (Berner, 2009). EHRs can facilitate the integration of CDS systems into clinical practice, which in turn enables EHR users to access guidelines at the point of care (POC) (National Quality Forum, 2010).
Reported benefits of EHR-based CDS systems for the management of chronic diseases, such as diabetes, are mixed and often incomplete. Successful CDS systems must provide the "right information to the right person in the right format through the right channel at the right time" (Berner, 2009; Karsh, 2009; Osheroff et al., 2007). As stated by the Patient-Centered Primary Care Collaborative, the patient-centered medical home (PCMH) model, which was introduced to improve primary care, encompasses patient-centered, comprehensive, coordinated care; accessible services; and quality and safety (Patient-Centered Primary Care Collaborative, 2018). Participation is being actively encouraged using financial incentives (Patient-Centered Primary Care Collaborative, 2018). More than 12,000 practices, with more than 60,000 clinicians, are recognized by the National Committee for Quality Assurance's (NCQA) PCMH evaluation program, and more than 100 payers support this recognition through financial incentives or coaching (NCQA, 2018). The team-based PCMH model coupled with CDS may help improve the management of chronic diseases because it allows CDS to be directed to staff who can identify and address gaps in care (Rittenhouse & Shortell, 2009).
The objective of the present real-world study was to examine the impact of POC CDS on diabetes management in small- to medium-sized independent primary care practices that had adopted the PCMH model of care (Rittenhouse & Shortell, 2009). We used quantitative measures to determine whether patients enrolled in primary care practices that utilize CDS have better glycemic and lipid control than those from practices without CDS. We used qualitative measures to understand the main facilitators and barriers to implementing CDS and achieving optimal diabetes management.
Study design, setting, and practice eligibility
DECIDE (DEcision support in the context of the patient-Centered medical home to Improve management of diabetes for primary care offices in DElaware) was a prospective, 1-year, cluster-randomized, longitudinal study. The main setting was small- to medium-sized independent primary care practices in Delaware that were already participating in statewide PCMH projects, of which there were 39 at the time the study was initiated. In addition, 10 offices in Maryland that were in a joint Delaware-Maryland Accountable Care Organization (which assisted offices to implement PCMH principles) were also eligible. Only offices that already had EHR systems in place were eligible (all 49 at study start); this enabled them to focus on implementation of CDS during the study, and ensured practices would have historical data in their EHRs that could be used for pre- and postcomparisons. There were 8 pediatric-only practices, which were excluded. Practices already using a robust POC CDS system were ineligible for inclusion in the study, although, of the offices that responded to the recruitment call, none used robust CDS systems. Practice eligibility required that EHRs could connect with the POC CDS and that the study team would work with the primary care practice, their information technology (IT) staff, and the POC CDS provider to determine interoperability—“the ability of computer systems or software to exchange and make use of information.” Overall, 41 practices were eligible for inclusion, of which 15 agreed to participate, and 12 were randomized. The research team obtained approval for the study from the Quorum Institutional Review Board (https://www.quorumreview.com).
Patients included were aged 18 to 75 years with a diagnosis of diabetes (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM], diagnosis codes: 250) (Centers for Disease Control and Prevention, 2018) who were “active” at a participating practice. Although the focus of the intervention was type 2 diabetes, we did not exclude persons with type 1 or other diabetes since the primary care offices did not distinguish type 2 diabetes from type 1 or other diabetes in their ICD coding. Within this population, there was a subgroup of high-risk patients with ischemic vascular disease, including those with a diagnosis of coronary artery disease (ICD-9-CM diagnosis codes: 410.00-414.99), cerebrovascular disease (430.00-438.99), peripheral vascular disease (443.89-443.99), or aortic aneurysm (441.00-441.99). Of note, the United States switched from ICD-9 to ICD-10 during the study, but the software was able to transpose between these 2 editions. “Active” patients were those who were listed as active in the EHR and who had 1 or more office visits within the 18 months prior to the study. Patients were not required to have an office visit during the study since part of the intention of the intervention was to bring patients to the office for care.
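The high-risk subgroup definition above reduces to range checks on numeric ICD-9-CM diagnosis codes. A minimal sketch of such a screen follows; the function name and code format are illustrative, not taken from the study software:

```python
# ICD-9-CM ranges defining the high-risk ischemic vascular disease subgroup
# (hypothetical sketch; ranges are from the study's eligibility criteria).
HIGH_RISK_RANGES = [
    (410.00, 414.99),  # coronary artery disease
    (430.00, 438.99),  # cerebrovascular disease
    (443.89, 443.99),  # peripheral vascular disease
    (441.00, 441.99),  # aortic aneurysm
]

def is_high_risk(icd9_codes):
    """Return True if any diagnosis code falls within a high-risk range."""
    for code in icd9_codes:
        try:
            value = float(code)
        except ValueError:
            continue  # skip non-numeric codes (eg, V and E codes)
        if any(lo <= value <= hi for lo, hi in HIGH_RISK_RANGES):
            return True
    return False

print(is_high_risk(["250.00", "412.00"]))  # coronary artery disease code -> True
```

A production system would also need to handle the ICD-10 codes introduced mid-study, which the study software transposed automatically.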
Twelve practices were randomized into 2 groups of 6 using clustered randomization to receive either POC CDS systems implemented as an add-on product to their EHRs (CDS intervention group) or no intervention (control group). Practices in the control group had software installed to allow collection of data for the study, but not to receive active clinical intervention. This ensured that data were standardized for the 2 groups and prevented bias due to differences in data collection or definitions. Clustered randomization was based on practice size (no minimum) to reduce bias by office and by US state. This required collection of preenrollment surveys that provided precise information on practice characteristics. There was 1 cluster of practices from the state of Maryland and 5 clusters of practices from the state of Delaware. Following randomization, all intervention offices had a baseline visit by the principal investigator and project manager (or assistant project manager) before implementation of the CDS system.
POC CDS was provided by third-party software (the Crimson Care Registry, The Advisory Board Company, Washington, DC), which, at the time of the study, was being used in more than 400 unique practices involving more than 3 million patient encounters. Use of this third-party CDS software system allows uniform protocols to be applied across multiple unaffiliated primary care practice settings using a variety of EHR products, thereby eliminating any bias related to a particular EHR. Protocols within the CDS were aligned with the 2012 joint guidelines of the American Diabetes Association/European Association for the Study of Diabetes (Inzucchi et al., 2012), which were the current guidelines at the time of the study. All practices were required to include protocols for glycated hemoglobin A1c (A1C) and low-density lipoprotein cholesterol (LDL-C). The A1C protocol was 1 or more A1C tests within the past 6 months, with the most recent A1C (within 12 months) less than 7.0% (or <7.5% for patients with ischemic vascular disease, chronic renal disease, or other microvascular complications). The LDL-C protocol was completion of 1 or more lipid profiles within the past 12 months, with the most recent LDL-C level (within 12 months) less than 100 mg/dL. Practices could also choose to include additional protocols in other clinical decision areas, including foot and eye examinations, microalbumin testing, pneumococcal and influenza immunizations, and preventive care.
The CDS system generated reports for the practice staff before each appointment that contained patient-specific recommendations at the POC, indicating when tests were needed or diabetes control was suboptimal. A sample report is shown in Figure 1. The CDS system was goal driven rather than just process driven. For example, if a patient with diabetes was being seen for an office visit (regardless of the reason for the visit) and the EHR did not have evidence of an LDL-C being completed within the previous year for that patient, the report would suggest that the patient should have a lipid panel ordered. If the most recent LDL-C was more than 100 mg/dL, the report would suggest that the patient's lipids were not optimally controlled. Each office decided which recommendations were directed to the clinicians and which to other staff. Recommendations directed to other staff could then be actioned by them, saving clinicians' time. The CDS system additionally generated retrospective reports regarding quality of care for audit and feedback, showing the percentage of patients who were overdue for tests or services, or whose disease control was suboptimal.
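The goal-driven alert logic described above can be sketched as follows. The thresholds mirror the study protocols (A1C test within 6 months and most recent value below a personalized goal; lipid profile within 12 months and LDL-C below 100 mg/dL), but the function names and inputs are illustrative rather than drawn from the Crimson Care Registry software:

```python
from datetime import date, timedelta

def a1c_alerts(last_a1c_date, last_a1c_value, high_risk, today):
    """Alerts for the A1C protocol: test within 6 months, value below goal."""
    alerts = []
    goal = 7.5 if high_risk else 7.0  # personalized goal per the study protocol
    if last_a1c_date is None or (today - last_a1c_date) > timedelta(days=182):
        alerts.append("A1C test due (none within past 6 months)")
    elif last_a1c_value >= goal:
        alerts.append(f"A1C {last_a1c_value}% above goal of <{goal}%")
    return alerts

def ldl_alerts(last_ldl_date, last_ldl_value, today):
    """Alerts for the LDL-C protocol: lipid profile within 12 months, LDL-C < 100."""
    alerts = []
    if last_ldl_date is None or (today - last_ldl_date) > timedelta(days=365):
        alerts.append("Lipid profile due (none within past 12 months)")
    elif last_ldl_value >= 100:
        alerts.append(f"LDL-C {last_ldl_value} mg/dL above goal of <100 mg/dL")
    return alerts

# A patient seen on 2024-06-01 with an A1C of 7.2% measured a month earlier:
print(a1c_alerts(date(2024, 5, 1), 7.2, False, date(2024, 6, 1)))
# -> ['A1C 7.2% above goal of <7.0%']
```

The same check returns no alert for a high-risk patient, whose personalized goal is <7.5%, which is what makes the logic goal driven rather than purely process driven.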
Quantitative outcome measurements
Quantitative data were collected over a 1-year follow-up period and comparisons made between the CDS intervention and control groups. The primary endpoint was reduction in A1C. Secondary endpoints were reduction in LDL-C and the percentages of patients who achieved their personalized A1C goal (<7.5% for patients with microvascular complications or <7.0% for those without), A1C less than 7.0%, A1C 9.0% or less (NCQA, 2016), and LDL-C less than 100 mg/dL (Institute of Medicine, 2001).
Qualitative evaluation methods
Barriers to and facilitators of successful implementation of CDS that achieved optimal diabetes management in the context of the PCMH were examined qualitatively by interviewing clinicians and staff in the CDS group. The question protocol used to conduct the interviews, provided in the Supplemental Digital Content, available at: http://links.lww.com/JACM/A84, included 15 questions (mainly relating to current practices for diabetes care and whether diabetes care should be incorporated into appointments for other conditions) that were asked at baseline and endpoint, and 15 questions on the CDS system that were asked only at endpoint. The baseline questionnaire was developed by the principal investigator, in consultation with an expert in survey development from the University of Colorado, Denver. The endpoint questionnaire was also developed by the principal investigator. Baseline interviews were conducted approximately 1 to 2 months before CDS (or control) implementation in each office; endpoint interviews were conducted approximately 1 to 2 months after the 1-year follow-up period.
At least 1 key staff member from each office was interviewed for the baseline survey for the intervention and control arms. The offices decided who was most appropriate to answer the questionnaires. All offices chose at least 1 physician and 1 staff member. At baseline, no offices chose to have a second physician or staff member answer questions. The endpoint survey only included the intervention offices. In 4 offices, at least 1 provider and 1 key staff member were interviewed; in 1 office, only 1 physician was interviewed. At endpoint, 2 offices chose to have a second physician answer the questionnaire and 2 offices chose to have a second staff member answer the questionnaire. Interviews included discussion of patient and system factors that impeded optimal care, using the concept of “patient-centered care” (Inzucchi et al., 2012), and how the CDS tools helped overcome barriers.
Patient data were de-identified prior to extraction from the practice records. The sample size calculation was based on an assumption of 12 primary care practices, with 30 patients with diabetes per practice, required to provide 90% power to detect a 0.3% difference in A1C with a standard deviation of 0.8 and assuming an intraclass correlation of 5%. Bivariate analyses were conducted to characterize the data and descriptively compare outcome measures for patients in the 2 groups. χ2 and t tests were used to analyze categorical and numeric variables, respectively. Multivariate regression analyses of glycemic and lipid control were used to control for potential confounding factors, including baseline glycemic control and baseline LDL-C, as appropriate, clustering of patients with clinicians, and clustering of clinicians within practices.
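As a rough illustration of the clustering adjustment in such a calculation: an intraclass correlation inflates variance by a "design effect," shrinking the effective sample size. The arithmetic below uses the parameters stated above; the original power calculation may have used different software or additional assumptions:

```python
# Parameters stated in the text (assumed values, not recomputed study inputs)
clusters_per_arm = 6   # 12 practices randomized into 2 arms
cluster_size = 30      # assumed patients with diabetes per practice
icc = 0.05             # assumed intraclass correlation

# Clustering inflates the variance of group means by the design effect,
# so n enrolled patients behave like n / design_effect independent patients.
design_effect = 1 + (cluster_size - 1) * icc
n_effective = clusters_per_arm * cluster_size / design_effect

print(round(design_effect, 2))  # 2.45
print(round(n_effective, 1))    # 73.5 effective patients per arm (of 180 enrolled)
```

This deflation is why cluster-randomized designs need far more patients than individually randomized trials to detect the same difference.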
Staff at each site were interviewed and a research assistant summarized the interview transcripts. Dr Gill reviewed and interpreted all results and compiled the final summary. No formal qualitative analysis was undertaken.
Of the 15 offices (67 clinicians) that agreed to participate, 3 were excluded prior to randomization due to failed interoperability (ie, their EHRs were unable to connect with the CDS). Two offices, where failed interoperability was not detected until after randomization, were subsequently excluded from the analysis. Of the remaining 10 offices (52 clinicians), 5 (23 clinicians) were randomized as controls. The 10 offices included 49,970 active patients aged 18 to 75 years, of whom 6386 met eligibility criteria for analysis. Of these, 4484 patients were in the CDS group and 1902 in the control group. This imbalance was due to the clustered randomization process: although offices were cluster randomized based on number of clinicians, by chance, the control offices had fewer patients per clinician and fewer patients with diabetes.
Patient demographics and baseline characteristics are summarized in the Table. Patients in the CDS group were significantly older with better glycemic control and lower LDL-C than those in the control group. The 2 groups were similar in terms of gender and diabetes complications.
In the A1C subgroup (2041 CDS and 723 control patients with baseline and follow-up A1C), there was no significant difference in A1C change from baseline between the 2 groups (CDS −0.08% ± 1.15% vs control −0.14% ± 1.51%; P = .41). After controlling for baseline differences, the CDS group had a greater reduction in A1C, with an adjusted between-group difference of 0.12% (95% confidence interval [CI] 0.02-0.22; P = .02; Figure 2A).
In the LDL-C subgroup (2793 CDS and 931 control patients with baseline and follow-up LDL-C), patients in the CDS group had a significantly greater decrease in LDL-C (−4.35 ± 24.54 mg/dL vs −1.70 ± 26.16 mg/dL; P = .0067). The CDS group also had a significantly greater reduction in LDL-C in multivariate analysis, with an adjusted between-group difference of 3.57 mg/dL (95% CI 1.80-5.34; P < .0001; Figure 2B).
In the A1C subgroup, patients in the CDS group had 52% greater odds of achieving personalized A1C goals (adjusted odds ratio [aOR] 1.52, 95% CI 1.24-1.86; P < .0001) and 56% greater odds of achieving A1C less than 7.0% (aOR 1.56, 95% CI 1.27-1.91; P < .0001) than those in the control group (Figure 3). There was no difference in the percentage of patients with A1C 9.0% or less (aOR 1.07, 95% CI 0.75-1.52; P = .71). In the LDL-C subgroup, the CDS group had 34% greater odds of achieving the LDL-C goal (aOR 1.34; 95% CI 1.11-1.61; P = .002) (Figure 3).
At baseline, few offices had any system in place for incorporating a team-based approach into clinical decision-making. All practices, regardless of randomization assignment, reported that the responsibility for ordering or implementing a test lay solely with the clinician (physician, nurse practitioner, or physician assistant). However, by the end of the study, all 5 practices in the CDS group reported that staff checked the need for an A1C test. Two practices, neither of which had been doing POC A1C testing prior to the study, implemented standing orders for the medical assistant to conduct a POC A1C test at patient visits, if necessary. During qualitative interviews, participants agreed that CDS and PCMH were important mechanisms to improve quality of care. At endpoint, 4 of the 5 intervention offices reported using the automated alerts provided by the intervention (see the top left of Figure 1 for an example of the alerts produced prior to the appointment), at least to some extent, but 3 reported that inaccuracies limited the alerts' value. These inaccuracies stemmed primarily from missing records of testing performed by specialists, such as endocrinologists, or from errors in the communication of data between the EHR and the CDS. They created additional work to confirm results in patient charts, which led 1 practice to abandon the use of alerts halfway through the study. Barriers that impeded the full implementation of the CDS system included time and reimbursement, with insurers not paying for the staff and time required to implement team-based care. Despite this, participants were generally positive that, as CDS systems become more accurate and payers compensate adequately for CDS and PCMH, they would have a positive impact on quality of care.
The DECIDE study examined whether CDS has a positive impact on quality measures for adults with diabetes, when used in the context of the PCMH in small- to medium-sized primary care practices. Quantitative measures showed that the use of the electronic CDS system resulted in small but statistically significant reductions in both A1C and LDL-C. There were also statistically significant, clinically meaningful increases in the odds of achieving personalized A1C and LDL-C goals in the CDS group.
Prior randomized controlled studies of electronic CDS systems have shown mixed results for diabetes care quality measures. Several studies have shown improvements in providers ordering appropriate tests (Demakis et al., 2000; Lobach & Hammond, 1997; Meigs et al., 2003; Montori et al., 2002), but no improvements in metabolic control (Meigs et al., 2003; Montori et al., 2002). A more recent randomized trial found that EHR-based CDS resulted in significantly better mean reduction in A1C (−0.26%) and better systolic blood pressure control, but no improvement in LDL-C (O'Connor et al., 2011). However, this study was conducted in a single large health care system with physicians using a common EHR. The previous studies mentioned were also conducted either in single large health care systems (Demakis et al., 2000; Montori et al., 2002) or in single clinics (Lobach & Hammond, 1997; Meigs et al., 2003).
What is unique about the present study is that it was conducted in independent small- to medium-sized primary care practices, which is where the majority of primary care is delivered in the United States (Kane, 2017; Mostashari, 2016). Previous studies of electronic CDS in these settings have not necessarily shown favorable results (Gill et al., 2009). One reason could be that previous studies of CDS have targeted the alerts and reminders to clinicians, who are often too busy to attend to these alerts (Nanji et al., 2018; Schnipper et al., 2008). Our study targeted the alerts (eg, A1C test due) to nonclinicians (ie, nurses and medical assistants) in the context of the PCMH. These staff, who routinely work alongside a clinician for each patient visit, received the alerts prior to the patient arriving. The PCMH relies on a "team-based" approach, where care is delivered by staff as well as physicians, and occurs both during and outside of office visits (Rittenhouse & Shortell, 2009). This study suggests that, when done in the context of the PCMH, electronic CDS can improve diabetes care even in small, independent practices. Such improvements are increasingly important in the era of value-based and pay-for-performance reimbursement.
Although this study did show statistically significant improvements in diabetes care, one might question whether mean reductions of 0.12% (A1C) and 3.6 mg/dL (LDL-C) are clinically meaningful. There are several reasons why the results may not have been more robust. The present study was conducted in real-world conditions in a broad patient population, in which delivery and implementation of the intervention required complex logistics and a high level of technical sophistication. As a result, not all data were captured in the EHR, and critical information involving testing by another clinician was not always transferred to the primary care office. Such issues sometimes resulted in CDS alerts not accurately reflecting the most recent test results, despite comprehensive efforts by the investigators to assist practice teams and conduct quality checks. Interoperability problems between the office EHR and the study CDS system also led to 5 of 15 initially recruited offices being ineligible, and restricted some outcome measures (eg, the "likelihood of having tests up to date" could not be assessed). This suboptimal interoperability is not an isolated issue in the context of the present study; rather, it is a major barrier to the meaningful implementation of HIT to support clinical decision-making across the United States (Samal et al., 2016). Finally, it should be noted that, although the absolute differences in A1C and LDL-C between these 2 groups were small, patients in the CDS group had over 50% greater odds of achieving personalized A1C goals or A1C less than 7.0%, and 34% greater odds of achieving their LDL-C goal.
Our qualitative findings point to areas where the process could be modified to better implement the CDS system and potentially yield more robust improvements. Inadequate payment for PCMH from insurers creates barriers to practices fully implementing electronic CDS and team-based care, which likely diminishes its impact. Physicians and staff in our study felt that insurers were not paying for staff training or for the time to run retrospective reports and contact patients. This could be why retrospective CDS reports were used markedly less than POC alerts, and why the use of retrospective reports did not increase, and in 1 case declined, over the study. However, with more financial support, use of EHR-based CDS systems could grow in independent practices, which could help overcome barriers to optimal diabetes management. Meigs and Solomon reported that EHR systems needed to be more user-friendly and adaptable to the workflow of individual clinics, and that clinicians believe EHR use does not improve the quality of patient care (Meigs & Solomon, 2016); the latter belief is not supported by our findings. We also noted barriers related to office workflow; however, 1 reason for implementing CDS in the context of the PCMH is that a team-based approach increases the likelihood that processes will be completed. This system relies on nonclinician staff taking responsibility for basic decisions, with complex decisions reserved for clinicians. Our interviews suggested that practices moved toward increased decision-making by ancillary staff, including POC A1C testing based on standing orders in some offices. These team-based changes required time and training, indicating that appropriate compensation is needed to support the PCMH model. The system must also ensure that results are captured appropriately in the EHR, and ultimately the CDS; otherwise, its utility for clinical decision-making and for determining quality measures is compromised.
Another limitation of this study is the imbalance between the study groups, including larger numbers of patients with diabetes and significantly better baseline A1C control in the CDS group versus the control group. These differences could reflect different practice patterns. For example, clinicians in the CDS group may have been more proactive with diabetes care even prior to the study. Although these imbalances were adjusted for using multivariate analyses, there may have been some unmeasured differences that we were not able to adjust for. Furthermore, patients with both baseline and follow-up A1C measurements may have been more motivated than those with one or both measurements missing, although this is likely to have affected both groups similarly. In addition, it was not possible to quantify how often the CDS was actually used, nor to identify whether patients moved between practices (and therefore potentially between systems).
However, the main limitation of this study was the amount of missing A1C and LDL-C data. Although some of these missing data were due to patients not undergoing the relevant tests at suitable time points (as would be expected in a real-world study), most were due to problems of interoperability. A lack of interoperability also resulted in the exclusion of 5 offices. These findings are valuable as they highlight potential problems that should be considered prior to implementing a CDS system. It should be recognized that the implementation of a successful CDS system is complex, and excellent IT support is required. External support may be needed to help small offices without an expert IT team to build and customize EHR and CDS systems. In addition, all staff will require proper training to use the system correctly, which has obvious cost implications. Lastly, the data need to be entered correctly into the EHR in order for the CDS system to recognize them and use them successfully. Overall, the introduction of CDS systems into small, independent practices can be hampered by a lack of resources, as well as the suboptimal interoperability across IT systems. Hence, further work aimed at making these systems more interoperable could be beneficial, especially in the setting of small independent practices that are the backbone of the US health care system.
This prospective, cluster-randomized, real-world study supports electronic CDS for improving diabetes management in the context of the team-based PCMH care model. However, the study has also identified difficulties in implementing such a system in small- to medium-sized practices, and may thereby provide valuable information to those considering setting up such a system.
Berner E. S. (2009). Clinical decision support systems: State of the art. Agency for Healthcare Research and Quality (AHRQ) publication no. 09-0069-EF. Rockville, MD: AHRQ.
Bodenheimer T., Grumbach K. (2003). Electronic technology: A spark to revitalize primary care? JAMA, 290(2), 259–264.
Centers for Disease Control and Prevention. (2018). International Classification of Diseases, Ninth Revision, Clinical Modification. Retrieved February 28, 2018, from https://www.cdc.gov/nchs/icd/icd9cm.htm
Demakis J. G., Beauchamp C., Cull W. L., Denwood R., Elsen S. A., Lofgren R., Henderson W. G. (2000). Improving residents' compliance with standards of ambulatory care: results from the VA Cooperative Study on Computerized Reminders. JAMA, 284(11), 1411–1416.
Gill J. M., Chen Y. X., Glutting J. J., Diamond J. J., Lieberman M. I. (2009). Impact of decision support in electronic medical records on lipid management in primary care. Population Health Management, 12(5), 221–226.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academy Press.
Inzucchi S. E., Bergenstal R. M., Buse J. B., Diamant M., Ferrannini E., Nauck M., Matthews D. R. (2012). Management of hyperglycemia in type 2 diabetes: A patient-centered approach: Position statement of the American Diabetes Association (ADA) and the European Association for the Study of Diabetes (EASD). Diabetes Care, 35(6), 1364–1379.
Kane C. K. (2017). Policy research perspectives updated data on physician practice arrangements: Physician ownership drops below 50 percent. Chicago, IL: American Medical Association.
Karsh B.-T. (2009). Clinical practice improvement and redesign: How change in workflow can be supported by clinical decision support. Agency for Healthcare Research and Quality (AHRQ) publication no. 09-0054-EF. Rockville, MD: AHRQ.
Kruse C. S., Kothman K., Anerobi K., Abanaka L. (2016). Adoption factors of the electronic health record: A systematic review. JMIR Medical Informatics, 4(2), e19.
Lobach D. F., Hammond W. E. (1997). Computerized decision support based on a clinical practice guideline improves compliance with care standards. American Journal of Medicine, 102(1), 89–98.
Meigs J. B., Cagliero E., Dubey A., Murphy-Sheehy P., Gildesgame C., Chuey H., Nathan D. M. (2003). A controlled trial of web-based diabetes disease management: The MGH diabetes primary care improvement project. Diabetes Care, 26(3), 750–757.
Meigs S. L., Solomon M. (2016). Electronic health record use a bitter pill for many physicians. Perspectives in Health Information Management, 13, 1d.
Montori V. M., Dinneen S. F., Gorman C. A., Zimmerman B. R., Rizza R. A., Bjornsen S. S., ... Translation Project Investigator Group. (2002). The impact of planned care and a diabetes electronic management system on community-based diabetes care: The Mayo Health System Diabetes Translation Project. Diabetes Care, 25(11), 1952–1957.
Mostashari F. (2016). The paradox of size: How small, independent practices can thrive in value-based care. The Annals of Family Medicine, 14(1), 5–7. doi:10.1370/afm.1899.
Nanji K. C., Seger D. L., Slight S. P., Amato M. G., Beeler P. E., Her Q. L., Bates D. W. (2018). Medication-related clinical decision support alert overrides in inpatients. Journal of the American Medical Informatics Association, 25(5), 476–481.
National Quality Forum. (2010). Driving quality and performance measurement—a foundation for clinical decision support: A consensus report. Washington, DC: Author.
O'Connor P. J., Sperl-Hillen J. M., Rush W. A., Johnson P. E., Amundson G. H., Asche S. E., Gilmer T. P. (2011). Impact of electronic health record clinical decision support on diabetes care: A randomized trial. The Annals of Family Medicine, 9(1), 12–21.
Osheroff J., Teich J., Middleton B., Steen E., Wright A., Detmer D. (2007). White paper: A roadmap for national action on clinical decision support. Journal of the American Medical Informatics Association, 14(2), 141–145.
Rittenhouse D. R., Shortell S. M. (2009). The patient-centered medical home: Will it stand the test of health reform? JAMA, 301(19), 2038–2040.
Samal L., Dykes P. C., Greenberg J. O., Hasan O., Venkatesh A. K., Volk L. A., Bates D. W. (2016). Care coordination gaps due to lack of interoperability in the United States: A qualitative study and literature review. BMC Health Services Research, 16, 143.
Schnipper J. L., Linder J. A., Palchuk M. B., Einbinder J. S., Li Q., Postilnik A., Middleton B. (2008). "Smart forms" in an electronic medical record: Documentation-based clinical decision support to improve disease management. Journal of the American Medical Informatics Association, 15(4), 513–523.
Torda P., Han E. S., Scholle S. H. (2010). Easing the adoption and use of electronic health records in small practices. Health Affairs (Millwood), 29(4), 668–675.