The procedure of estimating each person's ability (ie, the location of each person's logit score on each quality subscale) was conducted with WINSTEPS software.27 First, category frequencies and average measures were examined for each response option. Then, rating scale step calibrations and category fit statistics were assessed. Step calibrations should increase by at least 1.4 logits but by no more than 5.0 logits.27 Subsequently, the probability curve for each response category was examined; each response category should exhibit a probability of at least 0.5 of being selected. Categories were collapsed to create uniform frequency distributions when these requirements were not met. Furthermore, Infit and Outfit mean-square (MNSQ) statistics were evaluated; appropriate values range between 0.6 and 1.4 for rating scale data27 and between 0.8 and 1.3 for dichotomous data.28 Finally, model reliability was estimated. The person scores from the best-fitting model were used as independent variables in the subsequent analyses. All Rasch analyses incorporated the sampling weights through the PWEIGHT command in WINSTEPS. Exploratory bivariate analyses (eg, χ2 tests, tests of correlations) between the dependent variables and the independent and control variables were conducted using the survey weights. Multivariate weighted logistic regressions were then fitted with all independent variables and those control variables that showed a significant relationship (P < .1) in the exploratory analyses. A significance level of .01 (2-tailed) was used to account for the large sample size. Stata 11.0 was used to perform the logistic regression analyses, and Taylor series linearization was used to obtain variance estimates.
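The category diagnostics described above can be sketched outside of WINSTEPS. The snippet below is a minimal illustration only, assuming Andrich's rating-scale model with hypothetical item difficulty and step calibrations (it is not the software's implementation): it computes each category's probability curve and checks whether every category peaks above 0.5 somewhere along the latent continuum.

```python
import numpy as np

def category_probabilities(theta, delta, taus):
    """Andrich rating-scale model: probability of each of the
    len(taus) + 1 response categories at ability theta, for an item
    with difficulty delta and step calibrations taus (in logits)."""
    steps = theta - delta - np.asarray(taus)
    # Cumulative log-numerators; the empty sum for category 0 is 0.
    logits = np.concatenate(([0.0], np.cumsum(steps)))
    expv = np.exp(logits - logits.max())  # subtract max for stability
    return expv / expv.sum()

def category_peaks(delta, taus, grid=np.linspace(-8, 8, 801)):
    """Peak probability of each category over a grid of abilities.
    If any category never reaches 0.5, the criterion in the text
    fails and adjacent categories are candidates for collapsing."""
    probs = np.array([category_probabilities(t, delta, taus) for t in grid])
    return probs.max(axis=0)

# Hypothetical thresholds advancing by 2 logits (within 1.4-5.0):
# every category's curve peaks above 0.5.
ok = np.all(category_peaks(delta=0.0, taus=[-2.0, 0.0, 2.0]) > 0.5)

# Crowded thresholds: the middle categories never reach 0.5,
# signaling that categories should be collapsed.
crowded_ok = np.all(category_peaks(delta=0.0, taus=[-0.3, 0.0, 0.3]) > 0.5)
```

With well-separated step calibrations `ok` is `True`, whereas `crowded_ok` is `False`, mirroring the category-collapsing rule applied in the analysis.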
A total of 19 738 persons 18 years and older completed telephone interviews for the CWF survey in 11 countries. The response rates for each country varied from a high of 42% in Sweden to a low of 9% in the Netherlands. For this study, participants who qualified and answered any of the 3 items measuring the outcomes (ie, medical error, medication error, or laboratory error) were included in the analysis. After excluding respondents with missing data on the independent variables, the final study sample included 9872 persons. Demographic information of subjects included in this study can be found in the Appendix (see Supplemental Digital Content, available at: http://links.lww.com/QMH/A4).
Development and assessment of the perceived quality-of-care independent variables
Several indicators were used to determine whether the data fit the Rasch model. The weighted counts and percentages displayed in Table 3 for the Likert-type scale indicate that all 4 categories were fully used by the respondents because each category satisfied the criterion of a minimum of 10 observations. The observed average measure for a category represents the average level of endorsement (eg, level of satisfaction) of the persons who selected that category. Table 3 shows that the observed average measures increased with the category values. The Infit and Outfit MNSQ for all 4 categories fell within the accepted range of 0.6 to 1.4, indicating a good fit to the structure of the rating scale. The category thresholds and category measures increased in ascending order, indicating that respondents were able to properly differentiate the ordinal scale configuration. Table 3 also displays the summary of category functioning parameters for the 4 dimensions with dichotomous response options. The Infit and Outfit MNSQ for the dichotomous response options in all but the Communication of Care dimension fell within accepted ranges, indicating good fit to the dichotomous scale structure. A second Rasch analysis was conducted after reviewing the fit statistics of each of the items in the Communication of Care dimension, which resulted in the deletion of the item “There was a time when you received a new prescription medication, and were not sure what it was for or when or how to take it.” After this second analysis, the Outfit MNSQ for the dichotomous scale improved from 2.00 to 1.09.
Item quality was assessed using the item fit statistics. Table 4 displays the item fit statistics for each item in the 5 dimensions. “Measure” corresponds to the estimate of the item difficulty, or “difficult to endorse,” parameter. This represents the location of an item along the latent trait continuum; the greater the value of the item difficulty, the lower the probability of the item being endorsed. An Infit and Outfit MNSQ range of 0.6 to 1.4 is considered appropriate. In the Access to Care dimension, the item with the lowest measure was “Have one regular practice to obtain medical care.” This means that respondents were most likely to agree with this item. The hardest item to agree with was “There is existence of clinical staff (other than a doctor) involved in my health care.” For this item, the Outfit MNSQ is above the accepted range (>1.4), which means that it may be capturing some measurement noise. However, this item was retained in the model because the Infit MNSQ and the correlation values are within acceptable ranges.
Evidence showed that the item “The patient received a new prescription medication, and was not sure what it was for or when or how to take it” did not function optimally as an assessment in the Communication of Care dimension. Its Outfit MNSQ exceeded not only the 1.4 threshold but also 2.0, which suggests that the item may distort or degrade the measurement system. In addition, the item correlated poorly (0.2) with the other items in this dimension. Although respondents were more likely to endorse this item, it may not be part of the same construct. For these reasons, this item was excluded from the model. Examination of the results indicates that all items in both the Coordination of Care and Providers' Respect for Patients' Preferences dimensions met all criteria of the Rasch model. Finally, the person reliability indexes for the sample were below the minimum accepted value (0.6) for all dimensions (data not shown), suggesting that the items may not distinguish well between higher and lower levels of perceived quality of care.
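The Infit and Outfit MNSQ statistics used throughout this assessment can be computed directly from standardized residuals. The sketch below, for a single dichotomous Rasch item with simulated data (not the study's data), shows the conventional formulas: Outfit is the unweighted mean of squared standardized residuals, while Infit weights them by the model variance, making it less sensitive to outlying responses.

```python
import numpy as np

def rasch_fit_statistics(responses, abilities, difficulty):
    """Infit and Outfit mean-square statistics for one dichotomous
    Rasch item, given 0/1 responses, person abilities, and the item
    difficulty (abilities and difficulty in logits)."""
    p = 1.0 / (1.0 + np.exp(-(abilities - difficulty)))  # expected score
    var = p * (1.0 - p)                                  # model variance
    z2 = (responses - p) ** 2 / var       # squared standardized residuals
    outfit = z2.mean()                    # unweighted mean square
    infit = ((responses - p) ** 2).sum() / var.sum()  # information weighted
    return infit, outfit

# Simulate responses from the model itself; both statistics should
# then fall near 1, well inside the 0.8-1.3 range cited in the text.
rng = np.random.default_rng(0)
theta = rng.normal(size=5000)
b = 0.5
x = (rng.random(5000) < 1.0 / (1.0 + np.exp(-(theta - b)))).astype(float)
infit, outfit = rasch_fit_statistics(x, theta, b)
```

An item whose Outfit exceeds the threshold while its Infit stays in range, as with the clinical-staff item above, is typically being disturbed by a few unexpected responses from persons far from the item's difficulty.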
Relationship between patients' perceived health care quality and self-reported medical, medication, and laboratory errors
The second stage of the analysis in this study consisted of establishing the relationship between patients' perceived health care quality and self-reported medical, medication, and laboratory errors. For this, logistic regression models were built using the Rasch quality-of-care scores for each dimension (ie, independent variables) and self-reported medical errors, self-reported medication errors, and self-reported laboratory errors (ie, dependent variables). Initially, age, sex, education, income, medical services use, perceived health status, health care system type, number of chronic conditions, number of prescription drugs, and cost barriers were included in the models to control for potential confounding. However, variables not found to be significant in the bivariate analyses were removed from the final models (Table 5).
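The bivariate screening step can be illustrated with a small sketch. The code below is illustrative only (the variable names and data are hypothetical, not the study's): a candidate control variable is retained only when a χ2 test against the binary outcome reaches the P < .1 threshold described above.

```python
import numpy as np
from scipy.stats import chi2_contingency

def screen_controls(y, controls, alpha=0.10):
    """Return the names of control variables whose bivariate
    chi-square test against the binary outcome y gives P < alpha.
    `controls` maps variable names to categorical arrays."""
    kept = []
    for name, values in controls.items():
        # Build the contingency table of variable level x outcome.
        table = np.array([[np.sum((values == v) & (y == out))
                           for out in (0, 1)]
                          for v in np.unique(values)])
        _, p, _, _ = chi2_contingency(table)
        if p < alpha:
            kept.append(name)
    return kept

# Hypothetical data: "related" tracks the outcome closely, while
# "unrelated" is perfectly balanced against it.
y = np.repeat([0, 1], 500)
related = y.copy()
related[:50] = 1 - related[:50]      # strong but imperfect association
unrelated = np.tile([0, 1], 500)     # chi-square statistic of exactly 0
kept = screen_controls(y, {"related": related, "unrelated": unrelated})
```

Here `kept` contains only `"related"`; the balanced variable yields P = 1 and would be dropped from the final model, as described in the text.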
It can be seen from the data in Table 5 that after controlling for relevant predictors, 4 dimensions of quality of care as perceived by patients—Access to Care (odds ratio [OR] = 0.99; 99% confidence interval [CI], 0.90-1.09), Care Continuity (OR = 1.05; 99% CI, 0.91-1.23), Communication of Care (OR = 0.95; 99% CI, 0.87-1.04), and Respect for Patients' Preferences (OR = 0.94; 99% CI, 0.88-1.01)—were not statistically significantly associated with patients' self-reporting medical errors. As Table 5 shows, Coordination of Care was statistically significantly associated with self-reported medical errors (OR = 0.60; 99% CI, 0.55-0.67). This means that an increase in the perceived level of Coordination of Care decreases the likelihood of patients' self-reporting medical errors, holding all other predictors constant. Similarly, Coordination of Care was statistically significantly associated with self-reported medication errors (OR = 0.75; 99% CI, 0.67-0.85) and self-reported laboratory errors (OR = 0.61; 99% CI, 0.54-0.70).
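The odds ratios and 99% confidence intervals reported above are obtained from the fitted logistic coefficients in the usual way. A minimal sketch, using an illustrative coefficient and standard error rather than the study's actual estimates:

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_ci(beta, se, level=0.99):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a (default 99%) confidence interval."""
    z = norm.ppf(1 - (1 - level) / 2)  # about 2.576 for a 99% CI
    return np.exp(beta), (np.exp(beta - z * se), np.exp(beta + z * se))

# Illustrative values only: a coefficient of -0.51 corresponds to
# OR of about 0.60, ie, higher perceived Coordination of Care is
# associated with lower odds of a self-reported error.
or_est, (lo, hi) = odds_ratio_ci(beta=-0.51, se=0.05)
```

Because a 99% CI that excludes 1 corresponds to P < .01 (2-tailed), such a coefficient would be significant at the level used in this study.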
By using a multidimensional approach to define quality of care, this study confirms that Coordination of Care is a predictor of self-reported health-related errors. Specifically, we found that when patients perceive lapses in communication among their providers and receive conflicting information from multiple health care stakeholders, as measured with the items of the Coordination of Care scale (see Table 2), they are more likely to report medical, medication, and laboratory errors. The findings from this investigation support results from a number of other published studies suggesting that Coordination of Care is an important predictor of perceived patient safety. After adjusting for potentially important confounding variables, there was a statistically significant association between Coordination of Care and self-reported medical, medication, and laboratory errors. These results are consistent with those of Taylor and colleagues,29 who, in a prospective cohort study of 223 hospitalized patients, found that after adjusting for sociodemographic variables and length of stay, patients reporting care coordination deficiencies among staff were 4 times more likely to experience adverse or near-miss events (OR = 4.4; 95% CI, 1.4-14.0).
In addition, the results of the current investigation agree with previous research based on the CWF survey, although other studies differed in how Coordination of Care was defined. O'Hagan and colleagues examined the association between Coordination of Care and medical errors using participants' responses to the question, “When you need care or treatment, how often does your general practitioner/regular doctor/the doctor know important information about your medical history?” as a measure of Coordination of Care.30 Results of a bivariate analysis showed a significant association between medical errors and patients' indicating that their physician rarely or never knew important information about their medical history (P < .001).30 However, these results should be interpreted with caution because the investigators did not present information on this relationship after adjusting for other potential confounders. Lu and Roughead13 and Scobie31 measured poor coordination as a positive response to either unavailable test results or medical records at the time of appointment, or duplicate tests. In addition to these aspects, Schwappach included “receiving conflicting information from different providers” as a measure of poor coordination of care.32 The current investigation adds to the existing literature by using a more complete construct of Coordination of Care because it also incorporates items that assess the level of miscommunication between the primary care provider and specialists: Have you experienced the following when seeing a specialist? (1) the specialist did not have basic medical information from your regular doctor about the reason for your visit or test results; and (2) after you saw the specialist, the regular doctor did not seem informed and up-to-date about the care you got from the specialist.
Finally, other studies using data from the CWF survey defined a medical error as a combination of any medical or medication error, which may prevent the detection of different associations for individual error types. This study was able to examine the relationship between perceived coordination of care and self-reported laboratory errors, an aspect not explored by previous studies.
There may be several potential explanations for the observed relationship between perceived coordination of care and patient safety. Patients may detect mishaps involving poor coordination of care more easily. For instance, missing relevant patient information at the point of care is a frequent problem, and obtaining such information often demands interaction between the health care provider or administrator and the patient, which alerts patients to the possibility of a care quality failure or increases their critical assessment of quality. Finally, it is possible that the results could be explained by the confounding effect of other factors not measured. For example, peoples' overall satisfaction with their health system may affect both their perceptions of care coordination and perceived safety. Although satisfaction with the health care system was not included in the models, the analyses did account for other traits that could also reflect satisfaction with the health care system, such as Access to Care, Continuity of Care, Communication of Care, and Providers' Respect for Patients' Preferences, potentially minimizing the risk of bias.
Findings from this study did not provide evidence to support the association between perceived patient safety and the other 4 dimensions of quality of care: Communication of Care; Access to Care; Continuity of Care; and Respect for Patients' Preferences. Despite the lack of statistically significant results, there are several important aspects that should be considered for further research. Because of the limited availability of items, in the current study, the Communication of Care measure was constructed with items that only focused on communication about prescription medications. This may explain why it appears that people who reported better levels of Communication of Care were less likely to self-report medication errors (OR = 0.93; 99% CI, 0.85-1.02). Although this relationship was not significant at the .01 level, further investigations should explore this topic to find more conclusive evidence about the relationship between Communication of Care and patient safety.
Results of this study should be interpreted in light of the limitations of a cross-sectional research design. The statistical associations found cannot be established as evidence for causality but rather as an exploratory step toward causality. Therefore, conclusions about the temporal association between quality of care and patient safety cannot be established. For example, this investigation cannot determine whether experiencing a medication error led respondents to perceive that a coordination of care mishap occurred, or whether perceptions of poor coordination of care led respondents to indicate that an error arose. Nonresponse bias might also have influenced this study. The response rate varied considerably among countries, from as low as 9% in the Netherlands to 54% in Switzerland, and low response rates limit the generalizability of findings. While it is difficult to determine the extent of bias introduced by refusal to participate, weighted analyses were conducted to account for and minimize survey nonresponse bias.
Findings from this study should also be interpreted in light of respondents' ability to recall their health care experiences; the time interval between a health care encounter and the questions asked might influence the validity of responses. The CWF survey asked participants to recall events from the previous 2 years, which increases the window for recall bias. In addition, the patient safety and quality-of-care measures in the present investigation relied on self-reported rather than clinical data. Despite providing respondents with definitions of medical, medication, and laboratory errors, the terms could have been misunderstood, potentially increasing measurement error.
Health care systems are transitioning to an era of heightened transparency and accountability in which payments for services are increasingly value-based. Thus, health care actors around the world have engaged in an active pursuit of innovative solutions to decrease errors. The majority of these efforts, however, have focused mainly on the providers' role as opposed to the role of those who ultimately receive care and bear its consequences. Patients, from their unique viewpoint, can provide valuable insights on received care and play an important role in patient safety initiatives. Patient engagement initiatives are therefore essential in health care quality management, as patients may be the most reliable reporters of some aspects of the health care process. This investigation showed evidence supporting the association between perceived coordination of care and self-reported medical, medication, and laboratory errors. As health care stakeholders continue to search for initiatives that improve care experiences and outcomes, this study's results emphasize the importance of guaranteeing integrated care.
1. Institute of Medicine. Patient Safety: Achieving a New Standard for Care. Washington, DC: National Academies Press; 2004.
2. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
4. Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academies Press; 2000.
5. Van Den Bos J, Rustagi K, Gray T, Halford M, Ziemkiewicz E, Shreve J. The $17.1 billion problem: the annual cost of measurable medical errors. Health Aff (Millwood). 2011;30(4):596–603.
6. Kaiser Family Foundation. Americans as Health Care Consumers: An Update on the Role of Quality Information. Menlo Park, CA/Rockville, MD: Kaiser Family Foundation and Agency for Health Care Research and Quality; 2004.
7. Kaiser Family Foundation. Update on Consumers' Views of Patient Safety and Quality Information. Menlo Park, CA: Kaiser Family Foundation; 2008.
8. Conway J, Johnson BH, Edgman-Levitan S, et al. Partnering with patients and families to design a patient- and family-centered health care system: a roadmap for the future–a work in progress. Bethesda, MD: Institute for Family-Centered Care and Institute for Healthcare Improvement; 2006. http://www.ipfcc.org/pdf/Roadmap.pdf. Accessed December 21, 2015.
9. Cooper J, Gaba D, Liang B, Woods D, Blum L. The National Patient Safety Foundation agenda for research and development in patient safety. Med Gen Med. 2000;2(3):E38.
10. VA National Center for Patient Safety. Patient Safety for Patients. Ann Arbor, MI: VA National Center for Patient Safety; 2012. http://www.patientsafety.va.gov. Accessed July 31, 2012.
11. Vincent C, Coulter A. Patient safety: what about the patient? Qual Saf Health Care. 2002;11(1):76–80.
12. Millman EA, Pronovost PJ, Makary MA, Wu AW. Patient-assisted incident reporting: including the patient in patient safety. J Patient Saf. 2011;7(2):106.
13. Lu C, Roughead E. Determinants of patient-reported medication errors: a comparison among seven countries. Int J Clin Pract. 2011;65(7):733–740.
14. Schwappach DLB. Risk factors for patient-reported medical errors in eleven countries. Health Expect. 2014;17(3):321–331.
15. The Commonwealth Fund. Commonwealth Fund International Health Policy Survey. New York, NY: The Commonwealth Fund; 2010.
16. Rathert C, May DR, Williams ES. Beyond service quality: the mediating role of patient safety perceptions in the patient experience-satisfaction relationship. Health Care Manage Rev. 2011;36(4):359–368.
17. Gerteis M, Susan Edgman-Levitan S, Daley J, Delbanco TL, eds. Through the Patient's Eyes: Understanding and Promoting Patient-Centered Care. San Francisco, CA: Jossey-Bass; 1993.
18. Schoen C, Osborn R, Huynh PT, Doty M, Peugh J, Zapert K. On the front lines of care: primary care doctors' office systems, experiences, and views in seven countries. Health Aff. 2006;25(6):w555–w571.
19. Bond TG, Fox CM. Basic principles of the Rasch model. In: Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Erlbaum; 2001:20–35.
20. Chang CH, Reeve BB. Item response theory and its applications to patient-reported outcomes measurement. Eval Health Prof. 2005;28(3):264–282.
21. Doorenbos AZ, Verbitsky N, Given B, Given CW. An analytic strategy for modeling multiple-item responses: a breast cancer symptom example. Nurs Res. 2005;54(4):229–234.
22. Hardin J, He Y, Javitz HS, et al. Nicotine withdrawal sensitivity, linkage to chr6q26, and association of OPRM1 SNPs in the SMOking in FAMilies (SMOFAM) sample. Cancer Epidemiol Biomarkers Prev. 2009;18(12):3399–3406.
23. Hays RD, Morales LS, Reise SP. Item response theory and health outcomes measurement in the 21st century. Med Care. 2000;38(9)(suppl):II28–II42.
24. Hobart JC, Cano SJ, Zajicek JP, Thompson AJ. Rating scales as outcome measures for clinical trials in neurology: problems, solutions, and recommendations. Lancet Neurol. 2007;6(12):1094–1105.
25. Olson JR, Belohlav JA, Cook LS, Hays JM. Examining quality improvement programs: the case of Minnesota hospitals. Health Serv Res. 2008;43(5p2):1787–1806.
26. Joumard I. Health Care Systems: Efficiency and Policy Settings. Paris, France: OECD; 2010.
27. Linacre JM. Optimizing rating scale category effectiveness. J Appl Meas. 2002;3(1):85–106.
28. Bond TG, Fox CM. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Erlbaum; 2001.
29. Taylor BB, Marcantonio ER, Pagovich O, et al. Do medical inpatients who report poor service quality experience more adverse events and medical errors? Med Care. 2008;46(2):224–228.
30. O'Hagan J, MacKinnon NJ, Persaud D, Etchegary H. Self-reported medical errors in seven countries: implications for Canada. Healthc Q. 2009;12:55–61.
31. Scobie A. Self-reported medical, medication and laboratory error in eight countries: risk factors for chronically ill adults. Int J Qual Health Care. 2011;23(2):182–186.
32. Schwappach DLB. Frequency of and predictors for patient-reported medical and medication errors in Switzerland. Swiss Med Wkly. 2011;141:w13262.
33. Liu W, Manias E, Gerdtz M. Understanding medication safety in healthcare settings: a critical review of conceptual models. Nurs Inq. 2011;18(4):290–302.
Keywords: laboratory error; medical error; medication error; quality of care
© 2016 Wolters Kluwer Health | Lippincott Williams & Wilkins