Health Care Management Review: October/December 2013, Volume 38, Issue 4
doi: 10.1097/HMR.0b013e31826119c3

The better model to predict and improve pediatric health care quality: Performance or importance–performance?

Olsen, Rebecca M.; Bryant, Carol A.; McDermott, Robert J.; Ortinau, David


Author Information

Rebecca M. Olsen, PhD, is Associate Professor of Public Health, Health Science and Human Performance, College of Natural and Health Sciences, University of Tampa, Florida.

Carol A. Bryant, PhD, is Distinguished USF Health Professor in Community and Family Health, Co-Director, Florida Prevention Research Center, College of Public Health, University of South Florida, Tampa.

Robert J. McDermott, PhD, is Professor of Health Education and Public Health, Co-Director, Florida Prevention Research Center, College of Public Health, University of South Florida, Tampa.

David Ortinau, PhD, is Professor of Marketing, College of Business Administration, University of South Florida, Tampa.

The authors have disclosed that they have no significant relationship with, or financial interest in, any commercial companies pertaining to this article.

Abstract


Background: The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence about the conditions under which they are most effective. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective.

Purpose: Performance-only and importance–performance models were compared to determine which better predicts pediatric health care quality and which offers the more successful method for improving the quality of care provided to children.

Approach: Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. In-depth interviews, participant observations, and a direct cognitive structural analysis identified the 50 health care attributes included in a survey mailed to parents (n = 1,030); the tailored design method guided survey development and data collection. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility), and hierarchical multiple regression compared the performance-only and importance–performance models.

Findings: The importance–performance multiplicative additive model was a better predictor of pediatric health care quality.

Practice Implications: Attribute importance moderates the relationship between performance and quality, making the importance–performance model superior both for measuring pediatric health care quality and for providing a deeper understanding of it, and a better method for improving the quality of care provided to children. If the level of attribute importance is not taken into consideration, health care organizations may spend valuable resources targeting the wrong areas for improvement regardless of how those areas perform. This finding can inform health care quality research and policy decisions on organizational improvement strategies.

Quality of care is a central issue in the ongoing health care debate in the United States and elsewhere (Chou, Chen, Woodward, & Yen, 2005; Drummer, 2007; Larsson, Larsson, & von Holstein, 2005; Vuori, 2007). The International Standardization Organization defines health care quality as the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs (International Standardization Organization, 1990).

Many scholars recognize that the patient’s perspective of care is an important “consumer lens” through which to assess quality and develop improvement strategies (McGlynn, 1997; Olsson, Elg, & Lindblad, 2007; Pichert et al., 1998). Reliance on patients’ reports of the care they receive has been a standard method for assessing health care quality for over a decade (Schröder, Larsson, & Ahlström, 2007). Commonly referred to as quality of care from the patient’s perspective (Cleary & Edgman-Levitan, 1997), this research relies on the identification of specific attributes of service delivery that patients use to evaluate their care and determine how well health care services meet or exceed their expectations (van Campen, Sixma, Friele, Kerssens, & Peters, 1995).

Pediatric care is of special concern because widespread deficiencies in preventive and acute care services affect utilization rates and the subsequent health and well-being of children and youth (Newacheck, Hughes, Hung, Wong, & Stoddard, 2000). Parents decide when to seek preventive or curative treatment, select the child’s health care providers, and determine whether the child practices healthy behaviors or complies with treatment regimens. These activities are important determinants of a child’s health. Despite the crucial nature of the parents’ perspective, few studies have examined methodological issues associated with the assessment of pediatric health care quality.


Conceptual Framework

Health services researchers commonly conceptualize patient perspectives of health care quality as an attitude, that is, an overall tendency to respond consistently favorably or unfavorably toward an object (Berry & Parasuraman, 1991; Cronin & Taylor, 1992; Gotlieb, Grewal, & Brown, 1994; Iacobucci, Grayson, & Ostrom, 1994; Lilien, Kotler, & Moorthy, 1992; Oliver, 1997; Taylor, 1995b). An attitude lies in the mind of the individual holding that view (Hair, Bush, & Ortinau, 2002). Moreover, multiattribute evaluation models are commonly used to examine and provide a deeper understanding of the formation and structure of attitudes (McDermott & Sarvela, 1999).

Although scholars agree about the importance of understanding the features or attributes of the attitude construct, they have yet to agree on the best way to measure patients’ attitudes about health care quality. Of special interest is a debate over the relative advantages of an additive performance-only model and a multiplicative–additive importance–performance model in predicting health care quality. The additive performance-only model measures quality as an attitude without regard to the importance a person places on the attributes used to assess quality. Advocates of this model argue that performance alone better predicts health care quality and that adding importance confounds the evaluation of what actually takes place in the service experience (Cronin & Taylor, 1992, 1994; DeSarbo, Huff, Rolandelli, & Choi, 1994; Taylor, 1994, 1995a, 1995b; Teas, 1993). In contrast, the multiplicative–additive importance–performance model measures quality as an attitude by including the interaction between people’s affect, that is, their feelings about importance, and their cognitive evaluations, that is, performance. Proponents of this model suggest that attribute importance moderates, that is, influences the direction and strength of, the relationship between performance and quality and thus must be included to obtain a valid assessment of health care quality (Carman, 2000; Hair et al., 2002).
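In symbols, the contrast between the two models can be sketched as follows. The notation is ours rather than the authors’: P_i and I_i denote the rated performance and importance of attribute i, and Q denotes the global quality judgment.

```latex
% Additive performance-only model: quality predicted from
% performance ratings alone.
Q = \beta_0 + \sum_{i=1}^{n} \beta_i P_i + \varepsilon

% Multiplicative-additive importance-performance model: each
% performance rating enters weighted by its rated importance,
% capturing the hypothesized moderating role of importance.
Q = \beta_0 + \sum_{i=1}^{n} \beta_i \, (I_i \times P_i) + \varepsilon
```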

This study attempts to offer a better method of measuring and improving pediatric health care quality from the consumers’ perspective by comparing the additive performance-only and multiplicative–additive importance–performance models for their ability to predict parents’ perceptions of the quality of care provided to their children. The results can inform the development of more responsive metrics for assessing pediatric health care quality that incorporate the attributes parents deem important.

Methods

Overview of Data Sources

Primary and secondary data were collected. Secondary data included a review of 4,805 existing patient evaluation cards (consisting of five closed-ended items and two open-ended items) routinely collected in these health care centers. This review yielded feedback about major or recurring issues, notably long waiting times and a perceived lack of courtesy among some receptionists in these settings. Observations of eight parent–child dyads interacting with health care providers offered contextually relevant data. Data from 65 in-depth interviews helped to identify attributes of care valued by parents. The completion of written surveys by parents in the appointment waiting area (n = 100) provided direct cognitive structural (DCS) analysis of elements of care they deemed to be important in measuring quality. Items with a mean rating of ≥4.1 on a 5-point scale were extracted to create a pilot survey. Completion of the pilot survey by 100 parents guided removal or modification of items not responded to or not understood. The resulting final survey was mailed to approximately 2,200 households served by 14 pediatric health care centers.
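As a concrete illustration of the DCS screening step, a minimal sketch follows; the file name and DataFrame layout are assumptions for illustration, not details given by the authors.

```python
import pandas as pd

# Hypothetical layout: one row per parent (n = 100), one column per
# candidate attribute, cell values on the 5-point "critical factor"
# scale described below. The file name is illustrative.
dcs = pd.read_csv("dcs_ratings.csv")

# Retain only attributes whose mean rating meets the >= 4.1 cutoff.
item_means = dcs.mean(numeric_only=True)
retained = item_means[item_means >= 4.1].index.tolist()
print(f"{len(retained)} attributes retained for the mailed survey")
```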

Characteristics of Communities Served

The setting encompassed 14 pediatric health care centers serving approximately 250,000 patients and 70,000 households in three West Central Florida counties. According to 2010 census data (U.S. Census Bureau, 2012), one county was mixed urban–rural in nature (approximately 1,205 persons per square mile, 23.9% under age 18), one was predominantly urban (3,348 persons per square mile, 17.8% under age 18), and one was predominantly rural (approximately 622 persons per square mile, 21.2% under age 18).

Description of Mailed Survey Development

The mailed survey instrument was developed using the tailored design method (Dillman, 2000). This method includes five steps: preliminary survey development, pretesting, survey revision, small pilot study, and implementation of the instrument (Dillman, 2000). In Step 1, a preliminary list of 113 attributes was developed from a review of the literature. Refinement and reduction of these attributes came from 65 in-depth interviews and the 8 participant observations. Final determination and validation of the attributes were derived from the DCS survey. The DCS survey was administered to 100 parents/guardians to assess the factors that are critical in their evaluation of pediatric health care quality and, subsequently, which attributes to include in a larger, follow-up mailed survey. In the DCS, participants were asked to judge each attribute using the following 5-point scale in evaluating pediatric health care quality: 5 = a critical factor, 4 = definitely a factor, 3 = generally a factor, 2 = only somewhat a factor, and 1 = not at all a factor. As previously noted, only items with means of ≥4.1 (n = 50) were included in the mailed survey.

The mailed survey instrument elicited importance and performance ratings for the 50 service attributes, as well as four global quality measures. In addition, the survey included eight sociodemographic variables and one open-ended question. Importance data were obtained using a 6-point rating scale with the response values of 1 = not at all important, 2 = only slightly important, 3 = somewhat important, 4 = important, 5 = definitely important, and 6 = extremely important. Performance ratings were obtained using a 7-point letter grade scale: A+, A, B, C, D, F, and NA (not applicable). This method was chosen because it generates ordinal data that can be treated as interval data, provides cognitive and affective performance data while maintaining a consistent format, and is easily understood by consumers (Hair et al., 2002).
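The article reports the letter-grade scale but not the numeric recoding behind treating it as interval data, so the 6-to-1 mapping below is an assumption for illustration, with "NA" treated as missing rather than as a low score.

```python
import numpy as np
import pandas as pd

# Assumed recoding of the letter-grade performance scale; the
# specific point values are ours, not the authors'.
GRADE_POINTS = {"A+": 6, "A": 5, "B": 4, "C": 3, "D": 2, "F": 1,
                "NA": np.nan}

def recode_performance(grades: pd.Series) -> pd.Series:
    """Map letter-grade performance ratings to numeric scores."""
    return grades.map(GRADE_POINTS)

sample = pd.Series(["A+", "B", "NA", "A"])
print(recode_performance(sample).tolist())  # [6.0, 4.0, nan, 5.0]
```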

The four global measures for assessing pediatric health care quality included overall quality of pediatric health care, overall quality of your primary care pediatrician, overall quality of the nursing care you receive, and overall quality of the front office staff. They were measured with a 6-point descriptor scale from 1 (no quality at all) to 6 (exceptional quality). Sociodemographic data included father’s employment status, mother’s employment status, education, marital status, age, health insurance, length of time receiving care from this pediatrician, and ethnicity. Lastly, an open-ended question was used to elicit any additional or concluding comments that parents may wish to make.

In Step 2 (pretesting), face-to-face interviews were conducted with a convenience sample of parents to determine appropriateness and clarity. Step 3 (first survey revision) was based on a review of pretest results. Survey items were clarified as needed, and the survey was revised and pretested until it was ready for further evaluation. In Step 4 (pilot study), 100 parents were randomly selected from the caseload to participate in a mailed pilot survey. The pilot test was conducted using the same procedures planned for the main study to estimate overall response rate, estimate individual item response rate, and identify problems that needed to be remedied to maximize respondent understanding. Step 5 (implementation) followed Dillman’s (2000) survey administration process: (a) a prenotice letter was mailed to everyone in the sample; (b) approximately 1 week later, the questionnaire packet was mailed to the entire sample; (c) a week later, a reminder postcard was mailed to the entire sample; (d) 2–4 weeks after mailing the initial packet, a replacement questionnaire was mailed to nonrespondents with a personalized note expressing the importance of obtaining a response; and (e) a letter mailed to nonrespondents asked them one last time to complete and return the survey.

Validity and Reliability

Four methods were employed to assess the questionnaire’s validity. First, face validity was assessed during pretesting: the researcher asked respondents to think out loud while answering questions and probed their answers to identify problems with comprehension of terms and to ensure individual survey items measured what they were intended to measure. Second, content validity was assessed with the DCS analysis, which was performed during the initial phase of the study to vet the items included on the mailed survey. Third, content validity was also assessed by a panel of experts with extensive knowledge of instrument development and health care quality research (McDermott & Sarvela, 1999). Fourth, construct validity was assessed by exploratory factor analysis.

Dimension Development

The 50 pediatric health care quality performance and importance attributes included in the mailed survey were entered together into an exploratory factor analysis. The principal axis factoring method was used for the initial extraction of factors because it provides more accurate estimates of the strength of factor loadings (Gorsuch, 1983). Because of the anticipated correlation between the constructs, an oblique rotation was used. The scree test was used to look for obvious breaks between factors with relatively large eigenvalues and those with smaller ones, and interpretability criteria were applied to determine the number of factors to retain. In interpreting the rotated factor pattern, an item was said to load on a given factor if its loading was ≥.45 for that factor and <.45 for every other factor (Hair, Tatham, Anderson, & Black, 1998). The factor analysis yielded the dimensions subsequently entered into the regression analysis.
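A minimal sketch of this extraction follows, using the third-party factor_analyzer package; the package choice, file name, and retained factor count shown are our assumptions, since the authors report SPSS output rather than code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # third-party package

# "items" stands in for the matrix of attribute ratings.
items = pd.read_csv("attribute_ratings.csv")

fa = FactorAnalyzer(
    n_factors=9,         # count suggested here by the scree test
    method="principal",  # principal axis factoring
    rotation="oblimin",  # oblique rotation for correlated factors
)
fa.fit(items)

# Apply the loading rule: keep items loading >= .45 on exactly one
# factor and < .45 on all others.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
hits = (loadings.abs() >= 0.45).sum(axis=1)
clean_items = loadings.loc[hits == 1]
```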

The factor analysis retained 45 items that identified nine variables (dimensions or factors): physician communication, physician competence, front office customer service, nursing staff customer service, physician customer service, partnership building, access, timeliness of services, and health service facility. The structure matrix was also analyzed to determine item-to-total and item-to-factor correlations. This analysis provided information on the unique contribution of each item to its factor, similar to the standardized regression coefficients obtained through multiple regression. Internal consistency reliability, the intercorrelation among items on each factor and on the total scale (0.0–1.0), was measured using Cronbach’s alpha (Cronbach, 1951). For this applied study, a minimum value of .60 was considered acceptable (McDermott & Sarvela, 1999). Items whose deletion improved the reliability coefficient by 0.2 or more were removed from the factor. The coefficient of stability was assessed using the test–retest method (McDermott & Sarvela, 1999) by having 21 respondents complete the pilot survey twice over a 2-week interval. An item was considered for removal if its test–retest agreement was <80%; however, the decision to remove an item also depended on its theoretical or potential contribution to a given attribute.
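Cronbach’s alpha itself is straightforward to compute; the sketch below implements the standard formula, with toy data standing in for the study’s ratings.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy data: 30 respondents rating the 5 items of one factor.
rng = np.random.default_rng(0)
demo = rng.integers(1, 7, size=(30, 5)).astype(float)
print(f"alpha = {cronbach_alpha(demo):.3f} (study floor: .60)")
```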

Data Analysis and Dimension Refinement

Analyses were performed using SPSS statistical software. All importance ratings, performance ratings, and global quality assessments were examined for normality. Spearman rank-order correlations were used to assess associations among sociodemographic and behavioral characteristics, performance and importance ratings, and pediatric health care quality. Pearson correlations were used to assess the association between performance and importance ratings and to explore for potential confounding variables, suppressor variables, and multicollinearity.

Hierarchical regression was selected to determine whether perceived importance combined with perceived performance provided significantly more explanatory or predictive information about health care quality than the performance model alone. The two models were compared for significant changes in R2 (i.e., amount of variance explained).
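A minimal sketch of this nested-model comparison follows; the authors used SPSS, so the statsmodels translation, file name, and every column name below are illustrative assumptions rather than the study’s actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tidy file: perf_* are performance dimension scores;
# ip_* are precomputed importance x performance products.
df = pd.read_csv("survey_scores.csv")

covariates = ("mother_employed + education + C(office) + sick_visits"
              " + perf_physician + perf_customer + perf_timeliness"
              " + perf_facility + perf_access")

# Step 1: respondent characteristics plus performance dimensions.
base = smf.ols(f"quality ~ {covariates}", data=df).fit()

# Step 2: add the importance x performance interaction terms.
full = smf.ols(f"quality ~ {covariates} + ip_physician + ip_customer"
               " + ip_timeliness + ip_facility + ip_access",
               data=df).fit()

# F test on the change in R-squared between the nested models.
f_value, p_value, df_diff = full.compare_f_test(base)
print(f"R2 {base.rsquared:.3f} -> {full.rsquared:.3f}, "
      f"F = {f_value:.2f}, p = {p_value:.4f}")
```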

Human Subjects Review

The institutional review board of the University of South Florida reviewed and approved the study procedures, interview guide, and survey instruments.

Findings
Of the 2,200 survey packets mailed, 207 were undeliverable. Therefore, the names and addresses of those 207 were eliminated from further follow-up. Of the 1,993 remaining households contacted, 1,034 surveys were returned (51.7% response rate). Of the surveys returned, four were blank, leaving 1,030 for analysis. Characteristics of respondents are shown in Table 1.

[Table 1: Characteristics of respondents]

Examination of the relationships among the nine performance dimensions revealed multicollinearity that could confound subsequent regression analyses. Consequently, the nine dimensions were collapsed into five (physician services, customer service, timeliness of services, health service facility, and access). Regression coefficients (i.e., beta weights) for the additive performance and the multiplicative–additive importance–performance models are shown in Table 2.

[Table 2: Regression coefficients for the additive performance and multiplicative–additive importance–performance models]

The main effects model included respondent characteristics, performance ratings, and the criterion variable—pediatric health care quality. Respondent characteristics that were significantly correlated with pediatric health care quality were included in the main effects model. These included (a) mother’s employment status, (b) respondent’s level of education, (c) each of the 14 pediatric offices where the care is provided, and (d) number of sick child visits. After entering respondent characteristics, each of the five performance dimensions was entered individually to establish its unique association with pediatric health care quality. As a second test of association, the performance dimensions were entered together into the regression model. All significance tests were two-tailed and based on an alpha of either .01 or .05.

The multiple correlation coefficient (R) for the additive performance-only model was .762, with an R2 of .580. The multiple correlation coefficient for the multiplicative–additive importance–performance model was .830, with an R2 of .688. This comparison (a change in R2 of .108) suggests that the multiplicative–additive importance–performance model contributes significantly more to the prediction of pediatric health care quality.


Practice Implications

In the health care setting, there are a number of variables that can be used to judge the quality or adequacy of care (McGlynn, 1997). Arguably, the most important of these is the medical outcome. Favorable outcomes may be influenced by such elements as the availability of state-of-the-art technical facilities and high-level practitioner expertise, as well as other aspects of patient care. Although these technical elements influence actual medical outcomes, they are not the sole contributing factors. Perceptions of quality as rated by the patient, or, as in the present case, by the parents of the patient, may have a profound influence as well, for example, on adherence to specific behavioral regimens and participation in follow-up care. Patient satisfaction has been related not only to patient compliance but also to doctor–patient information exchange and overall continuity of care (Smith, Falvo, McKillip, & Pitz, 1984). Physician–patient information exchange that cultivates an atmosphere of mutual participation and shared decision-making increases the probability that patients will follow provider recommendations (Falvo, 2005; Falvo & Tippy, 1988). In turn, improved compliance fosters a better medical outcome and reduces disparities (Chin, Alexander-Young, & Burnet, 2009).

Additional measures of quality may be the ratings assigned by others, including ones assessed formally by companies associated with reimbursement or by quality assurance organizations or accrediting bodies. Other constructs potentially important in rating health care quality have also been advanced (McGlynn, 1997).

In this study, only the second of these metrics, the patient’s functional assessment of quality, was measured. Therefore, any directive or interpretive remarks made hereafter must be understood in terms of this restricted definition of health care quality. However, this compartmentalization of health care quality is not unusual, and as noted earlier in this article, scholars readily acknowledge the patient’s perspective of care as a means of measuring quality and, consequently, as a basis for creating interventions for improvement (Olsson et al., 2007; Pichert et al., 1998). Moreover, McGlynn (1997) concludes that, to some extent, “quality [of care] is in the eye of the beholder” (p. 9).

For the current study, the use of the multiplicative–additive model has both theoretical and pragmatic implications. Theoretically, the inclusion of the interaction between importance and performance ratings increased the ability to predict pediatric health care quality. This finding supports the alignment of attitude theory with the conceptualization and operationalization of health care quality as an attitude. The multiplicative–additive model thus provided an effective theoretical framework for finding convergence between pediatric health care service quality and attitude formation, and for providing a deeper understanding of how consumers form attitudes toward pediatric health care quality.

This model also contributes pragmatically to the prioritization of quality improvement strategies based on the needs of the consumer. Without importance as a prioritization tool, health care organizations may attempt to improve performance on items with little consumer relevance and therefore fail to improve health care quality from the parent perspective. For example, under the additive model, the health care organization would be tempted to prioritize its improvement strategies by first addressing customer service, then physician services, health service facility, access, and timeliness of services. The multiplicative–additive model, however, would suggest prioritizing customer service first, then access, health service facility, physician services, and timeliness of services. The health care organization that fails to provide the services that consumers need may lose out to a competitor that listened better and acted when change was required. Today, health care consumers may be more sophisticated, demanding, and educated than ever before, and they often possess the option or flexibility to choose their providers from an array of possibilities. Thus, they are becoming more likely “to shop” for physicians with reputations for service excellence as well as medical and technical excellence.
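The sketch below illustrates how weighting by importance can reorder improvement priorities. The scores are invented solely for illustration, not the study’s data, and the gap-based ranking is one common importance–performance prioritization heuristic rather than the authors’ exact computation.

```python
# Invented dimension scores (NOT the study's data): performance on
# an assumed 1-6 grade recoding, importance on the 1-6 rating scale.
performance = {"customer service": 3.9, "physician services": 4.1,
               "health service facility": 4.3, "access": 4.4,
               "timeliness of services": 4.6}
importance = {"customer service": 5.8, "physician services": 4.9,
              "health service facility": 5.3, "access": 5.6,
              "timeliness of services": 4.5}

# Performance-only logic: repair the worst-performing dimension first.
by_performance = sorted(performance, key=performance.get)

# Importance-performance logic: repair the largest importance-minus-
# performance gap first, so valued but under-delivered areas rise.
by_gap = sorted(performance,
                key=lambda d: importance[d] - performance[d],
                reverse=True)

print("performance-only order:   ", by_performance)
print("importance-weighted order:", by_gap)
```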


Importance–performance analysis (IPA) has been a popular tool for understanding satisfaction and prioritizing service quality improvements since it was introduced by Martilla and James (1977). It has been applied in a variety of contexts, such as banking (Ennew, Reed, & Binks, 1993), dentistry (Nitse & Bush, 1993), health care (Dolinsky & Caputo, 1990), food service (Aigbedo & Parameswaran, 2004), hospitality (Martin, 1995), and hotels (Asad & Chris, 2005). There has also been a recent surge of new IPA applications: public and private organizations are using IPA to reposition themselves to be more competitive for tax dollars, grants, and funding through other mechanisms (Hunt, Scott, & Richardson, 2003), and countries wanting to improve their image to increase tourism are using IPA as well (O’Leary & Deegan, 2005). This study’s findings suggest the superiority of the multiplicative–additive importance–performance model over the performance-only model in measuring and understanding attitude formation pertinent to pediatric health care quality.

Limitations

This study has some notable limitations that warrant consideration. First, all of the parent–respondents were participants in a single pediatric health care provider network that serves a finite geographic area of Florida, so concluding that these findings would hold in other pediatric settings or in nonpediatric specialty settings would be tenuous. It is possible that an “organizational effect” influenced responses; future studies comparing these two models might examine a wider range of pediatric health care provider networks to see whether particular organizational traits can be isolated. Second, potential effects stemming from the geographic area comprising the current study are unknown; this three-county area may or may not be representative of other regions that serve pediatric patients. Third, responses may have been influenced by characteristics specific to the patient, the pediatrician providing care, the provider facility, or the health issue being addressed, or by interactions involving this entire milieu of variables (Falvo, 2011). Fourth, although this study examines one aspect of health care quality, certain aspects of health care that may be important to health outcomes involve technical elements that patients are not in an informed position to evaluate: there are “health plans and doctors that provide a high level of technical quality but that are not rated highly by patients on humaneness, responsiveness, or satisfaction” (McGlynn, 1997, p. 10).

These limitations notwithstanding, having a model for monitoring health care services to ensure that they are patient-centric is valuable from an importance perspective as well as from a performance perspective for pediatric practitioners and facilities. In addition, accountability for this aspect of health care may be best assigned and achieved by considering the recipient of care. Researchers have indicated that the perceived quality of care and effective communication between patients and caregivers can influence such critical intermediate elements as patients’ receptivity to advice, their adherence to treatment regimens, and their satisfaction with care, so that, ultimately, the potential for achieving optimal medical outcomes is influenced as well (Stewart, 1995; Stewart, Meredith, Brown, & Galajda, 2000). Until the unlikely event that an all-encompassing, valid measure of health care quality is developed, providers, patients, and health care management scholars will have to rely on compartmentalizing individual constructs related to health care quality measurement, with all of the necessary interpretive caveats. Even with that restriction, the current study strengthens the case for combining importance and performance measures when assessing health care quality, because used together, they offer feedback from which both patients and the provider organization can benefit.

References

Aigbedo H., Parameswaran R. (2004). Importance–performance analysis for improving quality of campus food service. International Journal of Quality & Reliability Management, 21 (8), 876–896.

Asad M., Chris R. (2005). Service quality assessment of 4 star hotels in Darwin, Northern Territory, Australia. Journal of Tourism and Hospitality Management, 12 (1), 25–36.

Berry L., Parasuraman A. (1991). Marketing services: Competing through quality. New York, NY: The Free Press.

Carman J. (2000). Patient perceptions of service quality: Combining the dimensions. Journal of Management in Medicine, 14 (5/6), 339–356.

Chin M. H., Alexander-Young M., Burnet D. L. (2009). Health care quality-improvement approaches to reducing child health disparities. Pediatrics, 124 (Suppl 3), S224–S236.

Chou S., Chen T., Woodward T. B., Yen M. (2005). Using SERVQUAL to evaluate quality disconfirmation of nursing services in Taiwan. Journal of Nursing Research, 13 (2), 75–83.

Cleary P. D., Edgman-Levitan S. (1997). Healthcare quality: Incorporating consumer perspectives. Journal of the American Medical Association, 278 (19), 1608–1612.

Cronbach L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.

Cronin J. J., Taylor S. A. (1992). Measuring service quality: A reexamination and extension. Journal of Marketing, 56, 55–68.

Cronin J. J., Taylor S. A. (1994). SERVPERF versus SERVQUAL: Reconciling performance-based and perceptions-minus-expectations measurement of service quality. Journal of Marketing, 58, 125–131.

DeSarbo W., Huff L., Rolandelli M., Choi J. (1994). On the measurement of perceived service quality: A conjoint analysis approach. In Rust R., Oliver R. (Eds.), Service quality: New directions in theory and practice (pp. 201–221). Thousand Oaks, CA: Sage Publications.

Dillman D. A. (2000). Mail and internet surveys: The tailored design method, 2nd ed. New York, NY: John Wiley & Sons, Inc.

Dolinsky A. L., Caputo R. K. (1990). The role of healthcare attributes and demographic characteristics in the determination of healthcare satisfaction. Journal of Health Care Marketing, 10 (4), 31–39.

Drummer J. (2007). Health care performance accountability. International Journal of Health Care Quality Assurance, 20 (1), 34–39.

Ennew C. T., Reed G. V., Binks M. R. (1993). Importance performance analysis and the measurement of service quality. European Journal of Marketing, 27 (2), 59–70.

Falvo D. R. (2005). Your five point plan for more effective patient education. Patient Education Update. Retrieved from

Falvo D. R. (2011). Effective patient education. Burlington, MA: Jones & Bartlett Learning.

Falvo D. R., Tippy P. (1988). Communicating information to patients. Patient satisfaction and adherence as associated with resident skill. Journal of Family Practice, 26 (6), 643–647.

Gorsuch R. L. (1983). Factor analysis. Hillsdale, NJ: Erlbaum.

Gotlieb J. B., Grewal D., Brown S. W. (1994). Consumer satisfaction and perceived quality: Complementary or divergent constructs? Journal of Applied Psychology, 79 (6), 875–885.

Hair J., Bush R., Ortinau D. (2002). Marketing research: Within a changing information environment. New York, NY: Irwin McGraw-Hill Higher Education.

Hair J., Tatham R. L., Anderson R. E., Black W. (1998). Multivariate data analysis, 5th ed. Englewood Cliffs, NJ: Prentice Hall.

Hunt K. S., Scott D., Richardson S. (2003). Positioning park and recreation using importance–performance. Journal of Park and Recreation Administration, 21 (3), 1–21.

Iacobucci D., Grayson K., Ostrom A. (1994). The calculus of service quality and customer satisfaction: Theoretical and empirical differentiation and integration. In Swartz T. A., Bowen D. E., Brown S. W. (Eds.), Advances in services marketing and management: Research and practice (pp. 1–67). Greenwich, CT: JAI Press.

International Standardization Organization. (1990). Quality management and quality system elements: Guidelines for services. Geneva, Switzerland: ISO.

Larsson B. W., Larsson G., Chantereau W. M., von Holstein K. S. (2005). International comparisons of patients’ views on quality of care. International Journal of Health Care Quality Assurance, 18 (1), 62–73.

Lilien G., Kotler P., Moorthy S. (1992). Marketing models. Englewood Cliffs, NJ: Prentice-Hall.

Martilla J. A., James J. C. (1977). Importance–performance analysis. Journal of Marketing, 41, 77–79.

Martin D. W. (1995). An importance/performance analysis of service providers’ perception of quality service in the hotel industry. Journal of Hospitality & Leisure Marketing, 3 (1), 5–16.

McDermott R. J., Sarvela P. D. (1999). Health education evaluation and measurement: A practitioner’s perspective, 2nd ed. Madison, WI: WCB/McGraw-Hill.

McGlynn E. A. (1997). Six challenges in measuring the quality of health care. Health Affairs, 16 (3), 7–21.

Newacheck P. W., Hughes D. C., Hung Y., Wong S., Stoddard J. J. (2000). The unmet health needs of America’s children. Pediatrics, 105 (4 Pt 2), 989–997.

Nitse P. S., Bush R. P. (1993). An examination of retail dental practices versus private dental practices using an importance–performance analysis. Health Marketing Quarterly, 11 (1/2), 207–221.

O’Leary S., Deegan J. (2005). Ireland’s image as a tourism destination in France: Attribute importance and performance. Journal of Travel Research, 43, 247–256.

Oliver R. L. (1997). Satisfaction: A behavioral perspective on the consumer. New York, NY: McGraw-Hill.

Olsson J., Elg M., Lindblad S. (2007). System characteristics of healthcare organizations conducting successful improvements. Journal of Health Organization and Management, 21 (3), 283–296.

Pichert J. W., Miller C. S., Hollo A. H., Gauld-Jaeger J., Federspiel C. F., Hickson G. B. (1998). What health professionals can do to identify and resolve patient dissatisfaction. Joint Commission Journal on Quality Improvement, 24, 303–312.

Schröder A., Larsson B. W., Ahlström G. (2007). Quality in psychiatric care: An instrument evaluating patients’ expectations and experiences. International Journal of Health Care Quality Assurance, 20 (2), 141–159.

Smith J. K., Falvo D., McKillip J., Pitz G. (1984). Measuring patient perceptions of the patient-doctor interaction: Development of the PDIS. Evaluation & the Health Professions, 7 (1), 77–84.

Stewart M., Meredith L., Brown J. B., Galajda J. (2000). The influence of older patient-physician communication on health and health-related outcomes. Clinics in Geriatric Medicine, 16 (1), 25–36, vii–viii.

Stewart M. A. (1995). Effective physician-patient communication on health and health-related outcomes: A review. Canadian Medical Association Journal, 152 (9), 1423–1433.

Taylor S. (1994). Waiting for service: The relationship between delays and evaluations of service. Journal of Marketing, 58, 56–69.

Taylor S. (1995a). Service quality and consumer attitudes: Reconciling theory and measurement. In Swartz T., Bowen D., Brown S. (Eds.) Advances in services marketing and management, vol. 4 (p. 136). Greenwich, CT: JAI Press.

Taylor S. (1995b). The effects of filled waiting time and service. In Swartz T., Iacobucci D. (Eds.) Handbook of services marketing and management (pp. 171–188). Thousand Oaks, CA: Sage Publications.

Teas R. K. (1993). Expectations, performance evaluation, and consumers’ perceptions of quality. Journal of Marketing, 57 (4), 18–46.

van Campen C., Sixma H., Friele R. D., Kerssens J. J., Peters L. (1995). Quality of care and patient satisfaction: A review of measuring instruments. Medical Care Research and Review, 52 (1), 109–133.

Vuori H. (2007). Introducing quality assurance: An exercise in audacity. International Journal of Health Care Quality Assurance, 20 (1), 10–15.


Key words: measurement of health care quality; pediatric health care quality; performance–importance

© 2013 Wolters Kluwer Health | Lippincott Williams & Wilkins

