Medical Care. August 2014 - Volume 52 - Issue 8
doi: 10.1097/MLR.0000000000000160
Original Articles

Development of the Primary Care Quality-Homeless (PCQ-H) Instrument: A Practical Survey of Homeless Patients’ Experiences in Primary Care

Kertesz, Stefan G. MD, MSc*; Pollio, David E. PhD†; Jones, Richard N. ScD‡; Steward, Jocelyn MSM§; Stringfellow, Erin J. MSW∥; Gordon, Adam J. MD, MPH¶; Johnson, Nancy K. RN, MPH#; Kim, Theresa A. MD**; Daigle, Shanette G. MPH††; Austin, Erika L. PhD#; Young, Alexander S. MD‡‡,§§; Chrystal, Joya G. MSW‡‡,§§; Davis, Lori L. MD∥∥; Roth, David L. PhD¶¶; Holt, Cheryl L. PhD##


Author Information

*Birmingham VA Medical Center, School of Medicine

†Department of Social Work, University of Alabama at Birmingham, Birmingham, AL

‡Alpert School of Medicine at Brown University, Providence, RI

§University of Alabama at Birmingham School of Health Related Professions, Birmingham, AL

∥George Warren Brown School of Social Work, Washington University in St. Louis, St. Louis, MO

¶VA Pittsburgh Health Care System, Center for Health Equity Research and Promotion, University of Pittsburgh School of Medicine, Pittsburgh, PA

#Birmingham VA Medical Center, Birmingham, AL

**Boston University School of Medicine, Boston Health Care for the Homeless Program, Boston, MA

††Department of Veterans Affairs, Birmingham/Atlanta Geriatric Research, Education, and Clinical Center (GRECC), University of Alabama at Birmingham, Birmingham, AL

‡‡VA Desert Pacific Mental Illness Research Education and Clinic Center (MIRECC)

§§Department of Psychiatry, Greater Los Angeles VA Healthcare Center, University of California Los Angeles, Los Angeles, CA

∥∥Tuscaloosa VA Medical Center, Tuscaloosa, AL

¶¶Johns Hopkins University, Center on Aging and Health, Baltimore

##University of Maryland School of Public Health, College Park, MD

Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website, www.lww-medicalcare.com.

Supported by the US Department of Veterans Affairs, Veterans Health Administration, Health Services Research & Development Branch Award (IAA 07-069-2).

Positions and opinions expressed here are those of the authors and do not represent views of the Department of Veterans Affairs or any other branch of the US government.

The authors declare no conflict of interest.

Reprints: Stefan G. Kertesz, MD, MSc, Division of Preventive Medicine, MT 608, 1720 2nd Ave S., Birmingham, AL 35294. E-mail: skertesz@uabmc.edu.


Abstract

Background:

Homeless patients face unique challenges in obtaining primary care responsive to their needs and context. Patient experience questionnaires could permit assessment of patient-centered medical homes for this population, but standard instruments may not reflect homeless patients’ priorities and concerns.

Objectives:

This report describes (a) the content and psychometric properties of a new primary care questionnaire for homeless patients; and (b) the methods utilized in its development.

Methods:

Starting with quality-related constructs from the Institute of Medicine, we identified relevant themes by interviewing homeless patients and experts in their care. A multidisciplinary team drafted a preliminary set of 78 items. This was administered to homeless-experienced clients (n=563) across 3 VA facilities and 1 non-VA Health Care for the Homeless Program. Using Item Response Theory, we examined Test Information Function (TIF) curves to eliminate less informative items and devise plausibly distinct subscales.

Results:

The resulting 33-item instrument (Primary Care Quality-Homeless) has 4 subscales: Patient-Clinician Relationship (15 items), Cooperation among Clinicians (3 items), Access/Coordination (11 items), and Homeless-specific Needs (4 items). Evidence for divergent and convergent validity is provided. TIF graphs showed adequate informational value to permit inferences about groups for 3 subscales (Relationship, Cooperation, and Access/Coordination). The 3-item Cooperation subscale had lower informational value (TIF<5) but had good internal consistency (α=0.75) and patients frequently reported problems in this aspect of care.

Conclusions:

Systematic application of qualitative and quantitative methods supported the development of a brief patient-reported questionnaire focused on the primary care of homeless patients; this approach offers guidance for future population-specific instrument development.

On a single winter night in 2013, a total of 610,042 Americans were counted as homeless, including 57,849 US military veterans,1 and the count is considerably higher when homelessness is measured over a full year.2 The vulnerability of homeless individuals is reflected in excess mortality,3–6 hospital utilization,7,8 and poor health.9 Their access to health care is typically poor,10–12 and they often feel unwelcome in care.13 Programmatic efforts to remediate access barriers began with Health Care for the Homeless Programs, first supported by private foundations and then by the US Department of Health and Human Services.14 In recent years, the US Department of Veterans Affairs (VA) initiated 38 homeless-focused primary care programs.15 High-quality primary care for homeless persons could, in principle, ameliorate disparities and produce cost offsets elsewhere (eg, fewer emergency room visits and hospitalizations), and perhaps contribute to the reduction of homelessness.16

Assessing the provision of high-quality primary care for homeless persons faces challenges of operationalization and measurement. Single-disease performance metrics can be problematic in their application to special or multimorbid populations and in situations where the context of care should influence decision making.17–19 Patient-centric approaches to primary care have gained in popularity, including patient-centered medical homes (PCMHs) and the VA’s Patient Aligned Care Teams.20 These changes in care delivery have contributed to increased interest in patient assessments of care and team-based care,21 and in whether care approximates priorities identified by expert consensus groups [ie, the Institute of Medicine (IOM)]. Relatively little is known about homeless patients’ perceptions of key aspects of care such as accessibility, continuity, and coordination, principles enshrined in the Consumer Assessment of Health Plans (CAHPS)22 and the Primary Care Assessment Survey (PCAS).23

Administration of the CAHPS with PCMH items is required of federal Health Care for the Homeless programs seeking PCMH status, and CAHPS items are now used within the VA’s Survey of Healthcare Experiences of Patients.23–25 These surveys are potentially problematic when applied to homeless patients. The CAHPS presents 43 questions (1012 words) at a ninth grade reading level.26 Twelve items are used to implement skips among the remaining 31 items, and 7 different response sets are used. For clients who are ill-rested or cognitively impaired, the risk of error or overload may be high. Questions may presuppose conditions and expectations that do not necessarily apply. More pressingly, specific concerns and aspirations important to homeless patients are likely to differ from the concepts queried in standard instruments, including the pressure to balance health care against competing demands,27 perceptions of being unwelcome or adversely judged,13,28,29 mutual mistrust, and other unique constraints.30

These concerns spurred development of a patient-reported instrument specifically designed to assess homeless patients’ experiences in primary care, applicable in VA and non-VA settings alike. The purpose of this report is 2-fold: to portray the process and psychometrics supporting a new survey tool focused on primary care for homeless individuals; and to illustrate the combined qualitative and quantitative procedures that can support the development of patient-reported care surveys for patient populations with unique concerns and needs.


METHODS

The method of instrument development proceeded from 3 assumptions about measurement of patient care in a homeless population. First, general constructs relevant to quality ought to derive from the IOM’s definition of primary care31 and its Rules for Quality.32 This approach was embraced by the PCAS23 and the Primary Care Assessment Tool.33 Second, homeless patients’ needs and concerns are unique, requiring qualitative inquiry to guide item development.13,34 Third, to validate the results from these assumptions, the final instrument had to demonstrate adequate psychometric properties. On the basis of these assumptions, we sought to develop an instrument that was both sensitive to homeless patients’ concerns and practical for administration in resource-constrained clinical settings such as federally qualified health centers and volunteer clinics, in addition to more standard primary care and research contexts. The specific steps involved in the development of the instrument are detailed below and outlined graphically in Supplemental Digital Content Figure 1, Supplemental Digital Content 1 http://links.lww.com/MLR/A742.

Preliminary Identification of Constructs

Two reports from the IOM were used to preliminarily identify 16 constructs potentially appropriate for inquiry, with the expectation that these constructs would form the basis of subscales in the final instrument. These sources were the IOM’s “10 Rules for Quality”32 and elements from the IOM’s definition of primary care.31 The constructs include general concepts such as care being accessible and characterized by evidence-based decision making.

Prioritization of Constructs for Inclusion

A card sort ranking exercise, described elsewhere,35 was used to narrow the 16 preliminary constructs to 8, a number addressable in qualitative interviews. Briefly, each of the constructs was restated in simple declarative form (eg, the IOM priority of accessibility became “Primary care should be easy to get”). Patients (n=26) from homeless service settings and experts in homeless health care (n=10) were asked to sort the cards with their highest priority at the top (Supplemental Digital Content Table 1, Supplemental Digital Content 2, http://links.lww.com/MLR/A743, provides the working definition for each of the 8 constructs emergent from this exercise).

Qualitative Interviews

On the basis of the 8 prioritized constructs, semistructured qualitative interviews were used to identify key themes for question content, supplemented by 4 focus groups to confirm themes related to unanticipated constructs that emerged from the interviews. Patient interviewees were recruited from a non-VA Health Care for the Homeless Program (n=20)36 and from a VA hospital (n=16). In addition, 24 interviews were obtained from homeless care provider/experts (clinicians, administrators, and homeless researchers) from North America. Recruitment intentionally balanced veteran-focused with nonveteran-focused interviewees and frontline clinicians with researchers and program leaders. While patient-level interviews occurred in person, experts were often interviewed by telephone.

The interviews used a semistructured guide that differed only slightly in the questions for patients and provider/experts. The typical qualitative interview prompt offered a brief, plain-English restatement of the construct of interest using open-ended language to encourage new interpretations, including unanticipated constructs that might emerge. Thus a query related to “Care Based on Medical Evidence” was: “What do you think about the idea that your primary care should be based on the best medical knowledge?”

Two follow-up probes were: “What makes you say that?” and “How about times when you didn’t have a regular place to live? Did/does that make it different?”

Interviewers were trained by a team of 2 experienced faculty (authors C.H. and D.E.P.); training included video-taped mock interviews, performance analysis, and feedback. Interviews were digitally recorded and transcribed. For both the qualitative interviews and the administration of the Primary Care Quality-Homeless (PCQ-H) version 1.0 reported below, participants underwent a structured informed consent process and received modest remuneration. All procedures were approved by the Institutional Review Boards at all facilities involved with the study.

Qualitative Analysis

Interviews were coded for themes to guide survey item development. The coding approach, Template Analysis,37,38 begins with identification of concepts within the investigators’ a priori framework (in this case, the IOM constructs).39,40 Coders worked independently only after achieving interrater reliability of ≥75%. For each overarching construct, 3–8 themes emerged inductively, often with subsidiary subthemes. Three entirely new constructs emerged as well (Trust/Respect, Homeless-specific Needs, and Substance Abuse/Mental Illness), producing a total of 11 constructs of interest for the anticipated survey. We reviewed all proposed themes, organizing and refining until consensus was reached.

Item Generation

For each of the 11 constructs, 18–50 items were drafted based on our review of the most evocative qualitative interview quotes pertinent to each construct. All items were reviewed by a multidisciplinary team, with backgrounds in homeless primary care delivery, social work, psychology, nursing, medicine, and survey design. A consensus voting process prioritized 7–8 items per construct to provide a workable number for testing. The resulting 78 items underwent cognitive interviewing (n=12) to identify item interpretation problems. Only slight wording changes resulted from this exercise.

Administration for Psychometric Analysis and Validation

The preliminary PCQ-H (version 1.0) included 78 items for 11 constructs (Table 1). To simplify administration in resource-poor environments with low-literacy populations, the survey avoided skips and applied a uniform 4-point Likert-type response (Strongly Agree to Strongly Disagree, permitting “I don’t know/no response” as an option). The survey was administered to 563 persons who used services at Health Care for Homeless Veterans programs at 3 VA facilities and 1 non-VA Health Care for the Homeless Program. Recruitment is detailed separately but summarized here.41 Eligibility was restricted to persons who had recorded evidence of past or current homelessness and 2 or more visits to a primary care provider in the past 2 years.


Two questionnaires were administered for divergent/convergent validity analyses. For divergent validity, we projected weak or no correlation between the PCQ-H and scores on a construct that might represent a “rival hypothesis” for the response patterns obtained; for this we used a short measure of distressing psychiatric symptoms validated in a large national homeless sample, the Colorado Symptom Index42 (the putative rival hypothesis being that persons with psychological distress will give less favorable reports of primary care). For convergent validity, we used the PCAS, scored with the single-dimension approach published by Roumie and colleagues (α=0.93).23,43
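To make the planned comparisons concrete, the following is a minimal sketch in R (not the authors’ analysis code); the data file and column names (pcqh_total, pcas_total, csi_total) are hypothetical stand-ins for the scored instruments described above.

    # Hypothetical analytic file: one row per respondent, with total scores for
    # the PCQ-H, the PCAS single-dimension score, and the Colorado Symptom Index.
    dat <- read.csv("pcqh_validation.csv")

    cor.test(dat$pcqh_total, dat$pcas_total)  # convergent validity: expect a substantial positive correlation
    cor.test(dat$pcqh_total, dat$csi_total)   # divergent validity: expect little or no (or weakly inverse) correlation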

Item Selection

The results from survey administration to the 563-person sample supported an item reduction exercise intended to minimize length while exploring the statistical correlation among the 11 hypothesized subscales. The intent of this process was to retain items that were informative across the range of the latent construct (a more or less favorable view of care with respect to the applicable subscale) and to avoid burdening respondents with correlated but fundamentally redundant items (“bloated specific” scales44).

The steps to accomplish this included a preliminary confirmatory factor analysis, with all subsequent work based on Item Response Theory (IRT). IRT offers advantages in shortening scales by separating consideration of informational value from item “difficulty,” that is, the position along the dimension under study at which the item is most informative. Although IRT pertains to a family of models, they share a common set of assumptions termed unidimensionality and local independence.45 Unidimensionality presumes that item responses depend on a single underlying dimension common to all items in the test. Local independence assumes that, conditional upon the underlying common trait, item responses are uncorrelated. Although local dependence and multidimensionality (ie, violations of the core assumptions) are related, pairs of items can remain correlated after controlling for the underlying trait, for example, through redundancy in content; such dependence may not be strong enough, or shared among enough items, to appear as multidimensionality (ie, it is local to the item pair).
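Stated formally (a standard textbook formulation, not specific to this instrument): unidimensionality holds when a single scalar trait θ underlies all item responses, and local independence requires that, for any 2 items i and j,

    P(Xi = xi, Xj = xj | θ) = P(Xi = xi | θ) × P(Xj = xj | θ),

so that any remaining association between an item pair after conditioning on θ signals local dependence.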

In our analysis, we addressed the unidimensionality assumption by first subjecting our items to the preliminary confirmatory factor analysis and then fitting IRT models to subsets of items identified as loading on individual factors. We addressed local dependence by carefully reviewing preliminary IRT model results and scrutinizing items with high loadings and overlapping item characteristic curves for redundancy in content, removing items that were highly redundant.

The item reduction process was as follows:

First, 5 items were dropped because >10% of respondents could not answer them, leaving 73 items. Second, for a preliminary assessment of scale correlation, confirmatory factor analysis was applied to all 73 items, with factor loadings specified according to the 11 hypothesized subscales/constructs, treating the items as ordered categories and using the Mplus default WLSMV estimation method.46 Although correlation between subscales is typical for patient questionnaires,23,24,47 to avoid excessive correlation, subscales were merged until no pairwise correlation exceeded 0.8 (resulting in 4 subscales). Subsequent work on the PCQ-H, however, was not based on factor analysis. Third, IRT (2-parameter graded response analysis48) was applied to identify items that discriminate optimally across the range of the latent trait. The model permits calculation of the informational value of each item relative to the inferred construct. The modeling accounts for both discrimination (the strength of the association between the item and the construct) and location (where along the spectrum of the construct an item is most informative). The informational value of a collection of items is presented as a Test Information Function (TIF) curve, with the x-axis representing variation in the latent construct and the y-axis the informational value of the set of items. It was hypothesized that a minimum test information value of 5 permits inferences about groups and 10 permits inferences about individuals (analogous to reliability of 0.8 and 0.9, respectively).49 This portion of the analysis was conducted using the ltm package of the R programming environment.50
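As an illustration of this step only (a sketch under assumptions, not the authors’ analysis code), the ltm package cited above can fit a graded response model to a hypothetical data frame holding one subscale’s item responses and display its TIF; the thresholds of 5 and 10 are consistent with the usual approximation reliability ≈ 1 − 1/TIF when θ is scaled to unit variance.

    library(ltm)

    # 'relationship_items' is a hypothetical data frame of ordered 1-4 responses
    # (Strongly Disagree ... Strongly Agree) to the items of one subscale.
    fit <- grm(relationship_items)

    coef(fit)                                # per-item discrimination and location (extremity) parameters
    plot(fit, type = "IIC", items = 0)       # items = 0 plots the Test Information Function curve
    information(fit, range = c(-1.5, 1.5))   # information captured within the "active" range of theta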

Fourth, at each step in the IRT analysis, TIF curves were reviewed, with a focus on TIF values in the active range (θ=−1.5 to +1.5). Fifth, automated algorithms were applied to retain items of maximum informational value, leaving 14 items. Because the resulting TIF curves consistently fell below 5, items were reinserted or removed according to prespecified criteria: (a) items with unfavorable responses from >13% of respondents were retained (a cutpoint selected empirically after reviewing the range of unfavorable percentages); (b) the 2 most informative items for each of the original 11 constructs were retained; and (c) 3 items that were verbally and statistically redundant with other items were removed.
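A minimal sketch of the 2 frequency-based screening rules above (again using a hypothetical data frame, with unfavorable responses assumed to be coded 1 or 2 on the 4-point scale and “I don’t know/no response” coded as missing):

    # 'items' is a hypothetical respondent-by-item data frame of 1-4 ratings,
    # with NA for "I don't know/no response."
    dont_know_rate <- colMeans(is.na(items))
    items <- items[, dont_know_rate <= 0.10]               # drop items that >10% of respondents could not answer

    unfavorable_rate <- colMeans(items <= 2, na.rm = TRUE) # assumes 1-2 = unfavorable on the agreement scale
    names(items)[unfavorable_rate > 0.13]                  # items flagged for retention under criterion (a)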

Sixth, to assure that the internal consistency of the PCQ-H subscales could be compared with other published instruments,51,52 the commonly used Cronbach α was computed for each subscale, for which an optimum of 0.7 or 0.8 is considered desirable for inferences concerning groups.49 Because of α’s known limitations,53 we also report McDonald ωt.54 Finally, in light of prior studies suggesting that a single higher-order factor often fits response patterns obtained from patient experience questionnaires43,47,55 (an approach that has permitted a single overall score with plausible fit to date43), a single-scale solution including all items was checked for adequacy.
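For example (a sketch only, using the same hypothetical data frame as above), both reliability statistics are available in R: Cronbach α through the ltm package and an omega-total estimate through the psych package.

    library(ltm)
    library(psych)

    cronbach.alpha(relationship_items, CI = TRUE)   # Cronbach alpha for one subscale, with a bootstrap CI
    omega(relationship_items)                       # reports omega_total (McDonald's omega_t) among its output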


RESULTS

Construct Selection

From the card sort exercise (step 2 above), the 8 most highly rated constructs targeted for qualitative interviews were: Accountability, Integration/Coordination, Evidence-based Decision-Making, Accessibility, Patient as the Source of Control, Cooperation among Clinicians, Continuous Healing Relationships, and Shared Knowledge and the Free Flow of Information. In addition, 3 novel constructs emerged from the qualitative interviews (Homeless-specific Needs, Trust/Respect, and Substance Abuse/Mental Illness). From the themes emergent within these 11 constructs, 78 items were included in the development version of the PCQ-H.

Sample Characteristics

The 563 persons who were administered the development version were racially diverse (58% black, 31% white, 12% other), and 14% were women (Table 1). Military service was common (71%); 65% of respondents were recruited from VA settings. Although all respondents had a history of homelessness, few had slept in shelters or on the streets in the preceding 14 days (12%). However, 65% had prior homelessness exceeding 1 year. All had ongoing primary care, with duration of care >2 years for 56% of respondents.

Psychometric Validation

Serial merging of highly correlated scales and subsequent respecification resulted in 4 subscales that served as the basis for the subsequent item response analysis. The decision to merge hypothesized subscales where correlations were very high (r>0.8) meant that these 4 subscales varied in the number of retained items (Table 2). For example, the Patient-Clinician Relationship subscale (a combination of 7 hypothesized subscales) had 15 items, whereas the subscale reflecting perceptions of Cooperation among caregivers (derived from 1 hypothesized subscale) had 3 items.


Given that survey items used a 4-point response scale, the mean scores (Table 2) reflect a general tendency for patients to give favorable rather than unfavorable ratings, a pattern reported with most other primary care instruments.52,56 The pairwise correlations among the 4 retained subscales (Table 3) remained substantial (r=0.51–0.78), although not different from those of the benchmark CAHPS Adult Core Survey.24


The “active” range for a TIF curve refers to the area where most respondents fall with respect to the modeled trait θ (in our sample, typically −1.5≤θ≤1.5). TIF curves (Figs. 1A–D) show that the informational value mostly exceeded the desired optimum of 5 in the active range for the Relationship, Access/Coordination, and Homeless-specific Needs subscales. Peaks and valleys in the TIF curves indicated that, for each subscale, variation in item responses was more informative at certain locations along the modeled trait θ. Broadly, this reflected greater informational precision (and firmer inferences) where θ was distinctly low or distinctly high, with less information at middling levels of θ.


The initial TIF curve for Cooperation met the study criterion of TIF>5, but the subscale suffered from highly redundant items (eg, “My primary care and other health care providers are working together to come up with a plan to meet my needs” and “My health care is better because my primary care and other health care providers work together”). Three of the 6 items were dropped because of this semantic redundancy (their responses were highly correlated, r>0.7). The resulting subscale was not highly correlated with the other 3 and elicited frequent reports of dissatisfaction (Table 2). Its TIF curve fell below the optimum of 5, but Cronbach α was acceptable at 0.75.

Within each subscale, internal consistency estimates were relatively high (Cronbach α=0.92, 0.75, 0.87, and 0.76 for Relationship, Cooperation, Access/Coordination, and Homeless-specific Needs, respectively), and α=0.96 for the single 33-item summative scale.

For a single factor solution, based on all 33 retained items, the TIF curve exceeded 20 across the active range (image not shown). All TIF curves fell below the criterion of 5 at the extremes, where very few respondents were located.

Convergent validity was robust: the overall PCQ-H score correlated with Roumie and colleagues’ single factor-derived score for the PCAS (r=0.73, P<0.001).23,43 There was only a modest inverse correlation between psychiatric distress (Colorado Symptom Index) and the overall PCQ-H score (r=−0.13, P=0.002), supporting divergent validity.

The final PCQ-H instrument included 33 items, with a seventh grade reading level (694 words).


DISCUSSION

Challenges such as poor accessibility,57 uncoordinated care,58 and feeling unwelcome13 in care are not unique to persons who are homeless, but they are often crucial barriers to appropriate care. With increasing interest in population-tailored service delivery models, the PCQ-H instrument should resonate with persons who have experienced homelessness. Questions regarding accessibility, for example, ask about outreach services, walking in for care (as opposed to telephoning for care, which is emphasized in the industry-standard CAHPS), and payment barriers. Questions regarding the Patient-Clinician Relationship query matters of control, trust, respect, and perceptions of competence.

A substantial conceptual strength of the PCQ-H lies in a development process that integrates 2 divergent survey development traditions, which can be termed deductive (top-down) and inductive (bottom-up) approaches. Specifically, foundational surveys (including the Primary Care Assessment Tool and the PCAS23,33) started with principles laid out by the IOM (including the notion that primary care be integrated, accessible, and continuous),31 followed by expert question design, subject later to cognitive response interviews, focus groups, and patient testing. While covering a range of expert-defined domains, this approach risks missing or underemphasizing constructs of concern to particular populations.

A contrasting “bottom-up” tradition begins with qualitative inquiry among patients, exemplified by the Homeless Satisfaction with Care Scale (HSCS).59 The HSCS team first queried “satisfaction” qualitatively. The resultant HSCS emphasizes respect, stigma, and trust, although it does not query many experiential domains named by the IOM (eg, continuity, coordination, cooperation). The PCQ-H, like the CAHPS, strives to query patient experiences (rather than the HSCS’s “satisfaction”60). However, as with the HSCS, qualitative inquiry from patients determined what would be queried.

The resulting 33-item PCQ-H attained criteria for convergent and divergent validity. In addition, criterion validity is suggested by the finding that PCQ-H scores are higher in settings that tailor primary care service design to meet the needs of homeless patients.41 The informational value for each subscale varies. If one adheres to the optimum standards for IRT analysis, inferences about groups (TIF>5) can be made for all but the Cooperation subscale. Informational performance is strong enough to permit inference about individuals for the Relationship subscale and for the overall PCQ-H score (TIF>10).

A potential limitation is that the 3-item Cooperation subscale fell short of the optimum TIF>5 threshold. However, its α of 0.75 is higher than that reported for 3 of the 5 scales finalized in the CAHPS 2.0 Adult Survey,24 higher than most α values computed for the CAHPS PCMH instrument (0.61, 0.62, 0.68, 0.74, 0.85, and 0.91),61 and within the range of those reported for the PCAS (0.74–0.95).

Finally, the PCQ-H queries concepts described by patients and provider/experts through an extensive interview process. For example, questions about accessibility incorporate items focused on the ease of walking in for care and expectations of outreach. Issues related to mental health and addiction, which featured prominently in our qualitative interviews, are queried through items designed to elicit concerns that are common in this population (eg, fear of discrimination) while using language that does not require self-report of actually having a mental or addictive disorder.

Federal and state-level support for credentialing PCMH models within entities such as federally qualified health centers (including Health Care for the Homeless programs)62 makes this instrument a potential asset to such initiatives. The PCQ-H, at 694 words with a seventh grade reading level (Flesch-Kincaid), is shorter and easier to read than the CAHPS Adult Survey with PCMH items. Internal consistency estimates (α) were higher than or similar to those published for the CAHPS adult core survey24 and the clinician and group visit survey.51 In 1 clinical setting where both the PCQ-H and the CAHPS have been used for nonresearch purposes (with roughly 200 patients responding to each), the PCQ-H was described by patients as straightforward, whereas the CAHPS prompted frequent questions from patients unsure of how to respond (C. Leon, personal oral communication, 2013).

Limitations to the PCQ-H and its development should be acknowledged. First, our reliance on 3 VA samples and a health care program from a state with universal Medicaid limited our capacity to carefully test item performance in relation to financial accessibility. To assure that the resulting instrument would remain applicable in settings with financial barriers, some items related to financial accessibility were retained. In addition, although the instrument met study criteria for validity, the stability of response over time remains unclear, pending a formal test-retest assessment.

Acknowledging these limitations, certain unique strengths apply to the instrument development process as well. Most notably, although the PCQ-H was designed to capture domains prioritized in IoM consensus reports, item creation was uniformly preceded by systematic qualitative inquiry with homeless-experienced patients and providers.

We believe the PCQ-H should serve as an asset to care providers and payers wishing to assure that organizations funded to care for homeless patients tailor services for this population. Absent an appropriate patient-reported measure, it will remain possible for agencies to secure homeless health care funding without optimizing accessibility (as has been reported12) or other dimensions important to the homeless. One question for future homeless health care design is whether patient ratings of their own care predict better process or outcome measures or more contextually appropriate decisions.63 Pending such research, however, a strong case can be made that measuring homeless patients’ experiences aligns with a societal interest in fostering medical homes for all populations.

Note: An English-language copy of the 33-item Primary Care Quality-Homeless instrument is provided as Supplemental Digital Content 3, http://links.lww.com/MLR/A744, along with an automatic scoring worksheet in Excel format (Supplemental Digital Content 4, http://links.lww.com/MLR/A745). A certified Spanish-language version is available on request from the authors.


ACKNOWLEDGMENTS

The authors acknowledge their funding source (VA Health Services Research and Development) and note, as required by the VA, that the views expressed are their own and not those of the Federal Government.

The authors gratefully acknowledge the contributions of Allison Borden, Bridgett Alday, John Andrew Young, Calvin Elam, Sonia Schwartz, Lori Henault, John D. Harding, Jr, Stephen R. Henry, Dawn L. Glover, and Jorge Avila.


REFERENCES

1. Office of Planning and Community Development. The 2013 Annual Homeless Assessment Report (AHAR) to Congress, Part 1: Point-in-Time Estimates of Homelessness. Washington, DC: United States Department of Housing and Urban Development; 2013.

2. Burt MR, Aron LY, Lee E, et al. Helping America’s Homeless: Emergency Shelter or Affordable Housing? Washington, DC: The Urban Institute Press; 2001.

3. Beijer U, Wolf A, Fazel S. Prevalence of tuberculosis, hepatitis C virus, and HIV in homeless people: a systematic review and meta-analysis. Lancet Infect Dis. 2012;12:859–870.

4. Hwang SW. Mortality among men using homeless shelters in Toronto, Ontario. JAMA. 2000;283:2152–2157.

5. Hwang SW, Orav EJ, O’Connell JJ, et al. Causes of death in homeless adults in Boston. Ann Intern Med. 1997;126:625–628.

6. Hibbs JR, Benner L, Klugman L, et al. Mortality in a cohort of homeless adults in Philadelphia. N Engl J Med. 1994;331:304–309.

7. Buck DS, Brown CA, Mortensen K, et al. Comparing homeless and domiciled patients’ utilization of the Harris County, Texas public hospital system. J Health Care Poor Underserved. 2012;23:1660–1670.

8. Salit SA, Kuhn EM, Hartz AJ, et al. Hospitalization costs associated with homelessness in New York City. N Engl J Med. 1998;338:1734–1740.

9. Gelberg L, Linn LS. Assessing the physical health of homeless adults. JAMA. 1989;262:1973–1979.

10. Kushel MB, Vittinghoff E, Haas JS. Factors associated with the health care utilization of homeless persons. JAMA. 2001;285:200–206.

11. Baggett TP, O’Connell JJ, Singer DE, et al. The unmet health care needs of homeless adults: a national study. Am J Public Health. 2010;100:1326–1333.

12. Kertesz SG, McNeil W, Cash JJ, et al. Unmet need for medical care and safety net accessibility among Birmingham’s homeless. J Urban Health. 2014;91:33–45.

13. Wen CK, Hudak PL, Hwang SW. Homeless people’s perceptions of welcomeness and unwelcomeness in healthcare encounters. J Gen Intern Med. 2007;22:1011–1017.

14. Vladeck BC. Health care and the homeless: a political parable for our time. J Health Polit Policy Law. 1990;15:305–317.

15. O’Toole TP, Bourgault C, Johnson EE, et al. New to care: demands on a health system when homeless veterans are enrolled in a medical home model. Am J Public Health. 2013;103(suppl 2):S374–S379.

16. Han B, Wells BL. Inappropriate emergency department visits and use of the Health Care for the Homeless Program services by homeless adults in the northeastern United States. J Public Health Manag Pract. 2003;9:530–537.

17. Boyd CM, Darer J, Boult C, et al. Clinical practice guidelines and quality of care for older patients with multiple comorbid diseases: implications for pay for performance. JAMA. 2005;294:716–724.

18. Durso SC. Using clinical guidelines designed for older adults with diabetes mellitus and complex health status. JAMA. 2006;295:1935–1940.

19. Weiner SJ. Contextualizing medical decisions to individualize care: lessons from the qualitative sciences. J Gen Intern Med. 2004;19:281–285.

20. True G, Butler AE, Lamparska BG, et al. Open access in the patient-centered medical home: lessons from the Veterans Health Administration. J Gen Intern Med. 2013;28:539–545.

21. Gerteis M, Edgman-Levitan S, Daley J, et al. Through the Patient’s Eyes: Understanding and Promoting Patient-Centered Care. Paperback ed. San Francisco: Jossey-Bass; 1993.

22. Crofton C, Lubalin JS, Darby C. Consumer Assessment of Health Plans Study (CAHPS). Foreword. Med Care. 1999;37:MS1–MS9.

23. Safran DG, Kosinski M, Tarlov AR, et al. The Primary Care Assessment Survey: tests of data quality and measurement performance. Med Care. 1998;36:728–739.

24. Hargraves JL, Hays RD, Cleary PD. Psychometric properties of the Consumer Assessment of Health Plans Study (CAHPS) 2.0 adult core survey. Health Serv Res. 2003;38:1509–1527.

25. Campbell SM, Braspenning J, Hutchinson A, et al. Research methods used in developing and applying quality indicators in primary care. BMJ. 2003;326:816–819.

26. Agency for Healthcare Research and Quality. CAHPS Clinician & Group Surveys: 12-Month Survey with Patient Centered Medical Home (PCMH) Items. Washington, DC: United States Department of Health and Human Services; 2012.

27. Gelberg L, Gallagher TC, Andersen RM, et al. Competing priorities as a barrier to medical care among homeless adults in Los Angeles. Am J Public Health. 1997;87:217–220.

28. Ensign J, Panke A. Barriers and bridges to care: voices of homeless female adolescent youth in Seattle, Washington, USA. J Adv Nurs. 2002;37:166–172.

29. Merrill JO, Rhodes LA, Deyo RA, et al. Mutual mistrust in the medical care of drug users: the keys to the “narc” cabinet. J Gen Intern Med. 2002;17:327–333.

30. Shortt SE, Hwang S, Stuart H, et al. Delivering primary care to homeless persons: a policy analysis approach to evaluating the options. Healthcare Policy. 2008;4:108–122.

31. Committee on the Future of Primary Care, Institute of Medicine. Primary Care: America’s Health in a New Era. Washington, DC: National Academy Press; 1996.

32. Committee on Quality of Health Care in America, Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

33. Shi L, Starfield B, Xu J. Validating the adult Primary Care Assessment Tool. J Fam Pract. 2001;50:161–175.

34. Sofaer S. Qualitative methods: what are they and why use them? Health Serv Res. 1999;34:1101–1118.

35. Steward JL, Holt CL, Pollio DE, et al. Priorities in the primary care of persons experiencing homelessness: convergence and divergence in the views of patients and provider/experts. Under review, 2013.

36. O’Connell JJ, Oppenheimer SC, Judge CM, et al. The Boston Health Care for the Homeless Program: a public health framework. Am J Public Health. 2010;100:1400–1408.

37. King N. Doing template analysis. In: Symon G, Cassell C, eds. Qualitative Organizational Research: Core Methods and Current Challenges. Los Angeles; London: Sage; 2012:426–450.

38. Template Analysis—What is Template Analysis. 2004. Available at: http://hhs.hud.ac.uk/w2/research/template_analysis/. Accessed June 16, 2013.


39. Bradley EH, Curry LA, Devers KJ. Qualitative data analysis for health services research: developing taxonomy, themes, and theory. Health Serv Res. 2007;42:1758–1772.

40. Miles MB, Huberman AM. Qualitative Data Analysis: An Expanded Sourcebook. 2nd ed. Thousand Oaks: Sage; 1994.

41. Kertesz SG, Holt CL, Steward JL, et al. Comparing homeless persons’ care experiences in tailored versus nontailored primary care programs. Am J Public Health. 2013;103(suppl 2):S331–S339.

42. Conrad KJ, Yagelka JR, Matters MD, et al. Reliability and validity of a modified Colorado Symptom Index in a national homeless sample. Ment Health Serv Res. 2001;3:141–153.

43. Roumie CL, Greevy R, Wallston KA, et al. Patient centered primary care is associated with patient hypertension medication adherence. J Behav Med. 2011;34:244–253.

44. Boyle GJ. Does item homogeneity indicate internal consistency or item redundancy in psychometric scales? Pers Indiv Dif. 1991;12:291–294.

45. DeMars C. Item Response Theory. Oxford; New York: Oxford University Press; 2010.

46. Muthén LK, Muthén BO. Mplus User’s Guide. 6th ed. Los Angeles, CA: Muthén & Muthén; 1998–2010.

47. Marshall GN, Hays RD, Sherbourne CD, et al. The structure of patient satisfaction with outpatient medical care. Psychol Assess. 1993;5:477–483.

48. Samejima F. Graded response model. In: Van der Linden W, Hambleton RK, eds. Handbook of Item Response Theory. New York: Springer-Verlag; 2010.

49. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.

50. Rizopoulos D. ltm: an R package for latent variable modeling and item response analysis. J Stat Softw. 2006;17 (online article).


51. Dyer N, Sorra JS, Smith SA, et al. Psychometric properties of the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) Clinician and Group Adult Visit Survey. Med Care. 2012;suppl:S28–S34.

52. Hays RD, Shaul JA, Williams VS, et al. Psychometric properties of the CAHPS 1.0 survey measures. Consumer Assessment of Health Plans Study. Med Care. 1999;37:MS22–MS31.

53. Revelle W, Zinbarg R. Coefficients alpha, beta, omega, and the glb: comments on Sijtsma. Psychometrika. 2009;74:145–154.

54. McDonald RP. Test Theory: A Unified Treatment. Mahwah, NJ: L. Erlbaum Associates; 1999.

55. Reise S, Morizot J, Hays R. The role of the bifactor model in resolving dimensionality issues in health outcomes measures. Qual Life Res. 2007;16(suppl 1):19–31.

56. Ware JE Jr, Davies-Avery A, Stewart AL. The measurement and meaning of patient satisfaction. Health Med Care Serv Rev. 1978;1:3–15.

57. Hwang SW, Ueng JJ, Chiu S, et al. Universal health insurance and health care access for homeless persons. Am J Public Health. 2010;100:1454–1461.

58. Blue-Howells J, McGuire J, Nakashima J. Co-location of health care services for homeless veterans: a case study of innovation in program implementation. Soc Work Health Care. 2008;47:219–231.

59. Macnee CL, McCabe S. Satisfaction with care among homeless patients: development and testing of a measure. J Community Health Nurs. 2004;21:167–178.

60. Sofaer S, Firminger K. Patient perceptions of the quality of health services. Annu Rev Public Health. 2005;26:513–559.

61. Scholle SH, Vuong O, Ding L, et al. Development of and field test results for the CAHPS PCMH Survey. Med Care. 2012;suppl:S2–S10.

62. Health Resources and Services Administration. Program Assistance Letter: HRSA Patient-Centered Medical/Health Home Initiative. Washington, DC: Health Resources and Services Administration; 2011.

63. Weiner SJ, Schwartz A, Weaver F, et al. Contextual errors and failures in individualizing patient care: a multicenter study. Ann Intern Med. 2010;153:69–75.

Keywords:

patient satisfaction; item response theory; patient-centered care; survey methodology; patient-centered outcomes research; homeless persons; homeless health care

Copyright © 2014 by Lippincott Williams & Wilkins
