According to the American Cancer Society,1 more than 9 million Americans with a history of cancer are undergoing cancer treatment. Approximately 60% of these cancer patients are expected to survive at least 5 years, and most will have to contend with short- and long-term consequences of cancer and its treatment. Disease- or treatment-related symptoms negatively affect patients’ functional status and quality of life2–4 and even have the potential to decrease rates of locoregional tumor control and survival.5,6 The development of effective and consistent symptom assessment is critical not only to understanding patients’ experiences but also to providing vital information for monitoring cancer treatment and guiding clinical care to prevent or minimize untoward outcomes of cancer care.
Clinicians assume the primary responsibility for the surveillance and evaluation of symptoms associated with disease and treatment,7,8 whereas the use of patient-reported outcome (PRO) measures in cancer clinical trials has increased substantially as a result of a series of recommendations and standards promulgated by professional societies and regulatory agencies.9,10 However, evidence shows discrepancies between clinicians’ and patients’ symptom assessments,11,12 and these inconsistencies can lead to unrecognized symptom experiences and ineffective treatment plans.
To fully appreciate the ways in which clinician-observed symptoms differ from PROs, it is important to examine research that compares symptom assessments and quantifies the discrepancies. This article presents an integrative review of studies contrasting patient-reported symptoms to clinician-observed symptoms in oncology care. As the basis for comparison, 4 dimensions of symptoms are evaluated,13 including incidence (frequency of symptom occurrence), severity (strength of the symptom being experienced), distress (the degree to which the person is bothered by a symptom), and duration. Additionally, the comparison involves symptoms with and without an observable component and the relationships of symptoms to clinical outcomes.
A literature search was conducted in PubMed, CINAHL, and the Cochrane Database using the following key words: symptom or toxicity, and patient-reported, patient-rated, patient-assessed, or patient-evaluated, which were combined with cancer, oncology, neoplasm, or tumor. The search was limited to articles published in English from 1950 to 2011. One hundred sixty-seven abstracts were identified in this way for the initial review. An additional 43 articles were found by carefully reviewing reference lists and bibliographies from primary sources and citations listed as “relevant” in PubMed. Selection was based on the following inclusion and exclusion criteria: any empirical, data-based publication comparing patient-reported symptoms and clinician-observed toxicities in oncology was included; publications that examined only patient-reported symptoms or only clinician-observed toxicity were excluded, along with duplicate articles. Titles and abstracts served as the primary information source to evaluate eligibility for inclusion (Figure).
A total of 36 empirical articles (Table), published from 1989 to 2011, were identified after excluding 147 articles that did not meet the inclusion criteria and 27 duplicates. Among these articles, 15 used longitudinal research designs, and 2 used retrospective designs. Study populations varied: 21 studies included samples with a diagnosis of any type of cancer; 4 included only patients with prostate cancer; 3 included only patients with breast cancer, lung cancer, or head and neck cancer; and another 4 studies included patients with cervical cancer, lung and genitourinary cancers, colorectal cancer, or non-Hodgkin lymphoma or Hodgkin disease, respectively.
Comparison of Symptom Incidence
Clinicians often underestimate symptom rates compared with the rates reported by patients.12,14–22 For example, Jarernsiripornkul and colleagues16 found that the medical records of 103 patients within 1 year after treatment documented only 162 (22.6%) of the 716 symptoms patients reported on a body system–based questionnaire. Clinician underreporting of symptom prevalence has also been demonstrated at follow-up 5 years after treatment. Vistad et al18 investigated 147 cervical cancer survivors more than 5 years after pelvic radiotherapy. The 5-year prevalence rates of physician-assessed intestinal, bladder, and vaginal morbidity at grade 3–4 were 15%, 13%, and 23%, respectively, whereas the prevalence of patient-rated symptoms from these organ sites at similar severity levels was much higher: intestinal, 45%; bladder, 23%; and vaginal, 58%.
The magnitude of disagreement between patients and clinicians has not always been quantified by measures of agreement such as κ coefficients. Two studies, also summarized in the previous paragraph, did show low to moderate agreement between patients’ and clinicians’ symptom assessments, with κ coefficients ranging from 0.15 to 0.60.12,18 However, 2 other investigations documented better agreement, with κ coefficients varying from 0.26 to 1.00 (only 1 symptom had a κ <0.40) and medians of 0.71 and 0.85, respectively.23,24 Authors of 1 study attributed the high level of agreement to the study design and the use of a standard measure, the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire23; however, this explanation has been questioned, because another study used the same questionnaire yet found lower κ values across symptoms.12
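For readers less familiar with the statistic, κ corrects the observed proportion of agreement for the agreement expected by chance from each rater’s marginal frequencies. A minimal sketch of the calculation follows; the severity grades are invented for illustration and are not drawn from any of the reviewed studies.

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from the product of each rater's marginal proportions
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical severity grades (0 = none ... 3 = severe) for 10 patients
patient = [0, 1, 2, 3, 1, 2, 0, 3, 2, 1]
clinician = [0, 1, 1, 2, 1, 2, 0, 2, 2, 0]

print(round(cohen_kappa(patient, clinician), 2))  # → 0.45
```

Here the raters agree on 6 of 10 cases (60%), but because 27% agreement would be expected by chance alone, κ is only 0.45, i.e., "moderate" agreement in the conventional interpretation.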
Several reasons might explain the different magnitudes of disagreement measured by κ coefficients. One possibility relates to the type of clinician who interviewed or assessed patients: in the studies with high agreement, nurses interviewed patients, whereas in the studies with low κ coefficients, physicians evaluated patients. Nurses are likely to have more frequent encounters and spend longer periods with patients across healthcare settings; nevertheless, more data are needed to explain the variance between nurses’ and physicians’ symptom assessments and the conditions under which nurses perform better in evaluating patients’ symptoms. Another potential explanation is interrater variability among clinicians.25 There might have been less interrater variability in the studies with high κ values, because only 4 or 5 trained nurses interviewed patients,23,24 whereas in the physician-evaluated studies that yielded low κ coefficients, a larger number of physicians assessed patients. Unfortunately, the exact number of physicians was not always specified in the publications, although 1 study did report 151 patient-physician pairs.12
Comparison of Symptom Severity
Compared with patients, clinicians tend to rate symptoms as less severe.11,26–30 Nekolaichuk and colleagues27 examined 49 patients with advanced cancer and found that average physician ratings were generally lower than those of patients, with mean differences ranging from 10 to 18 mm on a 100-mm visual analog scale. The percentage of cases in which clinicians underestimate symptom severity has also been reported. Research by Watkins-Bruner et al30 involving 24 patients with locally advanced prostate cancer found that 13% of the symptoms that patients rated as a 2 or 3 on the Functional Assessment of Cancer Therapy were assigned ratings of 0 by their clinicians on the Radiation Therapy Oncology Group toxicity rating scales. This trend was further confirmed in a large study involving more than 700 patients, in which physicians assigned lower severity than patients in 5127 (15%) of the doctor/patient pairs of single-symptom assessments.28
Other investigations have shown nonsignificant differences between clinician-observed and patient-reported symptom severity. For instance, a longitudinal study with 212 patients showed that the average mucositis severity reported by patients and clinicians was similar, but statistical analyses were not presented to confirm this observation.31 Another study with 150 patients did not find statistically significant differences between patient and physician administration of the International Prostate Symptom Score questionnaire; however, because of difficulty in understanding the questionnaire, 178 patients were dropped from the study, which might have biased the findings.2 Overall, the evidence generated from these investigations is not strong enough to support similar response profiles between clinician-observed and patient-reported symptom severity.
Correlations between patients’ and clinicians’ symptom severity assessments are generally low to moderate. Correlation coefficients indicate not only the strength of the linear relationship, but also how consistently pairs of raters rank-order symptoms. Reported correlation coefficients have ranged from 0.01 to 0.76.12,27,32 The lowest correlation coefficients (0.01–0.1) were predominantly related to anxiety and depression,27 meaning that ratings of these symptoms were highly inconsistent between patients and clinicians. Although moderate correlation coefficients indicate more consistency among pairs of raters, they do not necessarily imply good agreement: clinicians may consistently rate symptoms lower than patients yet still show acceptable correlation with patients’ assessments. As a result, studies of congruence must be interpreted cautiously to appreciate the full impact that clinician ratings may have on patient care.
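The distinction between correlation and agreement can be made concrete with a small sketch. In the hypothetical data below, the clinician rates every symptom exactly 2 points lower than the patient on a 0- to 10-point scale, so the correlation is perfect even though the two raters never agree exactly.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented 0-10 severity ratings for 8 symptoms
patient = [2, 4, 6, 8, 10, 5, 7, 9]
clinician = [v - 2 for v in patient]  # systematic 2-point underrating

r = pearson_r(patient, clinician)
exact = sum(p == c for p, c in zip(patient, clinician)) / len(patient)
print(round(r, 2))   # → 1.0  (perfect rank-order consistency)
print(exact)         # → 0.0  (no exact agreement at all)
```

A correlation of 1.0 alongside 0% exact agreement is the extreme case of the pattern described above: consistency of ranking coexisting with systematic underestimation.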
It might be assumed that agreement between patients and clinicians increases over time as clinicians become more familiar with symptom presentations across multiple episodes of care. However, 3 longitudinal studies have shown that the level of agreement does not improve over time. Stephens and colleagues28 compared symptom agreement from pretreatment through 12 months after treatment in more than 700 patients treated for lung cancer. The percentages of agreement (defined as the patient and clinician selecting the same severity level of “not at all,” “a little,” “moderately,” or “very much”) were similar across 6 symptom assessment times, and the percentages of underestimation and overestimation likewise remained consistent. Nekolaichuk and associates27 measured 49 patients with advanced cancer on 2 separate occasions within 11 days of admission, approximately 7 days apart, and observed that the accuracy of assessments did not improve over time. More recently, Petersen and colleagues12 corroborated these findings over a 13-week symptom assessment period. The discrepancy in symptom measurement between patients and healthcare providers thus seems to persist over the course of clinical care.
The trend of discordance has also been examined across the symptom severity levels reported by patients. Stephens and colleagues28 found that when the patient recorded a symptom as “not at all,” the doctor agreed 93% of the time, but when the patient recorded “a little,” “moderately,” or “very much,” the level of agreement was only 52%, 44%, and 39%, respectively. Another investigation involving 1109 cancer patients compared the recognition of depression between nurses and patients and reported a similar tendency.33 Agreement between nurses and patients was better when patients reported little or no depressive symptoms, whereas ratings were concordant only 29% and 14% of the time in the mild and moderate/severe ranges, respectively.33 Hence, the higher the severity of symptoms, the greater the discrepancy between patients’ and clinicians’ severity assessments.
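This kind of severity-stratified tabulation can be sketched as follows. The rating pairs are invented purely to illustrate the calculation (chosen to echo the declining pattern above) and are not data from Stephens and colleagues28 or any other reviewed study.

```python
from collections import defaultdict

# Hypothetical (patient_rating, clinician_rating) pairs on a 4-level scale
pairs = (
    [("not at all", "not at all")] * 90 + [("not at all", "a little")] * 10
    + [("a little", "a little")] * 26 + [("a little", "not at all")] * 24
    + [("moderately", "moderately")] * 11 + [("moderately", "a little")] * 14
    + [("very much", "very much")] * 4 + [("very much", "moderately")] * 6
)

# Tally agreements and totals, stratified by the patient's own rating
by_level = defaultdict(lambda: [0, 0])  # level -> [agreements, total]
for pat, clin in pairs:
    by_level[pat][1] += 1
    if pat == clin:
        by_level[pat][0] += 1

for level in ("not at all", "a little", "moderately", "very much"):
    agree, total = by_level[level]
    print(f"{level:>10}: {agree}/{total} = {agree / total:.0%} agreement")
```

Running this prints agreement of 90%, 52%, 44%, and 40% across the four levels, reproducing the qualitative pattern of worsening concordance as patient-reported severity rises.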
Comparison of Symptom Distress
Two studies in particular analyzed differences in symptom distress. In an investigation of 30 patients with advanced cancer during the last week of life, a questionnaire assessing distress for 13 symptoms was administered to both patients and physicians.34 Physicians tended to underrate the distress for all 13 symptoms and failed to agree with patients (κ < 0.4) on 3 of the most distressful symptoms: fatigue, cachexia, and anorexia. A survey of 86 cancer patients receiving palliative care noted that nurses tended to rate symptom distress as slightly less severe than patients did,7 and the correlation coefficients between nurses’ and patients’ assessments were generally low (0.30 to 0.61). These findings demonstrate a concerning level of disagreement between clinicians and patients and an underappreciation of symptom distress by clinicians.
Comparison of Symptom Duration
Patients seem to report symptoms earlier than clinicians do during the care process. In a study describing mouth and throat soreness in 212 patients with non-Hodgkin lymphoma or Hodgkin disease, patients reported the onset, peak, and resolution of soreness 1 to 3 days earlier than clinicians did.31 This observation is consistent with Sonis’35 hypothesis that patients may sense deterioration in the oral mucosa before clinicians recognize visible signs of mucositis. Patients appear to notice multiple symptoms, as well as single symptoms, before clinicians do. Basch and coinvestigators36 conducted a longitudinal study of 163 patients with lung cancer and showed graphically that patients reported multiple symptoms earlier than clinicians did. This lag in recognizing symptoms has significant implications, delaying early detection of adverse events and effective symptom interventions.
Comparison Between Symptoms With and Without an Observable Component
Findings reveal varying degrees of agreement between symptoms with and without an observable component. The poorest agreement is often observed for symptoms without an observable component, such as anxiety and depression,3,4,12,27,32,37–39 fatigue,4,11,40 dyspnea,11,39 or impaired cognitive functioning,32 whereas the best agreement seems to occur for symptoms that can be directly observed, including nausea/vomiting,4,11,12,38,40 diarrhea,11 constipation,12,38,40 or other physical symptoms.32,38,39 Notably, only around 10% of the emotional problems or fatigue that patients classified as such were identified by their physicians3,4; conversely, agreement for nausea/vomiting exceeded 80%.4,11 These inconsistencies are further supported by lower κ or nonsignificant correlation coefficients for emotional symptoms but higher κ or significant correlation coefficients for physical symptoms between patients’ and clinicians’ assessments.37,38 Such discrepancies remind clinicians to be cautious in measuring and interpreting symptoms that cannot be directly observed or assessed without information from patients.
Interestingly, this pattern of poorest agreement for symptoms without an observable component and best agreement for those with one is not so apparent in hospice and palliative care. For instance, Kutner and colleagues7 investigated 86 patients and their nurses and noted little difference between symptoms with and without observable components. These authors argued that healthcare professionals with hospice and palliative care training are more astute in identifying and quantifying psychosocial symptoms in their patients. However, other studies conducted in palliative care settings did not convincingly show better congruence between patients and clinicians on psychosocial symptoms.12,27,38 Differences in the symptom assessment tools used and in clinicians’ palliative care experience could explain these inconsistencies. Future research on patient-clinician congruence in symptom assessment should better characterize the context of palliative care and the specialty focus of healthcare providers.
Comparison of Clinical Outcomes Related to Symptoms
Symptoms reported by patients and clinicians are differentially related to patients’ clinical outcomes. One study investigated 6 common symptoms in patients with lung cancer and compared how these symptoms, recorded by patients and by clinicians, related to clinical outcomes.36 Symptoms reported by patients were more associated with daily health status, such as quality of life, whereas symptoms assessed by clinicians were more predictive of mortality and emergency room admissions. Because this study used similar symptom measurements (items from the National Cancer Institute’s Common Terminology Criteria for Adverse Events) for both patients and clinicians, it suggests that patients’ and clinicians’ perspectives on symptoms yield different determinants of clinical outcomes. More recently, a large study involving 2279 cancer patients found that either patient or clinician symptom scoring contributed individually and positively to overall survival prognostication, and that including both scores provided a statistically significant gain in predictive accuracy for survival compared with clinician scores alone.40 Although this study used different symptom measurement tools, the findings support that patient and clinician symptom reporting may contribute differently to clinical outcomes.
Possible Reasons for Disagreement Between Patients and Clinicians
Both general and more specific reasons are likely to explain the discrepancies between patient-reported and clinician-observed symptoms. General factors include communication and gender. Previous studies have shown that the better the communication between patients and clinicians, the more symptoms patients report.5,6 Communication may influence clinicians’ perception of patients’ symptoms as well; for instance, oncologists were 2 to 5 times more likely to be aware of patients’ depression when they knew more about their patients.4 Gender also plays a role in symptom detection. Werner and colleagues3 observed that physicians detected more symptoms among female patients than among male patients; however, physicians were also more likely to disagree with female patients about the etiology of their psychological symptoms.41 The lack of sufficient data makes it difficult to draw definitive conclusions about how patient characteristics and situational factors interact to account for the differences between patient and clinician symptom assessments.
Several possible reasons exist for clinicians’ underestimation of symptom incidence. One is “selective reporting” by either patients or clinicians. Some patients report only symptoms that they consider “relevant” to their disease and treatment,16,23 or they alter their responses to match what they believe their physicians expect rather than the true state of their condition.42 Clinicians, in turn, sometimes fail to acknowledge and record all clinical information provided by their patients.16 Specifically, physicians may be more likely to deny than affirm a possible connection between symptoms and a drug adverse effect, even for symptoms with strong literature-based evidence of association with certain drugs.43 Physicians also tend to report only the more serious symptoms36 and may overlook problems or symptoms that are not obvious or not explicitly mentioned by patients.12
SEVERITY, DISTRESS, AND DURATION
There are also possible explanations for the disagreement between patients’ and clinicians’ assessments of symptom severity, distress, and duration. The main reason might be the sensitivity of the assessment: clinicians are less sensitive to changes in patients’ symptom severity than are patients themselves.26,44 This reduced sensitivity might reflect different evaluative foundations: clinicians evaluate severity against impressions drawn from wide clinical experience across patients, whereas patients’ estimates of severity are grounded in their own experiences.28,44 Moreover, the demands of busy clinical practices often leave clinicians little time to discuss patients’ symptom severity, distress, and onset.45
SYMPTOMS WITH OR WITHOUT OBSERVABLE COMPONENTS
“Selective reporting” also accounts for discrepancies between symptoms with and without observable components and is especially problematic for psychological symptoms, which are difficult to measure if patients are not forthcoming with their emotions. Generally, clinicians’ perceptions of patients’ symptoms depend on information obtained from patients. There is evidence that patients have a strong bias toward limiting time with healthcare providers for addressing emotional symptoms, and that those who are more anxious or depressed can be less likely to disclose their concerns to clinicians.46 This “selective reporting” can lead clinicians to misjudge how patients are really doing and their psychological state. Additionally, oncologists’ lack of experience in assessing psychological symptoms, together with therapeutic inertia and uncertainty about symptom management, may contribute to the discrepancy.3,47
SYMPTOMS’ RELATIONSHIP WITH CLINICAL OUTCOMES
Differing interpretations of symptom assessments by patients and clinicians may ultimately produce different relationships between patient-reported and clinician-observed symptoms and clinical outcomes. Patients report symptoms based on their daily life experiences and the degree to which symptoms cause problems or suffering in normal function.13 In contrast, clinicians use clinical judgment and tend to focus on symptoms more related to unfavorable clinical outcomes.36 Consequently, patients and clinicians prioritize different symptoms, and the relationships of those symptoms with, and contributions to, clinical outcomes vary as well.
Limitations in Literature Search
Admittedly, challenges existed in identifying and retrieving all relevant research on clinician-patient agreement in symptom reports. Although the search strategies focused on studies using methodologies to evaluate concordance or discordance in symptom ratings, not all clinical studies addressing rater concordance as part of outcome validation use MeSH (medical subject headings) terms indicative of this measurement strategy. As such, it was virtually impossible to examine the extensive number of cancer clinical studies to determine whether PROs and clinician symptom reporting were compared.
Limitations in Reviewed Publications
Several methodological issues have implications for studying patient-clinician agreement in symptom ratings and for the validity of findings. The major limitations of the publications relate to the manner in which data were collected, the selection of questionnaires and measures, and the sampling methods. The table of evidence in this article outlines many of the inherent flaws in study methods.
Confounders during the data collection process were observed across studies. First, the timing of symptom assessments by patients and clinicians varied; symptom reports were not always obtained simultaneously but at different time points or intervals.18,38 Data collected at dissimilar periods can influence the degree of agreement because the frequency and severity of symptoms fluctuate over time. Second, instructions for using symptom measurement tools differed between patient and clinician versions. For example, recall periods differed: clinicians might have been asked to indicate symptom severity “since the last assessment,” while patients were expected to report how they had been feeling “over the past week.”28 Instructions on response tasks did not always specify whether the greatest or the average severity over the time frame should be recorded, so patients and clinicians may not have interpreted measures in the same way.19,30,34,39,48–51 Third, patients sometimes had more opportunities to report their symptom experiences than clinicians did,27,44 for example, reporting symptoms daily while clinicians recorded them weekly.44 These inconsistencies in data reporting across episodes of care limit the interpretation of agreement levels.
Problems in the selection of appropriate tools to evaluate concordance were also noted. Researchers often failed to include the results of psychometric tests to ensure acceptable reliability and validity of the measures.11,14,15,17,19,21,29,34,45,51,52 In some studies, psychometric properties were presented for the patient-reported questionnaires but not for the same questionnaires completed by clinicians; given that clinicians might have interpreted items differently than patients, tests of reliability and validity should also have been conducted on the clinician version of the instrument. Furthermore, the questionnaires used by patients often differed from those used by clinicians.14,17–22,26,28,30–33,37,44,48–52 Different questionnaires have different recall periods and response tasks, which can lead to variations in how symptoms were conceptualized. All of these factors complicated the interpretation of rater concordance and, to some degree, compromised the scientific integrity of the studies and confidence in their results.
Limited sample sizes and issues in sampling methods pose other study limitations. Eight studies had sample sizes ranging from 19 to 37, and 3 investigations had around 50 participants (Table). Smaller samples restrict the inclusion of patients with diverse characteristics, and the lack of large representative samples precludes subset analyses with respect to factors such as age and race, which can have profound effects on symptom reporting and hamper the external validity of results. In many cases, sample size calculations were not described, making it unclear whether a study had enough power to detect statistically significant differences between patients’ and clinicians’ symptom assessments. Moreover, the convenience sampling used in the majority of studies raises concerns about sampling bias.
Implications for Future Studies and Practice
Implications for Future Studies
Because of the importance of symptoms as an end point for cancer treatment evaluation and the limitations in the published literature, future studies comparing symptom measurements between patients and clinicians are warranted. Based on the current literature, directions for additional research fall into 2 categories. The first is to continue to improve the quality of comparison studies, which is essential to determine whether observed concordance or nonconcordance is real or merely an artifact of research design. The second is to explore the potential of integrating patients’ perspectives into clinicians’ symptom evaluation, because symptoms evaluated by patients and clinicians may be associated with different outcomes, such as quality of life and survival. Efforts in these 2 categories will provide valuable guidance for clinical trials that use PROs as primary or secondary end points.
Future studies must address the methodological issues and limitations of existing research to determine whether observed concordance or nonconcordance is genuine. For instance, a specified and consistent time frame or time point for symptom assessments should be set for both patients and clinicians. The same questionnaire, or at least matched versions, for both patients and clinicians is necessary for more meaningful comparisons of symptom assessments. Using symptom measures with acceptable psychometric properties establishes a more credible basis for determining concordance versus nonconcordance between ratings. Controlling interrater variability and using larger sample sizes are also important for future studies. Addressing these methodological limitations has significant implications for capturing the true symptom experience and for measuring the efficacy or effectiveness of treatment strategies.
Further examination of the value of combining patients’ and clinicians’ symptom measurements is necessary. The current literature shows that patient-assessed symptoms appear to be more predictive of overall quality of life, whereas clinician-evaluated symptoms are more related to mortality and emergency room admissions.36 However, this finding is based on only 1 study, involving 163 patients with lung cancer at a single institution; additional studies with patients with various diagnoses at multiple sites are warranted. Another investigation demonstrated the promising result that patient-reported symptoms, along with clinician-observed toxicities, contributed independently to predicting overall cancer survival.40 Although this was a large study with more than 2000 patients, it might be overpowered: very large samples are likely to detect even small effects that may not be clinically meaningful. Overall, only 2 studies36,40 have explored the potential of integrating patients’ perspectives into clinicians’ evaluation; future research is needed to provide more evidence.
Implications for Practice
Because of the discrepancy between patients’ and clinicians’ symptom assessments, involving patients, not only clinicians, in clinical practice is valuable for increasing the accuracy of symptom measurement. This value is further augmented by the finding that PRO data and clinician-observed data play important but distinct roles in predicting outcomes such as quality of life, mortality, emergency room admissions, and cancer survival. Basch and colleagues53 have proposed a promising collaborative approach to integrating the patient’s perspective with the clinician’s: symptoms reported by patients are sent directly to clinicians to inform clinicians’ symptom reporting, and this approach has been applied in a phase 2 clinical trial. Furthermore, because of the variance in the measurement of symptoms with and without observable components, future clinical guidelines might consider weighting symptoms differently: more weight could be placed on PRO data for symptoms without observable components and on clinician-observed data for symptoms with observable components.
The ability to measure symptoms related to cancer and its treatment is an essential foundation for cancer nursing practice. Based on this review, it is clear that clinicians have a propensity to underestimate the incidence, severity, or distress of symptoms experienced by cancer patients. These discrepancies are demonstrated consistently over time and become even more apparent when symptoms are more severe and distressing to patients. In addition, patients report symptom onset earlier than clinicians do. Thus, PROs provide necessary and accurate insight into patients’ own symptom experiences, and their use should be incorporated into routine clinical practice, not just research studies. The different and independent predictive characteristics of patients’ and clinicians’ symptom assessments further underscore the importance of integrating the patient’s own perspective into the clinician’s symptom evaluation. Clearly, more research is needed to address the methodological limitations and weaknesses of the existing literature and to further examine which symptom assessment approaches best capture disease burden and ultimately benefit patients.
1. Cancer Facts & Figures. Atlanta, GA: American Cancer Society; 2008.
2. Cam K, Akman Y, Cicekci B, Senel F, Erol A. Mode of administration of international prostate symptom score in patients with lower urinary tract symptoms: physician vs self. Prostate Cancer Prostatic Dis. 2004; 7 (1): 41–44.
4. Newell S, Sanson-Fisher RW, Girgis A, Bonaventura A. How well do medical oncologists’ perceptions reflect their patients’ reported physical and psychosocial problems? Data from a survey of five oncologists. Cancer. 1998; 83 (8): 1640–1651.
5. Tan AS, Bourgoin A, Gray SW, Armstrong K, Hornik RC. How does patient-clinician information engagement influence self-reported cancer-related problems?: Findings from a longitudinal analysis. Cancer. 2011; 117 (11): 2569–2576.
6. Takeuchi EE, Keding A, Awad N, et al. Impact of patient-reported outcomes in oncology: a longitudinal analysis of patient-physician communication. J Clin Oncol. 2011; 29 (21): 2910–2917.
7. Kutner JS, Bryant LL, Beaty BL, Fairclough DL. Symptom distress and quality-of-life assessment at the end of life: the role of proxy response. J Pain Symptom Manage. 2006; 32 (4): 300–310.
8. Sprangers MA, Aaronson NK. The role of health care providers and significant others in evaluating the quality of life of patients with chronic disease: a review. J Clin Epidemiol. 1992; 45 (7): 743–760.
9. Acquadro C, Berzon R, Dubois D, et al. Incorporating the patient’s perspective into drug development and communication: an ad hoc task force report of the Patient-Reported Outcomes (PRO) Harmonization Group meeting at the Food and Drug Administration, February 16, 2001. Value Health. 2003; 6 (5): 522–531.
10. Guidance for industry: patient-reported outcome measures: use in medical product development to support labeling claims: draft guidance. Health Qual Life Outcomes. 2006; 4: 79.
11. Basch E, Iasonos A, McDonough T, et al. Patient versus clinician symptom reporting using the National Cancer Institute Common Terminology Criteria for Adverse Events: results of a questionnaire-based study. Lancet Oncol. 2006; 7 (11): 903–909.
12. Petersen MA, Larsen H, Pedersen L, Sonne N, Groenvold M. Assessing health-related quality of life in palliative care: comparing patient and physician assessments. Eur J Cancer. 2006; 42 (8): 1159–1166.
13. Lenz ER, Pugh LC, Milligan RA, Gift A, Suppe F. The middle-range theory of unpleasant symptoms: an update. ANS Adv Nurs Sci. 1997; 19 (3): 14–27.
14. Bernhard J, Maibach R, Thurlimann B, Sessa C, Aapro MS; Swiss Group for Clinical Cancer Research. Patients’ estimation of overall treatment burden: why not ask the obvious? J Clin Oncol. 2002; 20 (1): 65–72.
15. Cirillo M, Venturini M, Ciccarelli L, Coati F, Bortolami O, Verlato G. Clinician versus nurse symptom reporting using the National Cancer Institute–Common Terminology Criteria for Adverse Events during chemotherapy: results of a comparison based on patient’s self-reported questionnaire. Ann Oncol. 2009; 20 (12): 1929–1935.
16. Jarernsiripornkul N, Krska J, Capps PA, Richards RM, Lee A. Patient reporting of potential adverse drug reactions: a methodological study. Br J Clin Pharmacol. 2002; 53 (3): 318–325.
17. Litwin MS, Lubeck DP, Henning JM, Carroll PR. Differences in urologist and patient assessments of health related quality of life in men with prostate cancer: results of the CaPSURE database. J Urol. 1998; 159 (6): 1988–1992.
18. Vistad I, Cvancarova M, Fossa SD, Kristensen GB. Postradiotherapy morbidity in long-term survivors after locally advanced cervical cancer: how well do physicians’ assessments agree with those of their patients? Int J Radiat Oncol Biol Phys. 2008; 71 (5): 1335–1342.
19. Erazo Valle A, Wisniewski T, Figueroa Vadillo JI, Burke TA, Martinez Corona R. Incidence of chemotherapy-induced nausea and vomiting in Mexico: healthcare provider predictions versus observed. Curr Med Res Opin. 2006; 22 (12): 2403–2410.
20. Glimelius B, Hoffman K, Olafsdottir M, Pahlman L, Sjoden PO, Wennberg A. Quality of life during cytostatic therapy for advanced symptomatic colorectal carcinoma: a randomized comparison of two regimens. Eur J Cancer Clin Oncol. 1989; 25 (5): 829–835.
21. Grunberg SM, Deuson RR, Mavros P, et al. Incidence of chemotherapy-induced nausea and emesis after modern antiemetics. Cancer. 2004; 100 (10): 2261–2268.
22. Liau CT, Chu NM, Liu HE, Deuson R, Lien J, Chen JS. Incidence of chemotherapy-induced nausea and vomiting in Taiwan: physicians’ and nurses’ estimation vs. patients’ reported outcomes. Support Care Cancer. 2005; 13 (5): 277–286.
23. Groenvold M, Klee MC, Sprangers MA, Aaronson NK. Validation of the EORTC QLQ-C30 quality of life questionnaire through combined qualitative and quantitative assessment of patient-observer agreement. J Clin Epidemiol. 1997; 50 (4): 441–450.
24. Fortner B, Baldwin S, Schwartzberg L, Houts AC. Validation of the Cancer Care Monitor items for physical symptoms and treatment side effects using expert oncology nurse evaluation. J Pain Symptom Manage. 2006; 31 (3): 207–214.
26. Jensen K, Bonde Jensen A, Grau C. The relationship between observer-based toxicity scoring and patient assessed symptom severity after treatment for head and neck cancer. A correlative cross sectional study of the DAHANCA toxicity scoring system and the EORTC Quality of Life Questionnaires. Radiother Oncol. 2006; 78 (3): 298–305.
27. Nekolaichuk CL, Bruera E, Spachynski K, MacEachern T, Hanson J, Maguire TO. A comparison of patient and proxy symptom assessments in advanced cancer patients. Palliat Med. 1999; 13 (4): 311–323.
28. Stephens RJ, Hopwood P, Girling DJ, Machin D. Randomized trials with quality of life endpoints: are doctors’ ratings of patients’ physical symptoms interchangeable with patients’ self-ratings? Qual Life Res. 1997; 6 (3): 225–236.
29. Stout R, Barber P, Burt P, et al. Clinical and quality of life outcomes in the first United Kingdom randomized trial of endobronchial brachytherapy (intraluminal radiotherapy) vs. external beam radiotherapy in the palliative treatment of inoperable non-small cell lung cancer. Radiother Oncol. 2000; 56 (3): 323–327.
30. Watkins-Bruner D, Scott C, Lawton C, et al. RTOG’s first quality of life study, RTOG 90-20: a phase II trial of external beam radiation with etanidazole for locally advanced prostate cancer. Int J Radiat Oncol Biol Phys. 1995; 33 (4): 901–906.
31. Stiff PJ, Emmanouilides C, Bensinger WI, et al. Palifermin reduces patient-reported mouth and throat soreness and improves patient functioning in the hematopoietic stem-cell transplantation setting. J Clin Oncol. 2006; 24 (33): 5186–5193.
32. Luoma ML, Hakamies-Blomqvist L, Sjostrom J, et al. Physical performance, toxicity, and quality of life as assessed by the physician and the patient. Acta Oncol. 2002; 41 (1): 44–49.
33. McDonald MV, Passik SD, Dugan W, Rosenfeld B, Theobald DE, Edgerton S. Nurses’ recognition of depression in their patients with cancer. Oncol Nurs Forum. 1999; 26 (3): 593–599.
34. Oi-Ling K, Man-Wah DT, Kam-Hung DN. Symptom distress as rated by advanced cancer patients, caregivers and physicians in the last week of life. Palliat Med. 2005; 19 (3): 228–233.
35. Sonis ST. The pathobiology of mucositis. Nat Rev Cancer. 2004; 4 (4): 277–284.
36. Basch E, Jia X, Heller G, et al. Adverse symptom event reporting by patients vs clinicians: relationships with clinical outcomes. J Natl Cancer Inst. 2009; 101 (23): 1624–1632.
37. D’Antonio LL, Long SA, Zimmerman GJ, Peterman AH, Petti GH, Chonkich GD. Relationship between quality of life and depression in patients with head and neck cancer. Laryngoscope. 1998; 108 (6): 806–811.
38. Horton R. Differences in assessment of symptoms and quality of life between patients with advanced cancer and their specialist palliative care nurses in a home care setting. Palliat Med. 2002; 16 (6): 488–494.
39. Nekolaichuk CL, Maguire TO, Suarez-Almazor M, Rogers WT, Bruera E. Assessing the reliability of patient, nurse, and family caregiver symptom ratings in hospitalized advanced cancer patients. J Clin Oncol. 1999; 17 (11): 3621–3630.
40. Quinten C, Maringwa J, Gotay CC, et al. Patient self-reports of symptoms and clinician ratings as predictors of overall cancer survival. J Natl Cancer Inst. 2011; 103 (24): 1851–1858.
41. Greer J, Halgin R. Predictors of physician-patient agreement on symptom etiology in primary care. Psychosom Med. 2006; 68 (2): 277–282.
42. Galer BS, Schwartz L, Turner JA. Do patient and physician expectations predict response to pain-relieving procedures? Clin J Pain. 1997; 13 (4): 348–351.
43. Golomb BA, McGraw JJ, Evans MA, Dimsdale JE. Physician response to patient reports of adverse drug effects: implications for patient-targeted adverse effect surveillance. Drug Saf. 2007; 30 (8): 669–675.
44. Sarna L, Swann S, Langer C, et al. Clinically meaningful differences in patient-reported outcomes with amifostine in combination with chemoradiation for locally advanced non–small-cell lung cancer: an analysis of RTOG 9801. Int J Radiat Oncol Biol Phys. 2008; 72 (5): 1378–1384.
45. Mulders M, Vingerhoets A, Breed W. The impact of cancer and chemotherapy: perceptual similarities and differences between cancer patients, nurses and physicians. Eur J Oncol Nurs. 2008; 12 (2): 97–102.
46. Heaven CM, Maguire P. Disclosure of concerns by hospice patients and their identification by nurses. Palliat Med. 1997; 11 (4): 283–290.
47. Guthrie B, Inkster M, Fahey T. Tackling therapeutic inertia: role of treatment data in quality indicators. Br Med J. 2007; 335 (7619): 542–544.
48. Downie FP, Mar Fan HG, Houede-Tchen N, Yi Q, Tannock IF. Cognitive function, fatigue, and menopausal symptoms in breast cancer patients receiving adjuvant chemotherapy: evaluation with patient interview after formal assessment. Psychooncology. 2006; 15 (10): 921–930.
49. Fromme EK, Eilers KM, Mori M, Hsieh YC, Beer TM. How accurate is clinician reporting of chemotherapy adverse effects? A comparison with patient-reported symptoms from the Quality-of-Life Questionnaire C30. J Clin Oncol. 2004; 22 (17): 3485–3490.
50. Jensen K, Lambertsen K, Torkov P, Dahl M, Jensen AB, Grau C. Patient assessed symptoms are poor predictors of objective findings. Results from a cross sectional study in patients treated with radiotherapy for pharyngeal cancer. Acta Oncol. 2007; 46 (8): 1159–1168.
51. Kwok HC, Morton RP, Chaplin JM, McIvor NP, Sillars HA. Quality of life after parotid and temporal bone surgery for cancer. Laryngoscope. 2002; 112 (5): 820–833.
52. Wei JT, Montie JE. Comparison of patients’ and physicians’ ratings of urinary incontinence following radical prostatectomy. Semin Urol Oncol. 2000; 18 (1): 76–80.
53. Basch E, Somerfield MR, Partridge A, Schnipper L, Lyman GH. Commentary: should cost and comparative value of treatments be considered in clinical practice guidelines? J Oncol Pract. 2011; 7 (6): 398–401.