There are over 700 000 prevalent cases of end-stage renal disease (ESRD) in the United States and approximately 100 000 new incident cases yearly.1 An overwhelming body of research has demonstrated that kidney transplantation is superior to dialysis because of the longer and higher-quality life it imparts,1-3 but most ESRD patients still remain on dialysis. Many patients remaining on dialysis and patients presenting for transplant are uninformed of their option for transplant,4 have suboptimal levels of transplant knowledge, or believe myths about transplant because they lack accurate information about its benefits and risks.5,6 These barriers may be more severe for racial and ethnic minorities and likely play an important role in the continued disparities in receipt of living donor kidney transplant.7
For these reasons, improving transplant knowledge and informed decision making for all kidney patients is critical. The Centers for Medicare and Medicaid Services requires that dialysis patients be informed of their option for transplant within the first 45 days of starting treatment.8 Several transplant education programs have been developed over the last decade to help inform dialysis patients seeking transplant about the risks and benefits of transplantation.9-16 Though these programs have made significant progress, access to transplant remains lower than desired, and further improvement of transplant education interventions is needed.7 It is critically important that interventions use reliable and valid transplant knowledge instruments to accurately characterize their effects. To date, few measures of kidney transplant knowledge have been developed and published.17 Therefore, this study aimed to develop and validate a new transplant knowledge scale.
MATERIALS AND METHODS
Data Sets
This study used data from 2 randomized controlled trials conducted with dialysis patients and patients presenting for transplant evaluation. The study protocols were published elsewhere.9,18 In both studies, knowledge of kidney transplant was assessed before and after administration of the interventions. Only the baseline assessment from each study was used so that the transplant knowledge levels observed in these data would not be biased by the interventions. The first study (study 1) was conducted with English-speaking, black, white, and Hispanic adults (18+) who contacted the UCLA Kidney Transplant Program to begin evaluation for kidney transplant between May 2014 and March 2017. Study 1 contributed 733 patients to the analysis sample. The second study (study 2) was conducted with English-speaking, black and white adult (18+) dialysis patients in dialysis centers throughout Missouri between 2014 and 2016. Study 2 contributed 561 patients to the analysis sample, resulting in a total of 1294 patients for the analyses. The UCLA Institutional Review Board approved the protocols used to collect the data in both studies (study 1, 14-000382; study 2, 14-000802), and both trials were registered with ClinicalTrials.gov (study 1, NCT02181114; study 2, NCT02268682).
Transplant Knowledge Items
Seventeen transplant knowledge items were administered to participants by a research coordinator over the telephone in each study (Table 1). On average, the measure took 5 to 10 minutes to complete. Ten of the items had “true/false/don’t know” response options and 7 had multiple choice options, each of which also included a “don’t know” option. Item development involved a multidisciplinary, clinical and academic team of nephrologists, dialysis staff, and psychologists with expertise in kidney disease and patient-reported health measure development. After a systematic review of the literature and formative qualitative19 and quantitative20 research with previous transplant recipients, each item was written by A.D.W. Items were written to specifically address questions important to kidney patients in the formative research (eg, the likelihood of negative impact on living kidney donors, covered by items I1 and I17; Table 1). Then, each item was reviewed for relevance and appropriateness by the expert team. In addition, each item was reviewed for relevance and understandability by a racially diverse panel of kidney patients, including kidney transplant recipients and dialysis patients. Each item asks about a benefit, risk, or general fact about kidney transplantation. For this article, each item’s response was recoded as 0, “don’t know”; 1, “incorrect”; 2, “correct.” Items with missing responses were left missing and not assigned any other value. This coding orders responses to each item from the lowest transplant knowledge ability (does not know the answer) to the highest ability (correct answer).
TABLE 1.: Starting set of kidney transplant knowledge items and distribution of responses
Other Study Measures
In addition to the transplant knowledge items, we collected participants’ demographic characteristics, including race/ethnicity, sex, age, level of education, type of health insurance, and whether the patient was on dialysis. Previous access to transplant education was assessed, including whether the patient had read brochures, watched videos, browsed the internet, or talked to their doctor/medical staff about transplant. Patients were asked whether they had ever received each type of transplant education material (“yes/no”) and, if they responded “yes,” how many hours they had spent on each. For each type, we dichotomized responses as having spent less than 1 hour versus 1 hour or longer, excluding patients who said they had not previously received that type of education. We also assessed health literacy with 2 items. The first asks patients how often they require help reading hospital materials. Responses were dichotomized into “None of the Time” versus “A little, some, most, or a lot of the time.” The second item asks patients how confident they are filling out medical forms. Responses were dichotomized into “Extremely confident” versus “Quite a bit, somewhat, a little bit, not at all confident.”21
Transplant Knowledge Scale Evaluation
Item Response Theory (IRT) was used to assess the transplant knowledge items. Use of IRT offers several benefits, including the ability to estimate the reliability of the scale at different locations along the underlying knowledge continuum. We implemented a graded response model21,22 that yields an estimate of transplant knowledge (mean of 0 and standard deviation of 1), with higher scores indicating higher knowledge. The model also estimates item difficulty thresholds (b), defined as the location on the knowledge continuum where there is a 50% probability of answering below versus above the threshold. Because the transplant knowledge items in this article each have 3 response categories, each item has 2 b parameter estimates: one where the probability is 50% for selecting “don’t know” versus “incorrect” or “correct” (b1) and one where the probability is 50% for selecting “don’t know” or “incorrect” versus “correct” (b2). For example, an item with b1 of −1 and b2 of 1 has a difficulty such that respondents with transplant knowledge ability 1 standard deviation below the mean have a 50% probability of answering “don’t know” versus giving an incorrect or correct response, and those with a transplant knowledge of 1 standard deviation above the mean have a 50% probability of answering “don’t know” or giving an incorrect response versus a correct response.
A second item parameter, the discrimination parameter (a), indicates how well items differentiate between patients of lower and higher levels of knowledge (theta), with higher values indicating better discrimination. The discrimination parameter can also be presented in the more familiar factor loading metric. Item characteristic curves with the latent trait estimate (theta) on the x axis and the probability of response for each response category on the y axis are used to evaluate the performance of each response option. Each response category should have the highest likelihood of being selected somewhere along the underlying knowledge distribution and the location on the continuum where response categories are most likely to be chosen should support a monotonic relationship with theta.
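For readers less familiar with this model, a sketch of the standard graded response model formulation underlying the a and b parameters described above is shown below. This is the conventional (Samejima-type) logistic parameterization; the exact scaling constant used by the estimation software may differ slightly.

```latex
% Graded response model for item i with ordered categories
% k = 0 ("don't know"), 1 (incorrect), 2 (correct);
% b_{i1}, b_{i2} are the thresholds and a_i is the discrimination.
\begin{align*}
P^{*}_{ik}(\theta) &= \frac{1}{1 + \exp\left[-a_i\left(\theta - b_{ik}\right)\right]},
  \qquad P^{*}_{i0}(\theta) = 1, \; P^{*}_{i3}(\theta) = 0,\\
P_{ik}(\theta) &= P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta).
\end{align*}
```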
Model parameters were estimated using full-information maximum likelihood estimation. After estimating item parameters using the graded response model, items with low discrimination (low factor loadings) were dropped. Model fit was assessed with Akaike’s information criterion and the Bayesian information criterion. For both statistics, lower values indicate better model fit. The scale reliability is estimated by the model directly.
A key assumption of unidimensional IRT models is that all the items represent a single underlying construct. As an initial test of unidimensionality, we conducted an exploratory factor analysis including all items and examined the ratio of the first eigenvalue to the second. Ratios of >4 indicate unidimensionality.22 In addition, unidimensionality was assessed with a confirmatory factor analysis (CFA) model wherein each item loaded onto a single factor. The CFA was fit in Mplus version 8, using weighted least squares with mean and variance adjustment estimation for categorical items.23 We took good model fit as evidence of unidimensionality. Good model fit was defined as a root mean square error of approximation value less than 0.06 and a comparative fit index value greater than 0.90.24 A second key assumption of IRT is monotonicity, that is, that the probability of responding with higher response categories increases with higher levels of the underlying trait. In this context, we tested the hypothesis that patients with progressively higher underlying levels of transplant knowledge would have increasing probabilities of selecting incorrect responses versus responding “don’t know,” and then of selecting correct responses versus incorrect responses. This hypothesis was tested for each item by regressing the sum of all items on each item separately and examining Duncan multiple range tests, a post hoc means comparison test applied after 1-way analysis of variance that helps determine the rank ordering of means. For these tests, we took as evidence of monotonicity significant differences of means ordered such that patients who gave “don’t know” responses had the lowest means and those who gave “correct” responses had the highest means. The monotonicity assumption would be supported if patients responding “don’t know,” an incorrect response, and a correct response had increasing means on the summed scale, respectively. In addition, monotonicity is evidenced by the item characteristic curves.
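As an illustration only, the 2 assumption checks described above can be approximated in a few lines of code. The sketch below uses Pearson correlations and a simple 1-way analysis of variance with an ordering check in place of the Mplus CFA and SAS Duncan multiple range tests actually used in this study; the data frame and column names are hypothetical.

```python
# Illustrative analogue of the unidimensionality and monotonicity checks
# described above (not the exact Mplus/SAS procedures used in the study).
# Assumes `items` is a pandas DataFrame with one column per knowledge item,
# coded 0 = "don't know", 1 = incorrect, 2 = correct.
import numpy as np
import pandas as pd
from scipy import stats

def eigenvalue_ratio(items: pd.DataFrame) -> float:
    """Ratio of the first to second eigenvalue of the item correlation matrix
    (values > 4 taken as supporting unidimensionality)."""
    eigvals = np.sort(np.linalg.eigvalsh(items.corr().values))[::-1]
    return eigvals[0] / eigvals[1]

def monotonicity_check(items: pd.DataFrame, item: str):
    """Compare mean summed scores across the target item's response categories;
    means should increase from 'don't know' (0) to incorrect (1) to correct (2)."""
    total = items.sum(axis=1)
    groups = [total[items[item] == k] for k in (0, 1, 2)]
    f_stat, p_value = stats.f_oneway(*groups)  # overall test of mean differences
    means = [g.mean() for g in groups]
    return means, means[0] < means[1] < means[2], p_value
```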
After item reduction and fitting of the final graded model, a scale score was created for each respondent. We used the expected a posteriori approach to estimate transplant knowledge scores.25 Because expected a posteriori scores are expressed on the z score metric, we linearly transformed them to a T score metric, where T = (z × 10) + 50.
The T scores do not have fixed upper or lower limits, but higher scores indicate higher transplant knowledge. IRT analyses were conducted in flexMIRT version 3.5.1.25
Statistical Analysis
All statistical tests used a P value less than 0.05 to indicate statistical significance and were conducted in SAS version 9.4.26 Participant characteristics were summarized with proportions, frequencies, means and ranges. We described each item by calculating the proportions and frequencies of each response option. After the transplant knowledge scale was created, we calculated the mean, standard deviation, and percentile scores. To test construct validity, we examined mean differences in the transplant knowledge scale T scores between groups of patients who had previously talked to doctors/medical staff, read brochures, browsed the internet, and watched videos about transplant for less than 1 hour versus 1 hour or longer with independent samples t tests. Cohen d effect sizes for these tests were calculated as the difference in mean T score between groups divided by the pooled standard deviation for the scale. Cohen cutoffs for this effect size estimate were used to determine its magnitude (small, 0.20 ≤ d < 0.50; medium, 0.50 ≤ d < 0.80; large, d ≥ 0.80).27
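For clarity, a minimal sketch of the effect size calculation described above is given below. It assumes KART T scores for the 2 education exposure groups are available as NumPy arrays (the variable names are illustrative) and uses the 2-group pooled standard deviation as the denominator.

```python
# Minimal sketch of the Cohen d calculation described above: the difference
# in mean T scores between two groups divided by the pooled standard deviation.
import numpy as np

def cohens_d(group_lt1h: np.ndarray, group_ge1h: np.ndarray) -> float:
    """Cohen d for two independent groups of KART T scores."""
    n1, n2 = len(group_lt1h), len(group_ge1h)
    pooled_var = ((n1 - 1) * group_lt1h.var(ddof=1) +
                  (n2 - 1) * group_ge1h.var(ddof=1)) / (n1 + n2 - 2)
    return (group_ge1h.mean() - group_lt1h.mean()) / np.sqrt(pooled_var)

# Interpretation per Cohen: 0.20 <= d < 0.50 small, 0.50 <= d < 0.80 medium, d >= 0.80 large.
```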
RESULTS
Participants
Participants were most commonly black (45%) and male (57%); 42% had a high school diploma or less education, 68% had Medicare and 57% had Medicaid insurance, and 82% were on dialysis (Table 2). The mean age was 53 years. Most patients had previously talked to doctors/medical staff about transplant (86%) and read brochures about transplant (68%). Several characteristics differed between patients from the 2 contributing studies, including race/ethnicity, sex, education level, type of health insurance, and number of hours of transplant education received. Notably, the level of health literacy did not differ between these cohorts.
TABLE 2.: Patient characteristics (n = 1,294)
Transplant Knowledge Item Descriptions
The percentage of correct responses for items ranged between 18% (I15) and 78% (I3 and I7) (Table 1). Intuitively, items with “true/false/don’t know” response options were answered correctly more frequently than those with multiple choice response options. Each item was missing less than 0.5% of the responses.
IRT Modeling
The first and second eigenvalues from the exploratory factor analysis had a ratio of 4.5 (5.8/1.3), suggesting unidimensionality. The 1-factor CFA model fit reasonably well with root mean square error of approximation value of 0.06 and comparative fit index of 0.90. Though on the borderline of fit index cutoffs, these results suggest that the items are unidimensional and reflect a single, underlying factor. In addition, Duncan multiple range tests provided evidence that each item had monotonic responses in the expected pattern of “don’t know,” incorrect, and correct as mean summed scores increased.
Having evidenced unidimensionality and monotonicity, we proceeded to fit an IRT graded response model. The b1 difficulty parameters (“don’t know” vs other responses) represent the low end of the theta range (underlying transplant knowledge ability), and the b2 difficulty parameters (“correct” vs other responses) cover higher theta scores (Table 3). For example, for I13, patients with a theta value of −1.55 (approximately 1.5 standard deviations below the mean transplant knowledge level) had a 50% probability of answering “don’t know” versus “incorrect” or “correct”; those with a theta value of 1.18 (over 1 standard deviation above the mean transplant knowledge level) had a 50% probability of answering the item correctly versus responding incorrectly or that they did not know the answer. The item characteristic curve shown in Figure 1 depicts this trend for I13 as an example, though curves were generated and inspected for all items. The easiest items, as determined by b1 and b2, were I2, I3, I7, and I10. The most difficult items were I13 and I15.
TABLE 3.: IRT transplant knowledge item parameters
FIGURE 1.: Item characteristic curve for I13.
Figure 1 also serves as an example of an item with well-distributed response options, because each response (“don’t know,” “incorrect,” “correct”) covers a unique area under the curve of transplant knowledge level. For example, for I13, patients with theta values between −3 and −1.55 (lowest transplant knowledge ability) have a higher probability of responding “don’t know” than of giving an incorrect response. In turn, patients with theta values of −1.56 to 1.00 have a higher probability of giving an incorrect response than of giving a correct response or responding that they do not know the answer. Finally, patients with theta values of 1.01 to 3 have a higher probability of answering correctly than of giving an incorrect response or responding that they do not know the answer. Most of the multiple-choice items showed this pattern, though most of the true/false/don’t know items did not.
Item discriminations ranged between 0.55 and 1.21. The 2 items with the lowest discrimination were I2 and I4, and each of these had a factor loading of <0.40. For this reason, we omitted I2 and I4 from the scale. After removing these 2 items, model fit improved: Akaike’s information criterion decreased from 39 221.95 to 34 467.35, and Bayesian information criterion decreased from 39 485.39 to 34 699.79.
The test information function for the 15-item scale (after removing I2 and I4) is shown in Figure 2. Information was highest at theta of −1.2 (approximately 1 standard deviation below the mean), where the information value was 4.77, entailing a reliability of 0.80. Information was lowest at the highest theta values, ranging between theta of 2.0 and 2.8, entailing reliabilities of 0.52 to 0.65 within this range. The marginal reliability of the 15-item scale was 0.75, indicating acceptable reliability.
FIGURE 2.: Test information curve for the Knowledge Assessment of Renal Transplantation (KART).
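The correspondence between the information values and the reliabilities reported above follows the standard IRT relationship between test information and conditional reliability on a standard-normal theta metric, shown below as a worked example.

```latex
% Conditional reliability implied by test information at a given theta,
% with a worked example at the scale's peak information value:
\[
  r(\theta) = 1 - \frac{1}{I(\theta)}, \qquad
  r(-1.2) = 1 - \frac{1}{4.77} \approx 0.8
\]
```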
We created the Knowledge Assessment of Renal Transplantation (KART) from these 15 items. On the T score metric (mean = 50, SD = 10), the lowest observed score was 10.9 and the highest was 75.5. Using the information in Tables 1 and 4, the KART scale can be administered and scored. Table 1 shows all the items and possible responses, indicating whether each is included in the 15-item KART scale. Table 4 then shows conversions from summed scores to T scores. After administering the 15 KART items, responses are recoded as 0, “don’t know”; 1, incorrect response; 2, correct response. These recoded responses are summed to obtain a raw score ranging from 0 to 30. Finally, Table 4 is used to match the raw score with the corresponding T score.
TABLE 4.: Summed to T score conversion for the Knowledge Assessment of Renal Transplantation (KART)
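To make the scoring steps above concrete, a short scoring helper is sketched below. The recoding rules and the 0 to 30 raw-score range come from the text, whereas the SUMMED_TO_T lookup is a placeholder that would need to be populated from Table 4; the function and variable names are hypothetical.

```python
# Illustrative scoring helper for the 15-item KART, following the steps above.
from typing import Dict, List

SUMMED_TO_T: Dict[int, float] = {}  # populate from Table 4: {raw summed score: T score}

def score_kart(responses: List[str]) -> float:
    """Recode the 15 item responses, sum them, and convert to a T score."""
    recode = {"don't know": 0, "incorrect": 1, "correct": 2}
    raw_score = sum(recode[r] for r in responses)  # raw score ranges from 0 to 30
    return SUMMED_TO_T[raw_score]                  # Table 4 lookup
```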
Construct Validity
Table 5 shows differences in the KART T scores between patients who had previously talked to doctors/medical staff, read brochures, browsed the internet, and watched videos about transplant for less than 1 hour versus 1 hour or longer. All differences were significant at P less than 0.001 and effect sizes were of small to medium magnitude, ranging from d of 0.44 to 0.64.
TABLE 5.: Differences in mean Knowledge Assessment of Renal Transplantation (KART) T scores between patients receiving ≥1 h and <1 h of various kidney transplant education approaches
DISCUSSION
Kidney patients present to community nephrologists’ offices, dialysis centers, and transplant centers with varying levels of transplant knowledge. Reliable and valid transplant knowledge assessment is necessary to scientifically study efforts to improve transplant knowledge among these patients. Reliable and valid knowledge outcome measures can help providers conduct brief knowledge assessments and tailor education and discussion accordingly. They can also help investigators assess the efficacy of individual educational trials and allow comparisons of transplant knowledge across trials. After a comprehensive development process and psychometric evaluation of the KART, a new, general kidney transplant knowledge scale covering both living and deceased donation, we found evidence supporting the KART’s reliability and construct validity for use with diverse patients on dialysis and those seeking kidney transplantation.
The KART may be immediately helpful in research trials assessing the efficacy of transplant education programs. In addition, other research designs, such as population-based surveys or cohort studies examining the average level of transplant knowledge in a targeted patient population may also benefit from use of this measure.28 Current transplant education programs that have not been rigorously tested for improving transplant knowledge could incorporate the KART for this purpose.29
The KART is brief at only 15 items. Past research has shown that patients who present to the transplant center to begin evaluation often have little knowledge about transplantation.28 Screening transplant knowledge with the KART to identify patients most in need of educational support can help direct staff resources efficiently.30 The summed score to T score conversion in Table 4 can be used to obtain KART scores easily when the scale is administered in clinical transplant education or quality improvement settings. The IRT-based T score metric is often superior to summed scores because IRT scores incorporate information about the items’ difficulty and discriminatory ability, provide a truly linear scale with equal distances between values (eg, the difference between values of 1 and 2 is the same as the difference between values of 2 and 3), and allow a group’s score to be evaluated in terms of standard deviations from the mean.
In addition, because the KART is based on IRT parameters, it could be administered as a computer-adaptive test. This would allow fewer KART items to be administered while retaining the scale’s current measurement properties, or allow different subsets of KART items to be administered and mapped back to a common metric. This flexibility could be leveraged to support efficient, flexible transplant knowledge assessments in a variety of settings.
The KART evidenced the highest reliability for patients with lower levels of kidney transplant knowledge. The patients most likely to benefit from kidney transplant education interventions are those with low transplant knowledge, including patients whose kidneys have not yet failed or who have just started dialysis. Therefore, the KART is most reliable for the patients with whom it may be most critical to assess kidney transplant knowledge. We also note that the reliability of the KART, both overall and at various theta values, is lower than previously published cutoffs for individual use, that is, 0.90 or greater.31 Despite KART reliability values falling below this cutoff, we believe it may have value for clinical use with individual patients in some cases where its content is relevant to the clinical application. However, we do recommend caution in interpreting its results in these cases. On the other hand, reliability almost always exceeded standards for group comparisons (>0.70), indicating an important role for the KART not only in research but also in dialysis-based or transplant center–based quality improvement projects examining the impact of patient education materials about kidney transplant.
One unique aspect of the KART is its use of “don’t know” responses in the scoring algorithm. Like other studies, we found evidence that individuals who select the “don’t know” response have the lowest level of knowledge.32,33 However, there is no consensus on whether a “don’t know” option is appropriate for tests of knowledge, and the tradeoffs associated with including this option must be considered. Benefits of including a “don’t know” option include increased reliability and increased score accuracy.34 In addition, scales with “don’t know” options included in scoring may be more sensitive to knowledge improvements associated with educational interventions, especially for individuals with very low knowledge at baseline.35 On the other hand, inclusion of “don’t know” responses may reduce construct validity by introducing variance in responses unrelated to the level of transplant knowledge itself, because individuals who are less willing to guess may choose “don’t know” rather than one of the substantive response options.34 Future studies should further explore the optimal scoring approach for the KART.
This study has several limitations. First, these data included only English-speaking patients. Given the large number of Spanish-speaking kidney patients in the United States,1 this measure’s psychometric properties must be explored and tested among this group specifically. Next steps include differential item functioning analyses with these patient populations, as well as with other key patient subgroups, such as race/ethnicity, age, and sex groups. In addition, the items included in the KART were intended for patients receiving dialysis or seeking transplantation, and their measurement properties were tested only among a limited sample of kidney patients in the United States meeting these criteria. Additional patient populations, including patients in CKD stages 3 and 4 who have not yet started dialysis and kidney patients outside of the United States, may also require assessment of kidney transplant knowledge. The measurement properties of the KART should also be examined in these patient populations. Finally, the sample of patients used here participated in 2 randomized controlled trials and therefore may not represent the national ESRD population in terms of clinical characteristics. Next steps would include administration of this instrument in large probability samples.
Future updates to the KART should also include more difficult items that better test transplant knowledge among patients who are experienced in seeking kidney transplant but who could still learn important information about its process and outcomes. These may include more questions on the risks of transplant as research emerges to expand that knowledge base. Although the KART provides a brief assessment of general transplant knowledge, in cases where a targeted focus on specific elements of kidney transplantation (eg, living donation) is needed, the subscales of the Rotterdam Renal Replacement Therapy Knowledge Test may be a better option, as that test has also demonstrated good measurement properties.17 Finally, this study used secondary data from previous studies not suited for some reliability and validity analyses, including test-retest reliability and construct validity tests against other knowledge scales such as the Rotterdam Renal Replacement Therapy Knowledge Test. Future prospective studies should seek to conduct these analyses.
In conclusion, the 15-item KART can be used in both research and clinical settings to determine the kidney transplant knowledge of ESRD patients and to tailor educational interventions and discussions accordingly. Because it is brief, the KART can be used without imposing significant burden on patients or clinicians. Using this measure in studies and in clinic with such patients will mark an improvement over measures without demonstrated psychometric properties.
ACKNOWLEDGMENTS
The authors would like to thank Drs. Peter Bentler and Steve Reise for their advice on psychometric modeling.
REFERENCES
1. United States Renal Data System. 2017 USRDS annual data report: epidemiology of kidney disease in the United States. Bethesda, MD: National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases; 2017. DOI: https://doi.org/10.1053/j.ajkd.2018.01.002
2. Ogutmen B, Yildirim A, Sever MS, et al. Health-related quality of life after kidney transplantation in comparison intermittent hemodialysis, peritoneal dialysis, and normal controls. Transplant Proc. 2006;38:419–421.
3. von der Lippe N, Waldum B, Brekke FB, et al. From dialysis to transplantation: a 5-year longitudinal study on self-reported quality of life. BMC Nephrol. 2014;15:191.
4. Kucirka LM, Grams ME, Balhara KS, et al. Disparities in provision of transplant information affect access to kidney transplantation. Am J Transplant. 2012;12:351–357.
5. Salter ML, Gupta N, King E, et al. Health-related and psychosocial concerns about transplantation among patients initiating dialysis. Clin J Am Soc Nephrol. 2014;9:1940–1948.
6. Salter ML, Kumar K, Law AH, et al. Perceptions about hemodialysis and transplantation among African American adults with end-stage renal disease: inferences from focus groups. BMC Nephrol. 2015;16:49.
7. Purnell TS, Luo X, Cooper LA, et al. Association of race and ethnicity with live donor kidney transplantation in the United States from 1995 to 2014. JAMA. 2018;319:49–61.
8. Centers for Medicare & Medicaid Services. Medicare and Medicaid Programs; Conditions for Coverage for End-Stage Renal Disease Facilities; Final Rule. Department of Health and Human Services. 2008;73. Baltimore, MD:Federal Register.
9. Waterman AD, McSorley AM, Peipert JD, et al. Explore Transplant at Home: a randomized control trial of an educational intervention to increase transplant knowledge for Black and White socioeconomically disadvantaged dialysis patients. BMC Nephrol. 2015;16:150.
10. Waterman A, Hyland S, Goalby C, et al. Improving transplant education in the dialysis setting: the “explore transplant” initiative. Dial Transplant. 2010;39:236–241.
11. Rodrigue JR, Paek MJ, Egbuna O, et al. Making house calls increases living donor inquiries and evaluations for blacks on the kidney transplant waiting list. Transplantation. 2014;98:979–986.
12. Rodrigue JR, Cornell DL, Kaplan B, et al. A randomized trial of a home-based educational approach to increase live donor kidney transplantation: effects in blacks and whites. Am J Kidney Dis. 2008;51:663–670.
13. Strigo TS, Ephraim PL, Pounds I, et al. The TALKS study to improve communication, logistical, and financial barriers to live donor kidney transplantation in African Americans: protocol of a randomized clinical trial. BMC Nephrol. 2015;16:160.
14. Boulware LE, Hill-Briggs F, Kraus ES, et al. Effectiveness of educational and social worker interventions to activate patients’ discussion and pursuit of preemptive living donor kidney transplantation: a randomized controlled trial. Am J Kidney Dis. 2013;61:476–486.
15. Arriola KR, Powell CL, Thompson NJ, et al. Living donor transplant education for African American patients with end-stage renal disease. Prog Transplant. 2014;24:362–370.
16. Patzer RE, Basu M, Larsen CP, et al. iChoose Kidney: a clinical decision aid for kidney transplantation versus dialysis treatment. Transplantation. 2016;100:630–639.
17. Ismail SY, Timmerman L, Timman R, et al. A psychometric analysis of the Rotterdam Renal Replacement Knowledge-Test (R3K-T) using item response theory. Transpl Int. 2013;26:1164–1172.
18. Waterman AD, Robbins ML, Paiva AL, et al. Your path to transplant: a randomized controlled trial of a tailored computer education intervention to increase living donor kidney transplant. BMC Nephrol. 2014;15:166.
19. Waterman AD, Stanley SL, Covelli T, et al. Living donation decision making: recipients’ concerns and educational needs. Prog Transplant. 2006;16:17–23.
20. Waterman AD, Barrett AC, Stanley SL. Optimal transplant education for recipients to increase pursuit of living donation. Prog Transplant. 2008;18:55–62.
21. Chew LD, Griffin JM, Partin MR, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–566.
22. Slocum-Gori SL, Zumbo BD. Assessing the unidimensionality of psychological scales: using multiple criteria from factor analysis. Soc Indic Res. 2011;102:443–461.
23. Muthén LK, Muthén BO. Mplus User’s Guide. 2017. Los Angeles, CA.
24. Tabachnick BG, Fidell LS. Using Multivariate Statistics. 2001. Boston, MA:Allyn and Bacon.
25. Houts CR, Cai L. flexMIRT user’s manual version 3.5: Flexible Multilevel Multidimensional Item Analysis and Test Scoring. 2016. Chapel Hill, NC.
26. SAS Institute Inc. Base SAS(R) 9.4 Procedures Guide: Statistical Procedures. 2013 2nd ed. Cary, NC:SAS Institute, Inc.
27. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 1988. New York:Academic Press.
28. Waterman AD, Peipert JD, Hyland SS, et al. Modifiable patient characteristics and racial disparities in evaluation completion and living donor transplant. Clin J Am Soc Nephrol. 2013;8:995–1002.
29. Waterman AD, Robbins ML, Peipert JD. Educating prospective kidney transplant recipients and living donors about living donation: practical and theoretical recommendations for increasing living donation rates. Curr Transplant Rep. 2016;3:1–9.
30. Waterman AD, Peipert JD, Goalby CJ, et al. Assessing transplant education practices in dialysis centers: comparing educator reported and medicare data. Clin J Am Soc Nephrol. 2015;10:1617–1625.
31. Nunnally JC. Psychometric theory. 1978 2nd ed. New York:McGraw-Hill.
32. Leigh JH Jr, Martin CR Jr. “Don’t know” item nonresponse in a telephone survey: effects of question form and respondent characteristics. J Market Res. 1987;24:418–424.
33. Maris E. Psychometric latent response models. Psychometrika. 1995;60:523–547.
34. Ravesloot CJ, Van der Schaaf MF, Muijtjens AM, et al. The don’t know option in progress testing. Adv Health Sci Educ Theory Pract. 2015;20:1325–1338.
35. Cherry KE, Brigman S, Hawley KS, et al. The knowledge of memory aging questionnaire: effects of adding a “don’t know” response option. Educ Gerontol. 2003;29:427–446.