Advances in clinical management of kidney transplant recipients have led to marked improvements in short-term allograft survival in recent decades, mediated in part through a reduction in the incidence of acute rejection (AR) (1). Factors associated with lower AR rates include the advent of more potent maintenance immunosuppressive agents, better immunologic matching of donors and recipients, and the use of induction therapies in the majority of contemporary renal transplant recipients. Although at least one AR episode occurred in 50% to 60% of renal allograft recipients in the 1980s, the incidence of AR within the first posttransplantation year has declined to less than 15% in some recent studies (1).
Despite the reduced frequency, AR episodes are believed to have significant detrimental impacts on long-term clinical outcomes including graft and patient survival. AR remains a risk factor for the development of chronic allograft nephropathy, which, in turn, is responsible for most death-censored graft loss after the first posttransplantation year (2). Notably, however, long-term therapy with calcineurin inhibitor agents, a cornerstone of many contemporary maintenance regimens for preventing AR, has also been associated with chronic allograft injury and, potentially, graft loss (3–5).
Improved understanding of the pathobiology of AR has resulted in distinct treatment protocols that can successfully reverse many rejection episodes. However, the risk of graft loss after AR and treatment approaches including the need for cell-depleting antibody (Ab) therapy seem to be mediated by the clinical and histologic severity of AR, and possibly by the duration of time from transplant to the AR event (1, 2). The development of humoral rejection, as manifested by the presence of de novo donor-specific Ab or increased donor-specific Ab in the setting of prior desensitization, may further impair allograft survival (6). Finally, preexisting graft dysfunction may potentially exaggerate the effect of AR on allograft functional decline.
To advance understanding of the clinical implications of AR in contemporary practice, we examined national registry data for a recent cohort of Medicare-insured kidney transplant recipients in the United States. Using these data we sought to quantify the relative risk of long-term graft loss associated with AR events characterized by timing of the AR after transplantation, risk period after AR reporting, and the requirement for Ab treatment versus other therapy as an index of AR severity. We also performed a subanalysis among survivors with graft function to the first transplant anniversary to characterize the combined impacts of both AR and renal function at 12 months.
RESULTS

Demographic and Clinical Characteristics
The total sample of Medicare-insured kidney transplant recipients in 2000 to 2007 (n=48,179) included 29,829 standard criteria donor (SCD), 11,848 living donor (LD), and 6502 expanded criteria donor (ECD) graft recipients. Median follow-up was 2.8 years. The distribution of recipient, donor, and transplant characteristics according to donor type (SCD, LD, or ECD) is shown in Table 1. Recipients of LD allografts were younger, more commonly white, and less frequently presented with diabetes, sensitization, or delayed graft function (DGF) compared with deceased donor recipients. ECD recipients tended to be the oldest group and to have the highest frequency of diabetes.
Frequency of AR per Graft-Years at Risk
The incidence densities of AR per 100 graft-years at risk after transplantation by donor type are given in Table 2 (the rate for the full sample is shown in Figure 1, SDC, http://links.lww.com/TP/A675). AR occurred more frequently early after transplantation (e.g., within 6 months or 1 year) compared with subsequent observation intervals. Non–Ab-treated AR was consistently more than twice as common as Ab-treated AR by risk period and donor type (Table 2). For example, among SCD recipients, the frequencies of Ab-treated AR per 100 graft-years at risk were 3.76, 3.02, 1.99, and 1.62 over the first 6, 12, 24, and 36 months after transplantation, respectively. The frequencies of non–Ab-treated AR per 100 graft-years at risk among SCD recipients were 9.93, 8.43, 5.71, and 4.70 over the first 6, 12, 24, and 36 months after transplantation, respectively. The frequency of Ab-treated AR among LD and ECD recipients within risk periods was similar to the frequency among SCD recipients. Non–Ab-treated AR tended to be slightly less frequent among LD compared with SCD recipients but tended to be more common among ECD compared with SCD recipients. (The counts of observed AR events and graft time at risk used to compute rates of AR per 100 graft-years are provided as Table 1, SDC, http://links.lww.com/TP/A675).
Adjusted Relative Risks of All-Cause Graft Loss According to the Category and Timing of AR
The adjusted relative risks of graft loss associated with AR according to the timing of AR (0–6, 7–12, 13–24, and 25–36 months after transplant) and risk period after the AR report (<90 or ≥90 days) from time-varying Cox regression models among SCD, LD, and ECD transplant recipients are shown in Table 3. Among SCD recipients, all categories of AR events were associated with significantly increased risk of all-cause graft loss compared with the absence of AR, and the development of Ab-treated AR was associated with a greater risk of graft loss than non–Ab-treated AR. The adjusted relative risk for graft loss with both Ab-treated AR and non–Ab-treated AR generally increased when AR occurred later after SCD transplantation.
Regardless of the time between transplantation and the development of AR, the risk of SCD graft loss was highest within the first 3 months after AR reporting (Table 3). For example, the relative risk of SCD graft loss within the first 89 days after AR reporting rose from 2.75 (95% confidence interval [CI], 1.78–4.28) for Ab-treated AR events reported within 0 to 6 months to 4.90 (95% CI, 2.51–9.54) for Ab-treated AR events reported within 25 to 36 months after transplantation. The relative risk of SCD graft loss 90 days or later after AR reporting rose from 1.35 (95% CI, 1.16–1.58) for Ab-treated AR events reported within 0 to 6 months to 2.60 (95% CI, 1.89–3.58) for Ab-treated AR events reported within 25 to 36 months after transplantation. The relative risk of SCD graft loss within the first 89 days after AR reporting rose from 1.95 (95% CI, 1.40–2.72) for non–Ab-treated AR events reported within 0 to 6 months to 2.83 (95% CI, 1.71–4.71) for non–Ab-treated AR events reported within 25 to 36 months after transplantation. Finally, the relative risk of SCD graft loss 90 days or later after AR reporting rose from 1.15 (95% CI, 1.03–1.28) for non–Ab-treated AR events reported within 0 to 6 months to 2.03 (95% CI, 1.63–2.52) for non–Ab-treated AR events reported within 25 to 36 months after transplantation.
Consistent patterns were observed when recipients of all transplants were considered together, including adjustment for donor type and other clinical factors listed in Table 1 (Fig. 1). Among all transplant recipients, the point estimates for the relative risk of all-cause graft loss were higher for Ab-treated AR compared with non–Ab-treated AR in both the <90-day and ≥90-day risk windows after AR reporting across the study time periods after transplant. The higher risk with Ab-treated AR compared with non–Ab-treated AR reached statistical significance in both the <90-day (P=0.02) and ≥90-day (P<0.0001) risk windows after first-year AR reports, the ≥90-day risk window after second-year AR reports (P=0.009), and the <90-day risk window after third-year AR reports (P=0.02). Similar trends were seen among recipients of LD and ECD transplants when considered separately, although with less precision and statistical significance because of the smaller sample sizes and fewer AR and graft loss events within periods of interest. No significant impact on graft loss risk was detected for Ab-treated AR events occurring within 6 months after transplantation among LD recipients or for Ab-treated AR events occurring beyond 12 months after transplantation among ECD recipients.
Association of Donor and Recipient Characteristics With All-Cause Graft Loss
The associations of baseline covariates with the risk of all-cause graft loss in these multivariate regression models were generally similar among SCD, LD, and ECD transplant recipients (Table 3). However, more significant covariate effects were observed for SCD transplant recipients, likely because of the larger sample of SCD transplants. Aside from AR, the strongest predictor of all-cause graft loss was DGF, which was associated with 1.7 to 3.2 times the adjusted relative risk of graft loss compared with no DGF across donor types. Because computation of estimated glomerular filtration rate (eGFR) required survival to the first transplant anniversary and was not reported for all patients, the primary analyses did not include 1-year eGFR as a predictor of all-cause graft loss.
Joint Impact of AR Within the First Year and eGFR at the First Transplant Anniversary on Subsequent All-Cause Graft Loss
Associations of AR within the first year after transplantation and eGFR at the first transplant anniversary with the risk of subsequent all-cause graft loss were examined among a subsample of 38,780 patients who survived with graft function to 12 months after transplantation and had available information for the computation of eGFR. After adjustment for eGFR and other baseline recipient, donor, and transplant factors considered in this study, Ab-treated AR and non–Ab-treated AR within the first year were associated with approximately 58% (adjusted hazard ratio [aHR], 1.58; 95% CI, 1.43–1.75) and 43% (aHR, 1.43; 95% CI, 1.34–1.53) increases, respectively, in the relative risk of subsequent all-cause graft loss compared with the absence of AR (see Table 2, SDC, http://links.lww.com/TP/A675). After adjustment for AR and other baseline factors, there was a graded increase in the risk of subsequent all-cause graft loss with lower eGFR level at 12 months. Compared with an eGFR of greater than 60 mL/min/1.73 m2, the adjusted relative risk of graft loss increased by 8% with an eGFR of 45 to 59 mL/min/1.73 m2 (aHR, 1.08; 95% CI, 1.01–1.14), 50% with an eGFR of 30 to 44 mL/min/1.73 m2 (aHR, 1.50; 95% CI, 1.41–1.60), and 216% with an eGFR of less than 30 mL/min/1.73 m2 (aHR, 3.16; 95% CI, 2.96–3.37).
The joint impact of first-year AR and eGFR on expected graft survival 5 years after the first transplant anniversary is shown in Figure 2. Among transplant recipients with first-year eGFR of greater than 60 mL/min/1.73 m2, all-cause graft survival 5 years later was 72.0% (95% CI, 70.7%–73.7%) among those without AR in months 0 to 12, 62.4% (95% CI, 59.9%–64.9%) in those who had experienced non–Ab-treated AR in months 0 to 12, and 59.9% (95% CI, 56.2%–63.1%) in those with Ab-treated AR in months 0 to 12. Although a pattern of reduced graft survival among patients with non–Ab-treated AR and Ab-treated AR compared with no AR was preserved across eGFR levels, the effect of low eGFR was stronger. Among transplant recipients with first-year eGFR of less than 30 mL/min/1.73 m2, all-cause graft survival 5 years later declined to 35.5% (95% CI, 33.5%–37.7%) among those without AR in months 0 to 12, 22.7% (95% CI, 20.2%–25.5%) in those who had experienced non–Ab-treated AR in months 0 to 12, and was only 19.9% (95% CI, 16.5%–23.3%) in those with Ab-treated AR in months 0 to 12.
DISCUSSION

Despite improvements in transplant practice, AR remains a significant source of morbidity, graft loss, and costs after renal transplantation. In this study of a recent cohort of U.S. kidney transplant recipients, we found that AR affects approximately 11% of SCD recipients and nearly 15% of ECD allograft recipients in the first year after transplantation. AR events recognized later after transplantation have more serious graft loss implications, especially within the first 90 days after AR reporting. When considered in concert with first-year allograft function, the clinical impact of AR persists within eGFR levels but is smaller than the impact of allograft dysfunction alone.
This analysis extends previous studies by providing a detailed understanding of the impact of the timing and severity of AR on graft outcomes. Clearly, more severe rejection, as demonstrated by the requirement for Ab treatment, increases the risk of graft loss following AR, regardless of the timing of AR in relation to transplant. AR early after transplantation does seem to impact long-term, risk-adjusted graft survival. However, we also observed that the impact of AR on graft survival increases with longer durations of time between transplantation and the diagnosis of AR, a finding that resonates with prior studies. A study of 687 transplant recipients at one U.K. center in 1984 to 1996 defined “early” AR as occurring within 90 days of transplantation and “later” AR as any AR event from 91 days out to a maximum of 14 years (7). Five-year graft survival rates were 87% in patients without AR, 63% in those with “early” AR, and 45% in those with “late” AR. Similarly, a cohort study of 654 deceased-donor transplant recipients in the Netherlands (1983–1997) that also dichotomized AR as before or after 90 days post-transplantation found that “late” rejection reduced 10-year graft survival (censored for death or nonimmunologic causes of graft loss) to 45%, compared with 94% survival in patients without AR and 86% survival in those with “early” AR (8). In a recent small series of kidney transplant recipients with AR identified in 2000 to 2007 at two centers in Mexico, patients with AR occurring later than 90 days after transplantation showed a larger decline in eGFR and increased markers of senescence on biopsy compared with those with earlier AR (9). Our current study expands the evaluation of the deleterious consequences of AR by categorizing AR into additional post-transplant evaluation periods, as well as partitioning the window after AR reporting, among a large national sample of U.S. transplant recipients.
The pathophysiology of the increased graft loss risk associated with late AR is likely mediated by multiple factors. This observation may reflect an increase in the development of a humoral component of AR in patients with AR later after transplantation (10). Human leukocyte antigen class II mismatches have been associated with early AR, whereas class I mismatches have been associated with late AR, suggesting roles of direct and indirect allorecognition pathways in the pathophysiology of early AR and late AR, respectively; indirect allorecognition may in turn be associated with a higher burden of chronic allograft nephropathy (8). Alternatively, because clinical monitoring is less intense at later periods after transplantation, there may be a longer delay between the onset of AR and its recognition, which, in turn, may lead to more severe, less treatable graft injury. There may also be differences in the ability of the graft to heal after late versus early AR (9). Biomarkers and other noninvasive methods including genomics, proteomics, and metabolomics are being sought to identify patients at risk for the development of AR late after transplant who may benefit from prompt interventions (11).
Changes in contemporary immunosuppression practices have included prevalent use of expensive and potentially harmful induction agents. Although induction therapies may be associated with increased risks of infection and malignancy (e.g., posttransplant lymphoproliferative disorder) compared with no induction therapy (12), a recent review of United Network for Organ Sharing (UNOS) data found a significant benefit of induction therapy on long-term graft survival (13). Our analysis supports the belief that care processes that reduce the incidence of AR, particularly late after transplantation, have the potential to reduce graft loss in the long term. However, the benefits from reduced AR must be balanced against the risks of other treatment complications.
Limitations of our study include the lack of histologic definitions of the AR events within the Organ Procurement and Transplantation Network (OPTN) registry. The OPTN does not track which AR episodes represent cellular or humoral rejection. However, because both humoral rejection and cellular rejection are known to contribute to subsequent allograft loss, the effects demonstrated here are likely to hold in population-based analyses. Furthermore, the need for Ab treatment is an excellent marker of the severity of rejection, as demonstrated by its relative impact on graft loss at all time points. Because exact dates of AR are not captured in OPTN reports, categorization of AR timing was defined by periods rather than specific dates, which reduces the granularity of our effect estimates. Our sample was limited to patients with Medicare, and findings may not be generalizable to patients with private health insurance. However, because Medicare is the principal payer for renal transplantation in the United States, it is likely that these observations are representative of the care for a dominant fraction of patients given the size and diversity of the study sample. Finally, as a sample of U.S. transplant recipients in 2000 to 2007, these patients were likely to have been managed primarily with calcineurin inhibitor or mammalian target of rapamycin–based immunosuppression, and the outcome implications of AR among patients taking other maintenance immunosuppression regimens may be different than those reported in this analysis.
In conclusion, AR remains an important source of morbidity after renal transplantation. The clinical consequences of AR seem to increase with later AR events after transplantation, suggesting the possibility that improved recognition of AR, particularly through noninvasive testing, may be beneficial to more promptly identify treatable AR events. Strategies to reduce the incidence of AR have the potential to extend graft life, if this does not come at the cost of worsened graft function or other complications from excessive immunosuppression. Given the strong impact of eGFR on long-term graft survival, focus on prevention of AR at the expense of allograft function may result in a net reduction of graft and patient survival.
MATERIALS AND METHODS
Data Sources and Study Samples
Study data were drawn from records of the United States Renal Data System, which integrate OPTN records with Medicare billing claims (14). This study was conducted in accordance with the Health Insurance Portability and Accountability Act of 1996; all standards regarding the security and privacy of an individual’s health information were maintained.
The primary study sample comprised recipients of first, single-organ kidney transplants in the United States in 2000 to 2007 with Medicare as the primary payer at the time of transplantation, as previously defined (15). A subsample of patients who survived with graft function to the first transplant anniversary and had available information for the computation of eGFR at 12 months was examined to estimate the combined impacts of AR and eGFR on subsequent all-cause graft loss.
Outcome Variable Definitions
The primary clinical outcome of interest was time to all-cause graft loss defined by death or renal allograft failure. Mortality was defined as death from any cause. Graft failure was defined as the earliest reported date of return to maintenance dialysis or “preemptive” retransplantation. Patients were censored from survival analyses at the date of their last expected follow-up or end of study (December 2007).
Predictor Variable Definitions
The primary predictor of interest was AR as defined by OPTN reporting. The OPTN surveys centers for information on clinical events among individual transplant recipients at 6 months, 1 year, and then annually. Data on AR are identified according to the period covered by a specific reporting form, but dates of AR within the period are not collected. We defined AR based on center reporting on the OPTN survey that an AR event occurred. Immunosuppression records were used to subclassify AR as Ab-treated AR or non–Ab-treated AR as a measure of AR severity. Ab-treated AR was defined by administration of polyclonal Abs such as antithymocyte globulin or antilymphocyte globulin or monoclonal Abs such as OKT3, alemtuzumab, or rituximab for the indicated purpose of treating AR. Patients with any Ab-treated AR event in a period were classified as having Ab-treated AR in that period, as the first level of classification. Patients with other indications of AR in a period who did not meet criteria for Ab-treated AR were classified as having non–Ab-treated AR in the given period.
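This two-level classification can be sketched in code. The function and record fields below are illustrative assumptions (the actual source is the OPTN immunosuppression reporting forms), but the hierarchy mirrors the rule described above: any Ab-treated event in a period takes precedence over other AR indications.

```python
# Agents listed in the text as defining Ab-treated AR.
AB_AGENTS = {"antithymocyte globulin", "antilymphocyte globulin",
             "OKT3", "alemtuzumab", "rituximab"}

def classify_ar(period_records):
    """Classify one reporting period: 'Ab-treated AR' takes precedence,
    then 'non-Ab-treated AR', else 'no AR'. Record fields are hypothetical."""
    had_ar = any(r["ar_reported"] for r in period_records)
    ab_treated = any(r["ar_reported"] and r.get("agent") in AB_AGENTS
                     for r in period_records)
    if ab_treated:
        return "Ab-treated AR"
    if had_ar:
        return "non-Ab-treated AR"
    return "no AR"
```

A period containing both an Ab-treated and a non-Ab-treated report is classified as Ab-treated, matching the "first level of classification" described above.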
Renal function at 1 year after transplantation was defined by eGFR as computed with the abbreviated Modification of Diet in Renal Disease (MDRD) equation: eGFR (mL/min per 1.73 m2) = 186 × (serum creatinine [mg/dL])^−1.154 × (age)^−0.203 × (1.212, if African American) × (0.742, if female) (16). Serum creatinine values were drawn from the OPTN 1-year recipient follow-up reporting form. The abbreviated MDRD equation has superior performance for prediction of measured GFR among renal transplant patients when compared with the Nankivell and Cockcroft-Gault formulas (17). Renal function was categorized as: >60, 45–59, 30–44, and <30 mL/min/1.73 m2.
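As a worked example, the abbreviated MDRD computation above can be expressed as a short function (the function name and example inputs are ours, not drawn from the study data):

```python
def egfr_mdrd(scr_mg_dl, age_years, is_african_american=False, is_female=False):
    """Abbreviated MDRD estimate of GFR in mL/min per 1.73 m^2."""
    egfr = 186.0 * (scr_mg_dl ** -1.154) * (age_years ** -0.203)
    if is_african_american:
        egfr *= 1.212
    if is_female:
        egfr *= 0.742
    return egfr

# Example: serum creatinine 1.4 mg/dL in a 50-year-old white male gives
# an eGFR of roughly 57, falling in the 45-59 mL/min/1.73 m^2 category.
example = egfr_mdrd(1.4, 50)
```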
Data management and analysis were performed with SAS for Windows software, version 9.2 (SAS Institute Inc., Cary, NC). Continuous data were summarized as mean (SD), and categorical data were summarized as proportions. The primary description of the frequency of AR was computed as incidence density, defined as the ratio of affected patients per graft-years at risk. Graft-years at risk were computed as the time from transplantation to the earliest of graft loss, loss to follow-up, or end of a period of evaluation (considered here as 6 months and years 1, 2, and 3 after transplantation).
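The incidence-density definition above can be illustrated with a minimal sketch (toy numbers for illustration only; the registry computation operates on the full claims and OPTN data):

```python
def graft_years(follow_up_years, period_end_years):
    """Time at risk within an evaluation period, censored at graft loss,
    loss to follow-up, or the end of the period."""
    return min(follow_up_years, period_end_years)

def incidence_density_per_100(affected_patients, total_graft_years):
    """Affected patients per 100 graft-years at risk."""
    return 100.0 * affected_patients / total_graft_years

# Toy example: three patients followed 0.4, 1.5, and 2.0 years, evaluated
# within a 1-year period; one patient had an AR event in that period.
times = [graft_years(t, 1.0) for t in (0.4, 1.5, 2.0)]  # 0.4 + 1.0 + 1.0
rate = incidence_density_per_100(1, sum(times))  # 1 event per 2.4 graft-years
```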
Associations of AR with subsequent all-cause graft loss (aHR) were estimated with time-varying, multivariate Cox regression. Time-varying models allow unbiased estimation of the relative risks of an outcome associated with posttransplantation events, as previously illustrated in the transplant literature (18–20). The primary Cox regression models included AR events ascertained from OPTN reports covering months 0 to 6, 7 to 12, 13 to 24, and 25 to 36 after transplantation as predictors of interest. These models estimate the relative risk of all-cause graft loss associated with time-dependent, episodic exposures by attributing the exposure date to the end of the OPTN reporting period. The risk of subsequent graft loss associated with AR events identified in these intervals was further partitioned into windows within the first 89 days after the end of the reporting period and 90 days or beyond. Models were constructed separately for recipients of SCD, LD, and ECD allografts.
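The time-varying exposure coding can be illustrated with a small helper that expands one recipient's follow-up into counting-process (start, stop) rows of the kind a time-varying Cox model consumes, attributing each AR exposure to the end of its OPTN reporting period as described above. The function and data layout are an illustrative sketch, not the study's actual SAS code:

```python
def expand_record(follow_up_months, ar_period_ends):
    """Split one recipient's follow-up into (start, stop, ar_exposed) rows.

    ar_period_ends: ends (in months) of OPTN reporting periods in which an AR
    event was reported; exposure begins at the period end, per the model setup.
    """
    cut_points = sorted(t for t in ar_period_ends if t < follow_up_months)
    rows, start, exposed = [], 0.0, 0
    for cut in cut_points:
        rows.append((start, cut, exposed))
        start, exposed = cut, 1  # exposed from the reporting-period end onward
    rows.append((start, follow_up_months, exposed))
    return rows

# A recipient followed 30 months with AR reported on the 12-month form
# contributes an unexposed interval (0, 12) and an exposed interval (12, 30).
```

A further split of each exposed interval at 89 days after the period end would reproduce the <90-day versus ≥90-day risk windows used in the analysis.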
The combined impact of AR within the first year and eGFR at the first transplant anniversary on subsequent all-cause graft loss was examined among a subsample of patients who survived with graft function to at least 12 months after transplantation and had available information for the computation of eGFR on the first-year OPTN report. The adjusted relative risks of all-cause graft loss associated with AR and eGFR levels were estimated by multivariate Cox regression using the first transplant anniversary as the origin time. In this framework, AR within months 0 to 12 after transplantation was a baseline characteristic rather than a time-varying factor. Survival expectations to 5 years after the first transplant anniversary among 1-year survivors according to AR status and eGFR level were predicted from multivariate Cox regression models, including adjustment for baseline recipient, donor, and transplant factors. Values of adjustment covariates were set to average sample values for prediction.
Adjustment covariates included recipient, donor, and transplant factors in the UNOS Kidney Allocation Review Committee survival model (21). All demographic and clinical characteristics known at the time of transplantation were included in exact accordance with the UNOS models, with the exception of shared organ status, which was not present in the United States Renal Data System database available for public use. Recipient and donor races are omitted from the UNOS models but were included here (see Appendix, SDC, http://links.lww.com/TP/A675). No statistical variable selection was performed, such that the content of all regression models was determined before analysis.
REFERENCES

1. Nankivell BJ, Alexander SI. Rejection of the kidney allograft. N Engl J Med 2010; 363: 1451.
2. Wu O, Levy AR, Briggs A, et al. Acute rejection and chronic nephropathy: a systematic review of the literature. Transplantation 2009; 87: 1330.
3. Stegall MD, Park WD, Larson TS, et al. The histology of solitary renal allografts at 1 and 5 years after transplantation. Am J Transplant 2011; 11: 698.
4. Nashan B. Is acute rejection the key predictor for long-term outcomes after renal transplantation when comparing calcineurin inhibitors? Transplant Rev (Orlando) 2009; 23: 47.
5. Gaston RS. Chronic calcineurin inhibitor nephrotoxicity: reflections on an evolving paradigm. Clin J Am Soc Nephrol 2009; 4: 2029.
6. Cooper JE, Gralla J, Cagle L, et al. Inferior kidney allograft outcomes in patients with de novo donor-specific antibodies are due to acute rejection. 2011; 91: 1103.
7. Joseph JT, Kingsmore DB, Junor BJ, et al. The impact of late acute rejection after cadaveric kidney transplantation. Clin Transplant 2001; 15: 221.
8. Sijpkens YW, Doxiadis II, Mallat MJ, et al. Early versus late acute rejection episodes in renal transplantation. Transplantation 2003; 75: 204.
9. Arvizu-Hernandez M, Morales-Buenrostro LE, Vilatoba-Chapa M, et al. Time of occurrence of kidney acute antibody-mediated allograft rejection/acute cellular rejection and cell senescence: implications for function outcome. Transplant Proc 2010; 42: 2486.
10. Sun Q, Liu ZH, Ji S, et al. Late and early C4d-positive acute rejection: different clinico-histopathological subentities in renal transplantation. Kidney Int 2006; 70: 377.
11. Mannon RB. Immune monitoring and biomarkers to predict chronic allograft dysfunction. Kidney Int 2010; 78 (suppl 119): S59.
12. Padiyar A, Augustine JJ, Hricik DE. Induction antibody therapy in kidney transplantation. Am J Kidney Dis 2009; 54: 935.
13. Cai J, Terasaki PI. Induction immunosuppression improves long-term graft and patient outcome in organ transplantation: an analysis of United Network for Organ Sharing registry data. Transplantation 2010; 90: 1511.
15. Whiting JF, Woodward RS, Zavala EY, et al. Economic cost of expanded criteria donors in cadaveric renal transplantation: analysis of Medicare. 2000; 70: 755.
16. Levey AS, Greene T, Kusek JW, et al. A simplified equation to predict glomerular filtration rate from serum creatinine. J Am Soc Nephrol 2000; 11: 155A.
17. Poggio ED, Wang X, Weinstein DM, et al. Assessing glomerular filtration rate by estimation equations in kidney transplant recipients. Am J Transplant 2006; 6: 100.
18. Glanton CW, Kao TC, Cruess D, et al. Impact of renal transplantation on survival in end-stage renal disease patients with elevated body mass index. Kidney Int 2003; 63: 647.
19. Lentine KL, Xiao H, Brennan DC, et al. The impact of kidney transplantation on heart failure risk varies with candidate body mass index. Am Heart J 2009; 158: 972.
20. Salvalaggio PR, Dzebisashvili N, Pinsky B, et al. Incremental value of the pancreas allograft to the survival of simultaneous pancreas-kidney transplant recipients. Diabetes Care 2009; 32: 600.
21. Wolfe RA, McCullough KP, Schaubel DE, et al. Calculating life years from transplant (LYFT): methods for kidney and kidney-pancreas candidates. Am J Transplant 2008; 8 (4 pt 2): 997.