Waiting time on dialysis as the strongest modifiable risk factor for renal transplant outcomes: A Paired Donor Kidney Analysis1

Meier-Kriesche, Herwig-Ulf2,3; Kaplan, Bruce2

Clinical Transplantation

Background.  Waiting time on dialysis has been shown to be associated with worse outcomes after living and cadaveric transplantation. To validate and quantify end-stage renal disease (ESRD) time as an independent risk factor for kidney transplantation, we compared the outcomes of paired donor kidneys, one transplanted into a patient who had had ESRD for more than 2 years and the other into a patient who had had ESRD for less than 6 months.

Methods.  We analyzed data from the U.S. Renal Data System for 1988 through 1998 by Kaplan-Meier estimates and Cox proportional hazards models to quantify the effect of ESRD time on paired cadaveric kidneys and on all cadaveric kidneys compared with living-donor kidneys.

Results.  Five- and 10-year unadjusted graft survival rates were significantly worse in paired kidney recipients who had undergone more than 24 months of dialysis (58% and 29%, respectively) than in paired kidney recipients who had undergone less than 6 months of dialysis (78% and 63%, respectively; P<0.001 for each). Ten-year overall adjusted graft survival for cadaveric transplants was 69% for preemptive transplants versus 39% for transplants after 24 months on dialysis. For living transplants, 10-year overall adjusted graft survival was 75% for preemptive transplants versus 49% for transplants after 24 months on dialysis.

Conclusions.  ESRD time is arguably the strongest independent modifiable risk factor for renal transplant outcomes. Part of the advantage of living-donor over cadaveric-donor transplantation may be explained by waiting time. This effect is dominant enough that a cadaveric renal transplant recipient with an ESRD time of less than 6 months has graft survival equivalent to that of living-donor transplant recipients who wait on dialysis for more than 2 years.

2 Division of Nephrology, Hypertension and Transplantation, University of Florida, Gainesville, FL.

3 Address correspondence to: Herwig-Ulf Meier-Kriesche, M.D., Department of Internal Medicine, Division of Nephrology, Hypertension and Transplantation, 1600 SW Archer Road, Box 100224, Gainesville, FL 32610–0224. E-mail: meierhu@medicine.ufl.edu.

1 The data reported in this study have been supplied by the U.S. Renal Data System and the U.S. Scientific Renal Transplant Registry. The interpretation and reporting of these data are the responsibility of the authors and in no way represent an official policy or interpretation of the U.S. government.

Received 12 June 2002.

Accepted 9 July 2002.

Kidney transplantation is considered the treatment modality of choice for the majority of patients with end-stage renal disease (ESRD). Preemptive transplantation has been advocated over transplantation after a period of dialysis. Initially, this position was motivated by the observation that preemptive renal transplant recipients were doing significantly better than patients who had undergone longer periods of maintenance dialysis (1,2). These studies, however, could not exclude a potential selection bias of lower-risk patients obtaining preemptive transplants and, therefore, could not directly implicate dialysis as a causal factor in the worse graft survival of transplants performed after maintenance dialysis. Evidence that time on dialysis in itself conferred a higher risk for graft loss after transplantation came initially from a single-center study by Cosio et al., who showed that increased time on dialysis before transplantation was associated with decreased patient and graft survival (3). The argument that time on dialysis itself is an independent risk factor for graft loss was strengthened by a subsequent retrospective study based on U.S. Renal Data System (USRDS) data, which showed a clear dose-dependent detrimental effect of dialysis time on transplant outcomes, not only for patient and graft survival but, somewhat surprisingly, also for death-censored graft survival in both cadaveric and living transplantation (4). In addition, this study found that the dose-dependent detrimental effect of dialysis time was proportional across different primary disease groups, arguing against the hypothesis that the risk of increased ESRD time was related only to cumulative disease burden. Shortly thereafter, Mange et al. confirmed the better outcomes of preemptive living-donor transplants compared with living-donor transplants in patients who had been on dialysis for longer periods of time (5).

All previous studies looked at the relative impact of ESRD time on subsequent renal transplant outcomes, but they did not quantify this risk factor. Moreover, ESRD time could not be quantified as a risk factor unless it was proven to be independent of potential donor-related confounding factors. It is conceivable that part of the negative effect of ESRD time is related to poorer-quality kidney grafts being allocated to patients who have been on the waiting list for longer times.

For this reason, we decided to first investigate whether ESRD time is a risk factor for outcomes after kidney transplantation independent of donor factors and, if so, to subsequently quantify the absolute impact of ESRD time in cadaveric and living transplantation. Identifying ESRD time as a donor-independent risk factor would be of great significance because ESRD time would have to be considered a modifiable risk factor for kidney transplantation.

To ascertain that ESRD time before kidney transplantation is a significant risk factor for graft survival independent of donor factors, we analyzed 2,405 kidney pairs harvested from the same donor, with one kidney transplanted into a recipient with a short ESRD time and the other into a recipient with a long ESRD time (6). We also assessed overall 5- and 10-year graft survival rates by length of pretransplant dialysis in living versus cadaveric transplants in an attempt to quantify the relative impact of ESRD time versus living donation in determining long-term outcomes after transplantation.

MATERIALS AND METHODS

We retrospectively analyzed data available from the USRDS for renal transplantations performed between 1988 and 1998 in the United States. In the database, we identified all cadaveric donors from whom two kidneys had been available for transplantation. We limited the analysis to kidney pairs that went to primary adult, single-organ renal transplant recipients. All pairs of which one kidney went to a six-antigen–matched recipient were excluded from the analysis. We then identified those kidney pairs that went to one recipient who had been on dialysis for less than 6 months, including preemptive transplants, and to another recipient who had been on dialysis for more than 2 years.
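
As an illustration only, this pair-selection logic can be sketched in a few lines of Python with pandas. The file name and every column name below (donor_id, recipient_age, transplant_number, organs_transplanted, donor_type, hla_mismatches, dialysis_months) are assumptions for the sketch, not the actual USRDS schema or the authors' extraction code.

```python
import pandas as pd

# Hypothetical flat extract of USRDS transplants; all names are illustrative.
tx = pd.read_csv("usrds_transplants_1988_1998.csv")

# Primary adult, single-organ, cadaveric transplants only.
eligible = tx[(tx.recipient_age >= 18)
              & (tx.transplant_number == 1)
              & (tx.organs_transplanted == 1)
              & (tx.donor_type == "cadaveric")]

# Keep donors from whom exactly two eligible kidneys were transplanted.
pairs = eligible.groupby("donor_id").filter(lambda g: len(g) == 2)

# Exclude pairs in which either kidney went to a six-antigen-matched
# recipient (coded here as zero HLA mismatches).
pairs = pairs.groupby("donor_id").filter(lambda g: (g.hla_mismatches > 0).all())

# Retain discordant pairs: one recipient with <6 months of dialysis
# (including preemptive transplants) and one with >24 months.
study_pairs = pairs.groupby("donor_id").filter(
    lambda g: (g.dialysis_months < 6).any() and (g.dialysis_months > 24).any())
```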

Study endpoints for this cohort of patients were overall graft survival, patient survival, death-censored graft survival, and patient survival with a functioning graft. We compared the study endpoints between the kidney pairs by Kaplan-Meier analysis and assessed the significance of observed differences with the log-rank test.
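
A minimal sketch of this comparison, using the hypothetical `study_pairs` table built above with assumed columns graft_years (time to graft loss or censoring, in years) and graft_loss (1 = event, 0 = censored), might look as follows with the lifelines library:

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

short = study_pairs[study_pairs.dialysis_months < 6]
long_ = study_pairs[study_pairs.dialysis_months > 24]

km_short = KaplanMeierFitter().fit(short.graft_years, short.graft_loss,
                                   label="<6 months ESRD")
km_long = KaplanMeierFitter().fit(long_.graft_years, long_.graft_loss,
                                  label=">24 months ESRD")

# Unadjusted 5- and 10-year graft survival estimates per group.
print(km_short.survival_function_at_times([5, 10]))
print(km_long.survival_function_at_times([5, 10]))

# Log-rank test for a difference between the two survival curves.
result = logrank_test(short.graft_years, long_.graft_years,
                      event_observed_A=short.graft_loss,
                      event_observed_B=long_.graft_loss)
print(result.p_value)
```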

In addition, we used a Cox proportional hazards model to obtain adjusted survival rates for short versus long pretransplant ESRD time. These models were adjusted for known risk factors for graft and patient survival such as recipient demographics (but not donor demographics), HLA match, panel reactive antibody (PRA), immunosuppressive regimen, and delayed graft function.
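
A sketch of such an adjusted model is below; the covariate list and all column names are assumptions chosen to mirror the factors named above, not the authors' actual model specification. Donor factors are deliberately omitted, as they are matched by design in the paired analysis.

```python
from lifelines import CoxPHFitter

# Recipient-side covariates only; names are illustrative assumptions.
covariates = ["long_esrd_time", "recipient_age", "recipient_black",
              "recipient_female", "hla_mismatches", "peak_pra",
              "delayed_graft_function", "cyclosporine_based"]

cph = CoxPHFitter()
cph.fit(study_pairs[covariates + ["graft_years", "graft_loss"]],
        duration_col="graft_years", event_col="graft_loss")
cph.print_summary()  # the paper reports RR 1.73 for the long ESRD time group

# Adjusted survival at 5 and 10 years, toggling the ESRD-time indicator
# while holding the other covariates at their means.
profile = study_pairs[covariates].mean().to_frame().T
for grp in (0, 1):
    print(cph.predict_survival_function(profile.assign(long_esrd_time=grp),
                                        times=[5, 10]))
```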

We also identified a second cohort of patients, in which all solitary adult first renal transplants between 1988 and 1998 were included. In this cohort of patients, we estimated differences in graft survival by Kaplan-Meier methods and calculated adjusted graft survival rates from a Cox proportional hazards model, which adjusted for the covariates listed above as well as for donor demographics. To evaluate the relative impact of waiting time on dialysis versus the effect of living versus cadaveric transplantation, we introduced an interaction term between ESRD time and transplant modality in the Cox model.
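
One common way to encode such an interaction, sketched here under assumed file and column names, is a dummy variable for each ESRD-time-by-modality combination with preemptive cadaveric transplants as the reference cell; the other adjustment covariates are omitted for brevity.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical extract of all solitary adult first transplants, 1988-1998.
cohort = pd.read_csv("usrds_first_tx_1988_1998.csv")

# One dummy per (donor type x ESRD-time category) cell.
cohort["cell"] = cohort.donor_type + "_" + cohort.esrd_time_cat
design = pd.get_dummies(cohort[["cell", "graft_years", "graft_loss"]],
                        columns=["cell"])
design = design.drop(columns=["cell_cadaveric_preemptive"])  # reference cell

cph = CoxPHFitter()
cph.fit(design, duration_col="graft_years", event_col="graft_loss")
cph.print_summary()  # e.g., living 0-6 months vs. reference: RR ~1.4 (Table 2)
```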

In addition, we retrospectively analyzed 77,469 patients with ESRD who had been on the cadaveric renal transplant waiting list for at least 2 years between 1988 and 1998. We used a Cox nonproportional hazards model with time to transplantation as a time-dependent covariate to estimate the risk for death associated with cadaveric renal transplantation compared with remaining on the waiting list (7).
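
A simplified sketch of a time-dependent-covariate analysis is below, assuming a hypothetical long-format table `waitlist_long` with one row per follow-up interval per patient and columns patient_id, start, stop, died, and transplanted, where `transplanted` flips from 0 to 1 at the date of transplantation. The paper's model additionally lets the transplant effect evolve with time since transplantation (Fig. 4), for example by splitting post-transplant follow-up into epochs; that refinement is not shown here.

```python
from lifelines import CoxTimeVaryingFitter

# Fit a Cox model with the time-varying covariate `transplanted`.
ctv = CoxTimeVaryingFitter()
ctv.fit(waitlist_long, id_col="patient_id", event_col="died",
        start_col="start", stop_col="stop")
ctv.print_summary()  # HR for `transplanted` < 1 indicates a survival benefit
```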

A probability of type 1 error less than 0.05 was considered the threshold of statistical significance. For multiple comparisons, we used Bonferroni methods to assess statistical significance. Statistical analysis was performed with SAS version 8.2 and SPSS version 11.0.
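
For illustration, a Bonferroni adjustment over a family of comparisons can be expressed as follows; the p-values are made up, and the actual analyses were run in SAS 8.2 and SPSS 11.0, not Python.

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.012, 0.0004, 0.049]  # illustrative raw p-values only
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
print(list(zip(adj_p, reject)))  # adjusted p-values and reject/retain flags
```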

RESULTS

Table 1 displays the demographics of recipients of paired kidneys in the short versus long ESRD time group. Donor demographics were identical between the groups because, of each kidney pair selected, one kidney went to the short ESRD time group and the other went to the long ESRD time group. Recipient age was significantly higher in the patients who had been on dialysis for more than 2 years. Peak percent PRA before transplantation was significantly higher in the long ESRD time group, but HLA matching was not significantly different between the groups. Cold ischemia time was virtually identical between the groups. Recipient gender distribution was similar, whereas African American recipients were more frequent in the long ESRD time group. Immunosuppressive therapy was equally distributed between the groups. Acute rejection and delayed graft function were both significantly more frequent in the long ESRD time group.

Table 1

By Kaplan-Meier analysis, 5- and 10-year graft survival rates for paired kidneys (Fig. 1) were significantly worse in the patients who had undergone more than 24 months of dialysis (58% and 29%, respectively) than in the patients who had been on dialysis for less than 6 months before transplantation (78% and 63%, respectively; P<0.001 for each). The 5- and 10-year unadjusted death-censored graft survival rates for paired kidneys were 86% and 77%, respectively, in patients who had been transplanted early compared with 77% and 57%, respectively, in patients who were transplanted late (P<0.001 for each).

Figure 1

Five- and 10-year unadjusted overall patient survival rates for paired kidneys were 89% and 76%, respectively, in the group on dialysis for less than 6 months compared with 76% and 43%, respectively, in the group on dialysis for more than 2 years (P<0.001 for each). The 5- and 10-year adjusted graft survival rates for paired kidneys were 78% and 60%, respectively, in the short ESRD time group and 65% and 41%, respectively, in the long ESRD time group; the relative risk for graft loss in the long ESRD time group was 1.73 (confidence interval 1.54–1.95, P<0.001).

The unadjusted graft survival of all cadaveric transplants between 1988 and 1998 is displayed in Figure 2. The 10-year overall unadjusted graft survival was 71% in the preemptive group, 49% in the 0 to 6 month dialysis group, 43% in the 6 to 12 month dialysis group, 38% in the 12 to 24 month dialysis group, and 35% in the group that had been on dialysis for more than 24 months.

Figure 2

For all living donated kidneys, the 10-year overall unadjusted graft survival (Fig. 3) was 78% in the preemptive group, 62% in the 0 to 6 month dialysis group, 55% in the 6 to 12 month dialysis group, 50% in the 12 to 24 month dialysis group, and 48% in the group that had been on dialysis for more than 24 months.

Figure 3

Relative risks for graft loss and 10-year adjusted graft survival rates in all cadaveric versus living transplants by ESRD time are displayed in Table 2. Preemptive cadaveric transplants served as the reference group in the interaction model to evaluate the relative impact of waiting time on cadaveric versus living transplants. Only preemptive living transplants did significantly better than preemptive cadaveric transplants (10-year adjusted graft survival rate of 75% vs. 69%, P<0.001). All other categories did significantly worse. Living transplants performed in patients who had been on dialysis for up to 6 months were associated with a significantly higher risk of graft loss (relative risk=1.4, P<0.001) than preemptive cadaveric transplants, with projected 10-year graft survival of 62% versus 69%, respectively. Cadaveric transplants performed after more than 2 years of maintenance dialysis had the worst projected 10-year graft survival, 39%.

Table 2

The results of the Cox nonproportional hazards model that investigated the relative benefit of transplantation versus dialysis in patients on dialysis for at least 2 years are displayed in Figure 4. Of the 77,469 patients still on the waiting list after 2 years, 15,414 eventually underwent cadaveric renal transplantation, whereas 61,055 remained on the waiting list until the study ended in June 1999. After 5 years, cadaveric renal transplantation was associated with a relative risk of 0.58 (P<0.001) compared with remaining on the cadaveric renal transplant waiting list. The evolution of this risk over time after cadaveric renal transplantation is displayed in Figure 4.

Figure 4

DISCUSSION

Our study demonstrates that waiting time on dialysis before transplantation is quantitatively one of the largest independent modifiable risk factors for graft loss after kidney transplantation. By analyzing pairs of donor kidneys that were transplanted into one recipient with a short ESRD time and one recipient with a long ESRD time, we can effectively exclude the possibility that part of the elevated risk for graft loss in the recipients who had undergone dialysis for a prolonged time was a result of donor characteristics not readily available from the database.

Because of the national donor policy to share six-antigen–matched kidneys regardless of waiting time and across organ procurement organizations, more six-antigen matches can be found among preemptive cadaveric transplants. To prevent this potential bias in the paired-kidney graft survival analysis, we excluded all kidney pairs of which either kidney went to a six-antigen–matched recipient. After this exclusion, the distribution of HLA matches was almost identical between recipients who received transplants early and those who received transplants late.

Although we excluded a donor selection bias, it is conceivable that the worse unadjusted graft survival in the long ESRD time group was to a certain degree a result of higher-risk recipients. On the other hand, higher PRA and more advanced recipient age are probably intrinsic characteristics of patients with prolonged waiting times. When adjusting in the multivariate analysis for these risk factors, including African American recipient race, we still observed an absolute difference of 12% worse graft survival in the long ESRD time recipients at 5 years and 19% worse graft survival at 10 years. This translates into a 15% relative difference in graft survival at 5 years and an overwhelming 32% relative difference at 10 years. These numbers quantify the real effect of the length of ESRD time on graft survival and make ESRD time the largest potentially modifiable risk factor for renal transplant outcomes.
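
For clarity, these relative differences are the absolute differences divided by the adjusted survival rate of the short ESRD time (reference) group; for example, at 10 years the adjusted rates reported above give

$$\frac{S_{\text{short}} - S_{\text{long}}}{S_{\text{short}}} = \frac{60\% - 41\%}{60\%} \approx 32\%.$$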

The magnitude of the impact of ESRD time on outcomes is also reflected by the multivariate model including all patients, showing a 44% worse 10-year graft survival in cadaveric renal transplant recipients on dialysis for more than 2 years. Even in living donated kidneys, in which a potential donor selection bias is less likely, the overall adjusted 10-year graft survival rate was 35% worse in recipients who had been on dialysis for prolonged periods of time.

Note that the beneficial effect of a living transplant compared with a cadaveric transplant gradually fades when living transplants are performed after the patients have spent prolonged times on dialysis. By analyzing ESRD time versus transplant modality with an interaction term in the multivariate analysis, we were able to evaluate the relative benefit of living transplantation versus waiting time on dialysis. The 10-year adjusted living graft survival for transplants performed after more than 2 years of dialysis is similar to the 10-year adjusted cadaveric graft survival for transplants performed within the first 6 months of dialysis initiation. In fact, much of the overall beneficial effect of living donation on graft survival shown in the literature (8,9) seems to be attributable to the shorter average ESRD times in these patients. At any given waiting time, living-donor recipients still have better graft survival rates than cadaveric-donor recipients; however, this effect is smaller than the effect of waiting time.

On the basis of these data, waiting time on dialysis for a kidney transplant should be considered when determining the optimal choice of transplant type for a patient approaching ESRD. Also on the basis of these data, a cadaveric kidney transplant with an average waiting time of 2 years (the U.S. average) yields a 48% worse 10-year graft survival compared with a preemptive living transplant. Obviously, waiting times vary widely across the United States, and pertinent information regarding the locally expected waiting time and the resulting adjusted 10-year graft survival rates in living versus cadaveric transplantation can be obtained from Table 2.

Despite the worse outcomes after transplantation in patients who received transplants after prolonged times on dialysis, the survival advantage of transplantation over dialysis was maintained even in the patients who had been on dialysis for more than 2 years. This suggests that whatever ongoing damage occurs to patients while they are on dialysis may be halted after transplantation. In fact, the relative long-term mortality benefit of transplantation over dialysis in this cohort of patients with ESRD times more than 2 years was similar to the survival benefit shown for the overall cohort of transplant recipients published by Wolfe et al. (7).

The reason that an increased waiting time on dialysis is associated with decreased graft and patient survival cannot be discerned from the data that we have presented. One possible explanation may be that, although dialysis is clearly a life-saving therapy, it is a less-than-perfect renal replacement modality; thus, the longer patients wait on dialysis for a transplant, the longer they are exposed to the chronic effects of end-stage renal failure and dialysis. It is well documented that patients on dialysis have alterations in the concentrations of a number of substances (e.g., homocysteine, advanced glycosylation end products, and lipoproteins) that may predispose them to both cardiovascular and renal allograft vascular damage (10–16). In addition, the poor nutrition, chronic inflammatory state, altered immunologic function, and inadequate clearance that often accompany patients with ESRD on dialysis (17,18) may predispose these patients to poorer tolerance of immunosuppressive agents after transplantation. Therefore, patients on long-term dialysis may be at a disadvantage when they finally receive their transplant.

CONCLUSION

Transplant waiting time on dialysis is one of the strongest independent modifiable risk factors for renal transplant outcomes. A large part of the advantage of living versus cadaveric transplantation may also be explained by this phenomenon. This effect is dominant enough that a cadaveric renal transplant recipient with an ESRD time of less than 6 months has graft survival equivalent to that of living-donor transplant recipients who wait on dialysis for more than 2 years.

Organ allocation models geared toward improving outcomes in patients with ESRD will have to take into account that changes in average waiting time are a major factor in determining posttransplant graft and patient survival. Because waiting times are increasing as a result of the widening gap between the demand for organs and organ donation, improvements in cadaveric graft survival seen over the past decade may be difficult to match in the coming decade.

Acknowledgment.

The authors thank Suzanne C. Johnson, who helped with the editing and review of this paper.

REFERENCES

1. Roake JA, Cahill AP, Gray CM, et al. Preemptive cadaveric renal transplantation: clinical outcome. Transplantation 1996; 62: 1411–1416.
2. Asderakis A, Augustine T, Dyer P, et al. Pre-emptive kidney transplantation: the attractive alternative. Nephrol Dial Transplant 1998; 13: 1799–1803.
3. Cosio FG, Alamir A, Yim S, et al. Patient survival after renal transplantation, I: the impact of dialysis pre-transplant. Kidney Int 1998; 53: 767–772.
4. Meier-Kriesche HU, Port FK, Ojo AO, et al. Effect of waiting time on renal transplant outcome. Kidney Int 2000; 58: 1311–1317.
5. Mange KC, Joffe MM, Feldman HI. Effect of the use or nonuse of long-term dialysis on the subsequent survival of renal transplants from living donors. N Engl J Med 2001; 344: 726–731.
6. Mange KC, Cherikh WS, Maghirang J, et al. A comparison of the survival of shipped and locally transplanted cadaveric renal allografts. N Engl J Med 2001; 345: 1237–1242.
7. Wolfe RA, Ashby VB, Milford EL, et al. Comparison of mortality in all patients on dialysis, patients on dialysis awaiting transplantation, and recipients of a first cadaveric transplant. N Engl J Med 1999; 341: 1725–1730.
8. Cecka JM. The UNOS Scientific Renal Transplant Registry: 2000. Clin Transpl 2000:1–18.
9. Ojo AO, Port FK, Mauger EA, et al. Relative impact of donor type on renal allograft survival in black and white recipients. Am J Kidney Dis 1995; 25: 623–628.
10. Zimmermann J, Herrlinger S, Pruy A, et al. Inflammation enhances cardiovascular risk and mortality in hemodialysis patients. Kidney Int 1999; 55: 648–658.
11. Lowrie EG. Acute-phase inflammatory process contributes to malnutrition, anemia, and possibly other abnormalities in dialysis patients. Am J Kidney Dis 1998; 32: S105–S112.
12. Wanner C, Zimmermann J, Quaschning T, et al. Inflammation, dyslipidemia and vascular risk factors in hemodialysis patients. Kidney Int Suppl 1997; 62: S53–S55.
13. Gris JC, Branger B, Vecina F, et al. Increased cardiovascular risk factors and features of endothelial activation and dysfunction in dialyzed uremic patients. Kidney Int 1994; 46: 807–813.
14. Degenhardt TP, Grass L, Reddy S, et al. The serum concentration of the advanced glycation end-product N epsilon-(carboxymethyl)lysine is increased in uremia. Kidney Int 1997; 52: 1064–1067.
15. Hricik DE, Wu YC, Schulak A, et al. Disparate changes in plasma and tissue pentosidine levels after kidney and kidney-pancreas transplantation. Clin Transplant 1996; 10: 568–573.
16. Friedlander MA, Witko-Sarsat V, Nguyen AT, et al. The advanced glycation endproduct pentosidine and monocyte activation in uremia. Clin Nephrol 1996; 45: 379–382.
17. Kaufmann P, Smolle KH, Horina JH, et al. Impact of long-term hemodialysis on nutritional status in patients with end-stage renal failure. Clin Invest Med 1994; 72: 754–761.
18. Descamps-Latscha B, Herbelin A, Nguyen AT, et al. Immune system dysregulation in uremia. Semin Nephrol 1994; 14: 253–260.
© 2002 Lippincott Williams & Wilkins, Inc.