Renal failure in patients with cirrhosis awaiting liver transplantation (LT) is common, with up to 8% requiring dialysis and a further 30% having varying degrees of impairment (1–4). Renal dysfunction post-LT is now one of the major healthcare issues in this cohort, and approximately 18% of LT recipients may develop an estimated glomerular filtration rate (eGFR) of less than 29 mL/min by 5 years posttransplantation (5). It has also been associated with reduced patient and liver graft survival (4–7).
There are a number of therapeutic options for prospective LT recipients with chronic kidney disease (CKD), and a combined liver and kidney transplantation (CLKT) may be appropriate in selected cases. In the United States, the number of CLKT recipients has increased 3-fold since 2002, partly attributed to the introduction of the model for end-stage liver disease (MELD) score (4, 8, 9). Meanwhile, the waiting list for a renal transplant alone continues to grow (10).
To optimize patient selection for CLKT, a consensus conference in 2008 proposed that those who met the following CLKT selection criteria should be considered: end-stage renal disease patients with confirmed cirrhosis and portal hypertension, and end-stage liver disease patients with a GFR chronically less than 30 mL/min, preferably with a renal biopsy confirming more than 30% glomerulosclerosis and interstitial fibrosis (8). Those with acute kidney injury (AKI), a minimum of 8 weeks of dialysis dependency, and a creatinine level more than 176.8 μmol/L should also be considered (4).
Despite this, it has been shown that prospective CLKT recipients are less likely to be on dialysis and have better eGFRs than those awaiting kidney transplants alone, raising concern about the severity of renal impairment in some CLKT candidates (4, 8). This is especially so in those with hepatorenal syndrome, because renal function usually recovers after LT alone (11, 12). A CLKT in these individuals could potentially result in a situation where patients have three functioning kidneys. In addition, it has been demonstrated that patient survival is lower in those receiving a CLKT compared with those receiving a kidney after LT, although both are inferior to a kidney allograft alone. This is believed to be because of higher perioperative (LT) mortality (4, 13–15). Cardiorespiratory reserve is important, as is the ability of the immune system to fight infection, because sepsis is one of the most common modes of death post-LT, especially in those with renal insufficiency. Hence, a CLKT should not be undertaken lightly.
Despite these reservations, there are several benefits to CLKT over other treatment modalities in those with CKD requiring LT. For those on dialysis pre-LT, CLKT has been shown to confer a survival advantage over LT alone (4, 6). Immunologic advantages may also be conferred by the LT as less acute and chronic renal transplant rejection has been reported after CLKT compared with kidney transplantation alone. A longer renal half-life has also been noted after CLKT compared with kidney transplant after LT (15).
Hence, there are pros and cons to CLKT, and a method to accurately identify patients, not on dialysis but still at risk of end-stage renal failure in the year post-LT, would be of definite benefit. Therefore, the first aim of this study was to help forecast future renal function for waitlisted LT patients by determining the average change in eGFR from the time of transplant assessment (TA) to that at 1-year post-LT. The second aim was to determine the clinical features at the TA and pre-LT that were predictive of a decline in eGFR to less than 30 mL/min by 1-year post-LT, with the purpose of using these variables to develop a risk score.
PATIENTS AND METHODS
Patients with chronic liver disease who had undergone assessment for a LT at King's College Hospital, London were identified from a prospectively compiled database. Clinical information along with laboratory and radiologic results were reviewed. Patients who were assessed and subsequently transplanted from April 2000 to December 2006 were used as the training dataset to identify the predictive variables and develop a risk score. Those being assessed for CLKT and those younger than 16 years or with acute liver failure were excluded from the training dataset. Data on a further similar subset of patients transplanted between January 2007 and July 2008 were used for external validation. Between April 2000 and July 2008, a small number of patients underwent a CLKT, and the score was further examined in this group of patients. Patients receiving a CLKT for familial amyloidosis without evidence of chronic liver disease were excluded, as were those with missing data.
The following parameters were evaluated as the potential predictors of decline in renal function at the time of TA: age, gender, race, cause of end-stage liver disease, any previous or subsequent LTs, a history of type 1 or 2 diabetes, hypertension (patients were deemed to be hypertensive if they reported this, had a documented history of hypertension in their medical records or letters of referral, were on antihypertensive medications, or had a recorded blood pressure more than 140/90 mm Hg on more than one occasion), a history of cardiovascular disease, smoking history and current medication record including antihypertensive agents, and the use of potentially nephrotoxic agents such as nonsteroidal antiinflammatory drugs, calcineurin inhibitors, and aminosalicylates (16, 17). The findings on clinical examination, such as the systolic and diastolic blood pressure, body mass index, the presence of ascites or splenomegaly, or a history of varices, were noted. Liver function was assessed by evaluating the international normalized ratio, bilirubin, albumin, MELD, and Child Pugh scores (9, 18). The MELD score was also calculated on the day of LT.
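The MELD score referenced above combines bilirubin, international normalized ratio, and creatinine on a logarithmic scale (9). As a minimal sketch, the widely used UNOS formulation can be computed as follows; note that the flooring of inputs at 1.0 and the creatinine cap at 4.0 mg/dL come from the UNOS implementation and are assumptions rather than details stated in this article:

```python
import math

def meld_score(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    """MELD score per Kamath et al. as implemented by UNOS:
    inputs below 1.0 are floored at 1.0, and creatinine is capped
    at 4.0 mg/dL (also set to 4.0 for dialysis-dependent patients)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(creat)
             + 6.43)
    return round(score)

# Example: bilirubin 2.5 mg/dL, INR 1.8, creatinine 1.2 mg/dL
print(meld_score(2.5, 1.8, 1.2))  # → 18
```

In this study the score was calculated both at the TA and on the day of LT, so a function of this shape would simply be applied twice to the two sets of laboratory values.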
Renal parameters, such as sodium, urinary protein excretion per 24 hr, and radiologic evidence of abnormal kidneys, were also determined. Renal function was evaluated at the TA using a stable serum creatinine or a peak creatinine if the patient was experiencing a deterioration in renal function. Renal function was also assessed on the day before LT and at 1-year post-LT using creatinine and the eGFR (calculated using the modification of diet in renal disease study equation), and the patients were stratified according to the CKD stage (19, 20). AKI (as defined by the AKI Network) developing between the time of the TA and the LT was noted (21). The underlying cause of AKI or an eGFR chronically (>3 months) less than 60 mL/min before LT was determined by reviewing the detailed letters and discharge summaries along with laboratory, microbiologic, and radiologic evidence. Histology was rarely available. The various diagnoses were separated into acute or chronic causes (19, 21, 22). Hepatorenal syndrome was defined using the latest consensus definition (22). Any renal replacement therapy requirements before the LT or dialysis requirements at 1-year post-LT were recorded. The duration of AKI or eGFR chronically less than 60 mL/min before LT was recorded. In addition to evaluating potential predictors of decline in renal function, the average changes in serum creatinine and eGFR from the time of TA to before the LT and at 1-year post-LT were determined. Finally, time on the waiting list was also evaluated.
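The eGFR values above were calculated with the modification of diet in renal disease (MDRD) study equation (19, 20). A minimal sketch of the abbreviated 4-variable form is shown below; the article does not state whether the original 186 coefficient or the later IDMS-traceable 175 was used, so 186 is assumed here, and the μmol/L-to-mg/dL conversion is included because this article reports creatinine in μmol/L:

```python
def egfr_mdrd(creatinine_umol_l: float, age: float, female: bool, black: bool) -> float:
    """Abbreviated 4-variable MDRD estimate in mL/min/1.73 m^2.
    The 186 coefficient is from the original equation; assays
    traceable to IDMS use 175 instead (an assumption here)."""
    scr_mg_dl = creatinine_umol_l / 88.4  # convert from umol/L to mg/dL
    egfr = 186.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: creatinine 97 umol/L in a 51-year-old white man
print(round(egfr_mdrd(97, 51, female=False, black=False)))  # → 75
```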
Post-LT variables such as AKI and liver function were not evaluated, because we specifically wanted to focus on the variables available to clinicians when the decision is being made whether to list the patient for a CLKT. In addition, calcineurin inhibitor dosing and exposure was not assessed because of calcineurin inhibitor sparing protocols used in patients with impaired renal function.
To create a risk score, methods similar to those used previously by other authors were used (23, 24). Using the training dataset, a model was created to identify the variables that predicted a decline in renal function to less than 30 mL/min by 1-year post-LT using univariate binary logistic regression analysis. Factors were deemed significant if the P value was less than 0.05. The significant factors were next examined by stepwise canonical discriminant function analysis using the Wilks' lambda method. Herein, patients were stratified based on the result of a discriminant function score. The discriminatory ability of the score was assessed by the Wilks' lambda and Chi-square test (25). Correlation between the variables that comprised the score was checked using the Pearson correlation coefficient. The accuracy of the score was examined using the area under the receiver operating characteristic (ROC) curve (AUC). A risk score cut point was determined using the point on the ROC curve nearest the (0, 1) point (26). Sensitivity, specificity, positive predictive values (PPV), and negative predictive values (NPV) were determined for the risk score and also for levels above and below the cut point. The predicted and actual renal outcomes post-LT were compared using the Kappa statistic, where a value of 0.6 to 0.8 indicates substantial agreement, and a value of 0 indicates poor agreement (27). Finally, the risk score was validated internally using leave-one-out cross-validation, where each patient was used for validation in turn and the remainder as the training dataset (25). It was also validated externally using an independent validation subset of patients (28). Continuous variables are expressed as means±standard deviations or standard error (SE) and categorical variables as numbers or percentages. The Student's t test was used to compare continuous variables, and the Chi-square test or two-tailed Fisher's exact test was used to compare categorical variables.
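Two of the statistical steps above, choosing the cut point nearest the (0, 1) corner of the ROC curve and computing the Kappa statistic, can be sketched in plain Python. The scores and labels below are illustrative toy values, not study data:

```python
import math

def nearest_01_cutpoint(scores, labels):
    """Choose the threshold whose (1 - specificity, sensitivity) point
    lies closest to the ideal (0, 1) corner of the ROC curve."""
    best_cut, best_dist = None, float("inf")
    for cut in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= cut and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < cut and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= cut and not y)
        tn = sum(1 for s, y in zip(scores, labels) if s < cut and not y)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        dist = math.hypot(1 - spec, 1 - sens)  # distance to (0, 1)
        if dist < best_dist:
            best_cut, best_dist = cut, dist
    return best_cut

def cohen_kappa(pred, actual):
    """Agreement beyond chance between predicted and actual outcomes."""
    n = len(pred)
    po = sum(p == a for p, a in zip(pred, actual)) / n  # observed agreement
    pe = sum((sum(p == c for p in pred) / n) * (sum(a == c for a in actual) / n)
             for c in set(pred) | set(actual))          # chance agreement
    return (po - pe) / (1 - pe)

# Toy data: higher scores should indicate progression (label 1)
scores = [0.2, 0.5, 1.1, 1.9, 2.4, 3.0, 3.6]
labels = [0, 0, 0, 0, 1, 1, 1]
cut = nearest_01_cutpoint(scores, labels)
pred = [int(s >= cut) for s in scores]
print(cut, cohen_kappa(pred, labels))  # → 2.4 1.0
```

On this cleanly separable toy data the chosen cut point classifies every case correctly, so Kappa is 1.0; with real data, as in the study, agreement is imperfect and Kappa falls below 1.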
Statistical analysis was performed using SPSS, version 14.0 (SPSS Inc., Chicago, IL).
RESULTS
Demographic and Clinical Characteristics
From April 2000 to December 2006, 513 patients with chronic liver disease underwent a TA and a subsequent LT. Of these, 368 patients were followed up for 1 year (training dataset). The remainder either died within the year (8%) or were followed up locally (20%) and were excluded. The causes of end-stage liver disease are outlined in Figure 1, and details on the clinical characteristics of the patients at the time of TA and before LT are illustrated in Table 1. From January 2007, a further 212 consecutive adult patients were identified (validation dataset), of whom 149 (70%) were followed up for 1 year. The remainder were excluded because they died, were followed up locally, or data were missing. There were 44 (30%) women and 105 (70%) men with a mean age of 51.5±12.2 years, not significantly different from the original training dataset. Twenty-one patients were assessed for and subsequently underwent a CLKT during the time period studied, and the score was evaluated in 14. The rest were excluded because the indication for CLKT was familial amyloidosis without evidence of chronic liver disease. Half were women and half were men, with a mean age of 57.6±12.0 years. The most common hepatic indications for CLKT were alcoholic liver disease (n=3) and polycystic liver disease (n=3). Other indications were cystic fibrosis, Budd Chiari syndrome, cryptogenic cirrhosis, hepatitis C, and schistosomiasis infection. The most common renal indications were polycystic kidney disease (n=3) and biopsy-proven membranoproliferative glomerulonephritis (n=3). Other indications were presumed diabetic, hypertensive, calcineurin inhibitor, and reflux nephropathy.
Before the LT, 57 (15%) patients had an eGFR less than 60 mL/min for more than 3 months (Fig. 2). The principal chronic etiologies included the following clinical diagnoses: diabetic nephropathy, type 2 hepatorenal syndrome, hypertensive nephropathy, calcineurin inhibitor nephropathy, and biopsy-proven glomerulonephritis (membranoproliferative, membranous, and IgA). Between the TA and the LT, 79 (21%) patients developed AKI of varying degrees of severity (stage 1, n=57; stage 2, n=10; and stage 3, n=12). The most common acute clinical diagnoses included prerenal impairment because of diuretics or hemorrhage, sepsis, hepatorenal syndrome, and one biopsy-proven diagnosis of acute tubular necrosis.
Overall, there was evidence of deterioration in renal function from the time of TA to before the LT and at 1-year post-LT, with an increase in mean serum creatinine from 97.2±30.4 to 106.5±53.5 (P<0.005) and 110.3±45.1 μmol/L (P<0.001), respectively. There were corresponding decreases in eGFR from 86.1±25.6 to 74.9±22.9 mL/min and 68.6±24.6 mL/min (P<0.001). Further detail on the eGFR measurements at the three stages is displayed in Figure 2. There was a greater proportion of LT recipients with an eGFR less than 60 mL/min before LT and at 1 year compared with at TA (P<0.001). The numbers of patients with an eGFR less than 30 mL/min remained relatively static between the TA and the 1-year post-LT although there was a temporary increase in those with an eGFR less than 15 mL/min before LT. The majority (82%, n=14) of those who had an eGFR less than 30 mL/min at the TA or before the LT experienced an improvement in renal function by 1-year post-LT and were correctly felt to have reversible causes for the deterioration in their renal function such as sepsis, acute tubular necrosis, or hepatorenal syndrome. All patients who required a period of renal replacement therapy before LT managed to recover renal function sufficiently to become dialysis independent. All those with an eGFR less than 30 mL/min at 1 year (n=7) had stage 2 or 3 CKD at the TA.
Risk Factor Identification, Risk Score Development, and Validation
A number of pre-LT factors were found to be significant predictors by univariate analysis, and these are identified in Table 2. The ultrasound abnormalities noted were reduced renal size, increased echogenicity, calculi, a duplex collecting system, renal cysts, hydronephrosis, an angiomyolipoma, a horseshoe kidney, nephrocalcinosis, and reduced cortical thickness.
The significant variables were next analyzed using stepwise discriminant analysis to generate a risk score. The discriminant function determined to stratify patients at the TA as likely or unlikely to progress to an eGFR less than 30 mL/min at 1-year post-LT is illustrated in Figure 3. The Wilks' lambda value was 0.55, indicating good discrimination between the two groups based on this model. There was a significant difference in scores between the two groups, with P less than 0.001 (Fig. 3).
The accuracy of the discriminant function was determined using ROC analysis, and the AUC was 0.996 with a SE of 0.003 and 95% confidence interval of 0.988 to 1 (Fig. 4). The point on the ROC curve nearest (0, 1) was used as the cut point for the score, as this is where sensitivity and specificity are jointly optimal, and this equated to a score of 2.16. The sensitivity, specificity, PPV, and NPV for the risk score before and after leave-one-out cross-validation, and for the score using the cut point at 2.16, were determined (Fig. 4). Sensitivity and PPV were inferior to specificity and NPV, with the best results being achieved by stratifying patients as above or below the cut point. The classification yielded a Kappa of 0.82 (SE 0.10), indicating substantial agreement between predicted and actual renal outcomes. Ten patients had a score of greater than 2.16. The seven patients in the original dataset who had an eGFR less than 30 mL/min at 1-year post-LT would have been predicted accurately by this model. The score incorrectly predicted that another three patients would have an eGFR less than 30 mL/min at 1 year. Two of these were felt to have reversible renal dysfunction at the time of TA, having AKI presumed to be because of sepsis and type 1 hepatorenal syndrome. Both of these patients required renal replacement therapy before transplantation but for less than 3 months' duration. Therefore, before applying the score, it would be important to exclude acute or potentially reversible conditions such as hepatorenal syndrome to improve the accuracy of the score. The third patient with a score more than 2.16 had presumed calcineurin inhibitor nephropathy with proteinuria and hypertension and an eGFR less than 60 mL/min for more than 1 year.
In the validation cohort, the mean eGFR at 1 year was 70.1±20.3 mL/min, similar to the training dataset. None of the patients in the validation dataset experienced a deterioration in eGFR to less than 30 mL/min by 1-year post-LT, and so ROC analysis could not be performed or sensitivity assessed. With the exception of one patient, all patients were correctly classified as unlikely to progress to stage 4 or 5 CKD with a mean score of −0.26±0.7. This gave a specificity of 99.3% (95.7%–100%). The one patient who was incorrectly identified as likely to progress had an eGFR of less than 60 mL/min for 8 years before LT with a long history of calcineurin inhibitor use.
The risk score was also applied retrospectively to 14 CLKT recipients. Twelve patients (85.7%) had a score more than 2.16 with a mean of 7.8±3.1. The two patients with scores less than 2.16 (1.3±0.4) were felt to have calcineurin inhibitor nephrotoxicity and diabetic nephropathy, respectively; both patients were unable to undergo renal biopsy because of bleeding risk. However, the presence of diuretic-resistant ascites, relative hypotension, and proteinuria less than 0.5 g/24 hr in both of these patients suggests that hepatorenal syndrome may have played a role. Both patients had an eGFR less than 60 mL/min for less than a year.
DISCUSSION
The first CLKT was reported in 1984, and since then, the number being performed has increased yearly (6, 29). For some individuals, a CLKT is the most appropriate therapeutic option, but despite the publication of helpful selection criteria, the decision is not always clear cut (4). This study was undertaken to provide further information for patient selection for CLKT. Deterioration in renal function in the year post-LT was evaluated, and a risk score to predict those likely to experience a deterioration in eGFR to less than 30 mL/min during the year post-LT was developed.
Overall, there was an 18% decrease in the proportion of patients with normal renal function and a 13% increase in those with an eGFR less than 60 mL/min from the time of TA to 1-year post-LT. Before LT, the percentage of patients with an eGFR less than 60 mL/min (16%) was lower than previously reported. Nair et al. (1) showed that 30% of the 19,261 individuals studied had an eGFR less than 70 mL/min before LT, but a different eGFR prediction equation was used. With regard to renal function at 1 year, Gonwa et al. (30) demonstrated that 3% of LT recipients had a creatinine level of more than 221 μmol/L or were on dialysis. This figure is similar to that seen in our cohort, where 2% of individuals had an eGFR less than 30 mL/min at 1 year, but different definitions were used. Similarly, Scientific Registry of Transplant Recipients data shows that only 2.4% of patients listed for LT were on dialysis or listed for a renal transplant by 1 year post-LT (4).
The average increment in serum creatinine and decrease in eGFR during the time period studied were 13.2±43 μmol/L and 11.2±23.5 mL/min, respectively. Cohen et al. (31) illustrated a greater decrease in GFR of approximately 20 mL/min in 353 patients during 1-year post-LT using iothalamate clearances but did not show a link between pre-LT renal dysfunction and post-LT CKD. In our study, there was a wide variation in the mean eGFR changes. Some patients remained stable and others progressed to dialysis by 1-year post-LT, suggesting that pre-LT renal function alone does not necessarily predict post-LT function. Risk or prediction scores are increasingly being used in clinical practice to guide treatment and resource use and to help determine prognosis. The key variables in our discriminatory prediction score were serum creatinine, history of hypertension, the degree of proteinuria (all determined at the time of TA), and the duration of renal impairment before LT. Using a cut point of 2.16, the score was shown to be accurate at correctly identifying patients not likely to experience a decline in eGFR to less than 30 mL/min, as it had better specificity and NPV than sensitivity and PPV. Therefore, this score is probably more powerful when deciding who does not require a CLKT. Two of the three inaccurately stratified patients had AKI, presumed to be because of sepsis and hepatorenal syndrome, emphasizing the point that those with AKI should be identified before applying the score, thereby avoiding inappropriate organ allocation. A previous risk score to predict creatinine at 1-year post-LT included the variables pre-LT creatinine, bilirubin, and duration of pretransplant renal dysfunction. However, there were only 67 patients in that unvalidated study (32).
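The shape of such a linear discriminant score over the four key variables can be illustrated in a short sketch. The coefficients below are invented for illustration only and are NOT the discriminant function coefficients reported in Figure 3 of the study; only the 2.16 cut point is taken from the text:

```python
# Hypothetical illustration: weights and constant are invented for this
# sketch, not taken from the study; only CUT_POINT (2.16) is reported.
WEIGHTS = {
    "creatinine": 0.012,   # per umol/L at transplant assessment
    "hypertension": 1.1,   # 1 if history of hypertension, else 0
    "proteinuria": 0.9,    # g/24 hr at transplant assessment
    "duration": 0.05,      # months of renal impairment pre-LT
}
CONSTANT = -2.5
CUT_POINT = 2.16  # scores above this predict eGFR < 30 mL/min at 1 year

def risk_score(creatinine, hypertension, proteinuria, duration):
    """Linear discriminant-style score over the four predictor variables."""
    return (WEIGHTS["creatinine"] * creatinine
            + WEIGHTS["hypertension"] * hypertension
            + WEIGHTS["proteinuria"] * proteinuria
            + WEIGHTS["duration"] * duration
            + CONSTANT)

# A patient with longstanding impairment scores above the cut point...
print(risk_score(180, 1, 1.0, 24) > CUT_POINT)  # → True
# ...while one with mild, recent dysfunction scores below it.
print(risk_score(90, 0, 0.2, 0) > CUT_POINT)    # → False
```

The point of the sketch is only that the score rises with each predictor, so longstanding impairment, hypertension, and proteinuria push a patient over the threshold while acute, recent dysfunction does not; the actual weights must be taken from Figure 3.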
Although we do appreciate that post-LT variables will affect the renal outcome, we chose to focus on the variables pre-LT as this is the only information available to clinicians when the decision regarding CLKT is being made. Other investigators have identified increasing age, female gender, diabetes, and pre-LT renal function as the predictors of post-LT CKD. Proteinuria has also been linked and is a strong indicator of intrinsic renal disease; however, unusually, pre-LT hypertension has not previously been shown to be an independent risk factor (31, 33–37).
Published selection criteria suggest that those with a GFR chronically less than 30 mL/min should be considered for CLKT (4). However, the decision can occasionally be difficult in those with a GFR marginally above this threshold. A high score in these patients would provide weight to a decision to list an individual for CLKT. Conversely, a low score in an individual with an eGFR just less than 30 mL/min suggests that the patient may not require a CLKT and may in fact have a reversible cause for renal impairment such as hepatorenal syndrome. The score may be particularly informative in those where it is not possible to perform a renal biopsy. It may also have uses independent of CLKT patient selection by prompting a switch to calcineurin inhibitor sparing immunosuppression protocols in those with scores more than 2.16.
There are some limitations to this study. Although a prospectively compiled database was used, this was reviewed retrospectively. The causes of AKI and CKD were determined retrospectively, which is not optimal. Ideally, the risk score would have predicted those who become dialysis dependent within the year post-LT, but the numbers of patients were too small, so instead an eGFR less than 30 mL/min was chosen, as it was believed that similar risk factors for deterioration in renal function would apply. The numbers of individuals developing CKD stages 4 and 5 in the training dataset were small, impacting on the accuracy of the score. None of the patients in the validation dataset developed an eGFR less than 30 mL/min, so sensitivity could not be established. This may be because of improved guidance in the literature regarding patient selection for CLKT or the increased use of calcineurin inhibitor sparing regimens of immunosuppression in patients with renal impairment pre-LT. Nonetheless, it was reassuring to note that the majority of those who had a CLKT actually had a score more than 2.16. An eGFR rather than radionuclide clearance was used, as data on the latter were not available. However, the modification of diet in renal disease equation has been shown to overestimate true GFR as determined by iothalamate clearance both pre- and post-LT in those with a GFR less than 40 mL/min. This makes it less likely that a patient being considered for a CLKT would be allocated a scarce renal transplant based on a spuriously low eGFR (20).
In conclusion, the potential rate of decline in eGFR during the time period from the TA to 1-year post-LT has been determined. Pre-LT clinical factors that predict a decline in renal function to less than 30 mL/min have been identified and used to develop a validated accurate risk score for identification of those patients. This information can be used in conjunction with previously published selection criteria and clinical judgment for the selection of patients who might benefit from CLKT. It will also help to avoid inappropriate allocation of renal transplants to individuals likely to recover renal function post-LT.
REFERENCES
1. Nair S, Verma S, Thuluvath PJ. Pretransplant renal function predicts survival in patients undergoing orthotopic liver transplantation. Hepatology 2002; 35: 1179.
2. Davis CL, Gonwa TA, Wilkinson AH. Identification of patients best suited for combined liver-kidney transplantation: Part II. Liver Transpl 2002; 8: 193.
3. Brown RS Jr, Lombardero M, Lake JR. Outcome of patients with renal insufficiency undergoing liver or liver-kidney transplantation. Transplantation 1996; 62: 1788.
4. Eason JD, Gonwa TA, Davis CL, et al. Proceedings of Consensus Conference on Simultaneous Liver Kidney Transplantation (SLK). Am J Transplant 2008; 8: 2243.
5. Ojo AO, Held PJ, Port FK, et al. Chronic renal failure after transplantation of a nonrenal organ. N Engl J Med 2003; 349: 931.
6. Gonwa TA, McBride MA, Anderson K, et al. Continued influence of preoperative renal function on outcome of orthotopic liver transplant (OLTX) in the US: Where will MELD lead us? Am J Transplant 2006; 6: 2651.
7. Markmann JF, Markmann JW, Markmann DA, et al. Preoperative factors associated with outcome and their impact on resource use in 1148 consecutive primary liver transplants. Transplantation 2001; 72: 1113.
8. Davis CL, Feng S, Sung R, et al. Simultaneous liver-kidney transplantation: Evaluation to decision making. Am J Transplant 2007; 7: 1702.
9. Kamath PS, Wiesner RH, Malinchoc M, et al. A model to predict survival in patients with end-stage liver disease. Hepatology 2001; 33: 464.
10. Norman DJ. The kidney transplant wait-list: Allocation of patients to a limited supply of organs. Semin Dial 2005; 18: 456.
11. Davis CL, Gonwa TA, Wilkinson AH. Pathophysiology of renal disease associated with liver disorders: Implications for liver transplantation. Part I. Liver Transpl 2002; 8: 91.
12. Marik PE, Wood K, Starzl TE. The course of type 1 hepato-renal syndrome post liver transplantation. Nephrol Dial Transplant 2006; 21: 478.
13. Fong TL, Bunnapradist S, Jordan SC, et al. Analysis of the United Network for Organ Sharing database comparing renal allografts and patient survival in combined liver-kidney transplantation with the contralateral allografts in kidney alone or kidney-pancreas transplantation. Transplantation 2003; 76: 348.
14. Ruiz R, Kunitake H, Wilkinson AH, et al. Long-term analysis of combined liver and kidney transplantation at a single center. Arch Surg 2006; 141: 735.
15. Simpson N, Cho YW, Cicciarelli JC, et al. Comparison of renal allograft outcomes in combined liver-kidney transplantation versus subsequent kidney transplantation in liver transplant recipients: Analysis of UNOS Database. Transplantation 2006; 82: 1298.
16. Chobanian AV, Bakris GL, Black HR, et al. The Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure: The JNC 7 report. JAMA 2003; 289: 2560.
17. Genuth S, Alberti KG, Bennett P, et al. Follow-up report on the diagnosis of diabetes mellitus. Diabetes Care 2003; 26: 3160.
18. Child CG. [Remote results of portal surgery in liver cirrhosis]. Rev Int Hepatol 1964; 14: 287.
19. National Kidney Foundation. K/DOQI clinical practice guidelines for chronic kidney disease: Evaluation, classification, and stratification. Am J Kidney Dis 2002; 39(2 suppl 1): S1.
20. Gonwa TA, Jennings L, Mai ML, et al. Estimation of glomerular filtration rates before and after orthotopic liver transplantation: Evaluation of current equations. Liver Transpl 2004; 10: 301.
21. Mehta RL, Kellum JA, Shah SV, et al. Acute Kidney Injury Network: Report of an initiative to improve outcomes in acute kidney injury. Crit Care 2007; 11: R31.
22. Salerno F, Gerbes A, Gines P, et al. Diagnosis, prevention and treatment of hepatorenal syndrome in cirrhosis. Gut 2007; 56: 1310.
23. van der Heijde DM, van 't Hof M, van Riel PL, et al. Development of a disease activity score based on judgment in clinical practice by rheumatologists. J Rheumatol 1993; 20: 579.
24. Zapata-Vazquez RE, Rodriguez-Carvajal LA, Sierra-Basto G, et al. Discriminant function of perinatal risk that predicts early neonatal morbidity: Its validity and reliability. Arch Med Res 2003; 34: 214.
25. Chan YH. Biostatistics 303. Discriminant analysis. Singapore Med J 2005; 46: 54.
26. Akobeng AK. Understanding diagnostic tests 3: Receiver operating characteristic curves. Acta Paediatr 2007; 96: 644.
27. Chmura Kraemer H, Periyakoil VS, Noda A. Kappa coefficients in medical research. Stat Med 2002; 21: 2109.
28. Harrell FE Jr, Lee KL, Mark DB. Multivariable prognostic models: Issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med 1996; 15: 361.
29. Margreiter R, Kramar R, Huber C, et al. Combined liver and kidney transplantation. Lancet 1984; 1: 1077.
30. Gonwa TA, Mai ML, Melton LB, et al. End-stage renal disease (ESRD) after orthotopic liver transplantation (OLTX) using calcineurin-based immunotherapy: Risk of development and treatment. Transplantation 2001; 72: 1934.
31. Cohen AJ, Stegall MD, Rosen CB, et al. Chronic renal dysfunction late after liver transplantation. Liver Transpl 2002; 8: 916.
32. Campbell MS, Kotlyar DS, Brensinger CM, et al. Renal function after orthotopic liver transplantation is predicted by duration of pretransplantation creatinine elevation. Liver Transpl 2005; 11: 1048.
33. Machicao VI, Srinivas TR, Hemming AW, et al. Impact of implementation of the MELD scoring system on the prevalence and incidence of chronic renal disease following liver transplantation. Liver Transpl 2006; 12: 754.
34. Pawarode A, Fine DM, Thuluvath PJ. Independent risk factors and natural history of renal dysfunction in liver transplant recipients. Liver Transpl 2003; 9: 741.
35. Corman SL, Coley KC, Schonder KS. Effect of long-term tacrolimus immunosuppression on renal function in liver transplant recipients. Pharmacotherapy 2006; 26: 1433.
36. Guitard J, Ribes D, Kamar N, et al. Predictive factors for chronic renal failure one year after orthotopic liver transplantation. Ren Fail 2006; 28: 419.
37. O'Riordan A, Wong V, McCormick PA, et al. Chronic kidney disease post-liver transplantation. Nephrol Dial Transplant 2006; 21: 2630.