Systemic lupus erythematosus (SLE) is an autoimmune disease with relatively high morbidity and mortality rates. Approximately 10% of patients with SLE develop end-stage renal disease (ESRD) within 10 years of presentation.1 Although kidney transplantation is generally considered the preferable treatment for ESRD,2,3 in the early days of kidney transplantation, patients with SLE seldom received transplants because of the assumed risk of recurrent lupus nephritis. After the Renal Transplant Registry reported survival rates in SLE transplant recipients comparable with those of non-SLE patients, kidney transplantation became widely accepted for patients with SLE.4 Nevertheless, some recent studies have shown that the outcomes of kidney transplantation among patients with SLE were inferior to those of non-SLE patients.5–7 Stone et al.8 reported that patients with SLE had more than a twofold risk of graft loss compared with a matched non-SLE control group. Because this group of transplant recipients may be at higher risk of graft failure, prediction of the transplant outcome is important to identify modifiable predictive factors and to address them to optimize survival rates.
The identification of risk factors related to the outcome of kidney transplantation in patients with SLE and the development of a model predicting outcome present a challenge. First, the incidence of SLE is relatively low, from 1.8 to 7.6 cases per 100,000 population.9 In addition, only a fraction of patients with SLE develop ESRD, and because of the limited number of donors, only a fraction of the ESRD population undergoes kidney transplantation. In a single clinical institution, typically too few patients with SLE undergo kidney transplantation to warrant a meaningful study, so we studied a large national dataset of kidney transplant recipients with SLE nephropathy.
This report represents the continuation of our work along the lines of building analytical tools to predict kidney transplant outcome. Some of our work has already been published in ASAIO Journal,10–12 and is, we believe, of possible interest to the biomedical engineering community. In general, decision support systems based on prediction models might further be integrated into some medical devices. In addition, analytical approaches similar to those described in this study can be adapted by biomedical engineering researchers to be used as part of technology evaluation.
As we described elsewhere,13 several steps are required to generate a clinically useful prediction model. There should be a large enough set of retrospective data available to fit the model; an adequate mathematical model should be selected to perform best for the given data (the step primarily addressed in this report); and the predictors of the outcome should be identified. The prediction formula is then generated by fitting the specific mathematical model to the “training” dataset. This formula then undergoes validation using the “testing” dataset, and specific metrics of model performance are generated and evaluated to establish the degree of clinical usefulness.
In this project, our goal was to develop prediction models of renal graft survival using different data mining methods and to compare the performance of those methods. Such a prediction tool may potentially be used in practice for risk stratification to identify recipients at greater risk of graft failure. In this group of patients with SLE nephropathy, who have inferior outcome as a group, it would be of very high clinical importance to identify individuals in whom graft survival is further compromised. Such a model should be used to apply extra efforts and greater attention to recipients at higher risk rather than denying them transplantation altogether.
Source of Data
The dataset used for this study was obtained from the US Renal Data System (USRDS), which collects information about all ESRD patients in the United States. We identified 4,754 transplant recipients with SLE in the dataset who had renal transplantation between July 1, 1985, and December 31, 2002.
The outcome of interest is the probability of 3-year allograft survival. To select the appropriate outcome, we considered several factors. The time point of a predictive model should reflect long-term (rather than short-term) allograft survival; it should be an accepted standard in the renal transplant outcome literature; and it should be methodologically feasible. Three-year allograft survival meets these criteria. It is considered a measure of long-term outcome, and this outcome has been used in retrospective analyses of USRDS data by other authors.14,15 Three-year follow-up is standard in clinical studies measuring long-term outcome, and 3-year graft survival is an accepted measure of success in transplant-related clinical trials and cost-effectiveness analyses.16 In our previous project, we used a similar argument to predict 3-year allograft survival in binary format in the general renal transplant population.17 From a methodological perspective, periods longer than 3 years would be associated with a much greater amount of censored data, significantly reducing the sample size available for predictive modeling.
As the goal of this study was to predict 3-year graft survival for patients with SLE, only those patients who had information on their 3-year graft outcome were included. Records in which the outcome was censored because of loss to follow-up or study completion before the 3-year follow-up were excluded. After applying these criteria, 3,313 recipients were included.
SAS version 9.1 and SAS Enterprise Miner version 4.1 (SAS Institute, Cary, NC) were used for data analysis and to build the predictive models. In this project, we used the large national transplant data registry, which includes hundreds of variables. To assist in adequate selection of the independent variables for the prediction models, we used Weka software.18,19 Weka is a well-respected open source machine learning package in the Data Mining and Knowledge Discovery fields.20–22
Data Preprocessing and Transformation
Duplicate records and outlier values believed to reflect erroneous data were deleted. For recipients and donors aged 18 years or older, the acceptable ranges of height and weight were set at 120–275 cm and 23–180 kg, respectively. The multiple imputation technique, using the SAS procedures PROC MI and PROC MIANALYZE, was used to handle missing continuous variables, such as cold storage time, donor age, donor weight, and donor height. The missing data were replaced with the mean value across the five datasets generated by PROC MI.
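The combine-by-averaging step described above can be sketched in a few lines. This is a simplified, pure-Python stand-in, not the SAS procedure itself: the variable name `cold_storage` and the Gaussian draw model for generating the five imputations are illustrative assumptions.

```python
import random
import statistics

def impute_missing(values, n_imputations=5, seed=0):
    """Fill missing entries (None) by drawing from the observed
    distribution n_imputations times and averaging the draws -- a
    simplified stand-in for generating five imputed datasets and
    replacing each missing value with its mean across them."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    mu = statistics.mean(observed)
    sigma = statistics.stdev(observed)
    completed = []
    for v in values:
        if v is None:
            draws = [rng.gauss(mu, sigma) for _ in range(n_imputations)]
            completed.append(statistics.mean(draws))
        else:
            completed.append(v)
    return completed

# cold storage time (hours) with two missing records (illustrative data)
cold_storage = [18.0, 22.5, None, 15.0, None, 20.0]
completed = impute_missing(cold_storage)
```

Observed values pass through unchanged; only the `None` entries are replaced by an average of the five imputed draws.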
We further transformed variables as follows. Thirty-seven induction medications were grouped into four homogeneous categories based on their physiologic mechanism: 1) antithymocyte globulin and antilymphocyte globulin; 2) OKT3; 3) methylprednisolone; and 4) anti-interleukin-2 antibodies. Thirty-five maintenance medications were also grouped into six homogeneous categories according to their physiologic mechanism: 1) cyclosporine, 2) prednisone, 3) tacrolimus, 4) mycophenolate mofetil, 5) azathioprine, and 6) target of rapamycin (TOR) inhibitors. In addition, in transforming the comorbidity data, the cardiovascular diseases were grouped together, including cardiac arrest, myocardial infarction, unstable angina, cardiac dysrhythmia, congestive heart failure, ischemic heart disease, and peripheral vascular disease. A binary outcome variable, 3-year graft survival, was created based on the graft survival time.
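Grouping individual agents into homogeneous categories amounts to a lookup table. A minimal sketch follows; the specific agent names other than OKT3 are illustrative examples, not the actual USRDS medication coding.

```python
# Illustrative lookup table mapping individual induction agents to the
# four physiologic categories described in the text.
INDUCTION_GROUPS = {
    "antithymocyte globulin": "ATG/ALG",
    "antilymphocyte globulin": "ATG/ALG",
    "OKT3": "OKT3",
    "methylprednisolone": "steroid",
    "basiliximab": "anti-IL-2 antibody",
    "daclizumab": "anti-IL-2 antibody",
}

def group_induction(agent: str) -> str:
    """Collapse an individual induction agent into its physiologic
    category; agents outside the table fall into 'other'."""
    return INDUCTION_GROUPS.get(agent, "other")
```

The same pattern applies to the six maintenance categories and the cardiovascular comorbidity grouping.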
Selection of the Predictors
The USRDS dataset includes hundreds of variables, and it is not computationally feasible to include all of these potential predictors in the classification models. After data preprocessing and transformation, we initially selected a subset of potential predictors known to be clinically relevant to graft outcome based on the current literature, including our previous report presenting factors associated with the outcome in transplant recipients with SLE.23 Subsequently, we used the software tool Weka v3.4 to automatically select predictors. The Weka algorithms “bestfirst”24,25 and “geneticsearch”26 were used. “Bestfirst” considers the estimated best partial solution. The “bestfirst” algorithm selected the following six variables: recipient age, recipient race, maintenance regimen including prednisone, maintenance regimen including TOR inhibitor, predominant ESRD modality, and whether dialysis was required during the first posttransplant week. The “geneticsearch” algorithm is a basic implementation of genetic search for learning Bayesian network structures. “Geneticsearch” selected eight variables: in addition to the six variables selected by “bestfirst,” it also selected donor type and maintenance regimen including tacrolimus.
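A greedy forward search in the spirit of “bestfirst” can be sketched as follows. The merit function here is a toy placeholder (Weka evaluates candidate subsets with its own subset evaluators), and the variable names are illustrative, not the actual USRDS attributes.

```python
def forward_select(candidates, merit):
    """Greedy forward attribute search: repeatedly add the candidate
    that most improves the subset's merit score; stop when no single
    addition improves it. A simplified analogue of best-first search
    without backtracking."""
    selected = []
    best = merit(selected)
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            break
        score, attr = max((merit(selected + [c]), c) for c in remaining)
        if score <= best:
            break
        best = score
        selected.append(attr)
    return selected

# Toy merit: reward overlap with a "truly useful" set, penalize size.
USEFUL = {"recipient_age", "recipient_race", "first_week_dialysis"}

def toy_merit(subset):
    return len(set(subset) & USEFUL) - 0.1 * len(subset)

candidates = ["recipient_age", "donor_height", "recipient_race",
              "first_week_dialysis", "donor_gender"]
selected = forward_select(candidates, toy_merit)
```

On this toy problem the search recovers exactly the three useful attributes and stops, because adding any further variable lowers the penalized merit.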
In addition to the Weka-selected attributes, we considered clinically relevant factors known or suspected to have a significant effect on kidney transplant graft survival, based on the clinical literature, domain expert opinion, and known plausible physiologic mechanisms. We combined the Weka-selected attributes with these additional variables, which included donor variables (cold storage time, age, gender, race, history of hypertension, cause of death, height, and weight) and recipient variables (number of blood transfusions, peak value for panel-reactive antibody level, duration of ESRD, number of matched human leukocyte antigens between donor and recipient, ethnicity, number of previous pregnancies, the induction and maintenance medications as described above, height, weight, history of hypertension and diabetes, Charlson comorbidity index, total number of kidney transplantations,17,27,28 and pretransplant renal replacement therapy modality29), along with several others. A total of 38 attributes were selected for model building, as indicated in Table 1.
Three data mining classification methods (classification trees, artificial neural networks, and logistic regression) were used to estimate the probability of the 3-year graft survival.
In the classification tree model, the split criterion was the χ2 test, and the significance level was set at <0.05. The size of the classification trees was chosen to minimize training and testing error. Generally, if the number of tree nodes is too small, the model performs poorly on both the training and testing datasets (underfitting). Conversely, if the number of tree nodes is too large, the model performs well on the training dataset but poorly on the testing dataset (overfitting). In this study, overfitting was avoided by choosing a larger minimum number of observations in a leaf and a larger number of observations required for a split search. On the other hand, a minimum number of observations set too high for this relatively small dataset might be suboptimal because of early stopping and potential underfitting. The default setting of this combination (minimum number of observations in a leaf, number of observations required for a split) for classification trees in SAS is (5, 18). Different combinations, (10, 40), (15, 50), and (20, 65), were tried when building the classification trees. The combination (15, 50) was chosen because it had the best performance with the largest number of observations, avoiding overfitting.
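The χ2 split criterion itself is straightforward to illustrate. The sketch below computes the Pearson χ2 statistic for a candidate binary split summarized as a 2 × 2 table; the counts are hypothetical, chosen only to be consistent with the 84% versus 58% survival proportions reported in the Results.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]:
    rows are the two branches of a candidate split, columns are
    3-year graft survival versus failure."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 84 of 100 grafts survive without first-week
# dialysis versus 58 of 100 with it.
stat = chi_square_2x2(84, 16, 58, 42)
# With 1 degree of freedom, values above 3.84 are significant at 0.05,
# so this split would be accepted under the criterion described above.
```

The tree-growing procedure evaluates this statistic for each candidate split and keeps only those exceeding the significance threshold, subject to the minimum-observation settings.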
In the neural networks, a feed-forward multilayer perceptron architecture was used. It included an input layer; a single hidden layer, which computes a weighted sum of the input predictors; and an output layer, which produces the predicted probability of class membership. We implemented a linear combination function and a hyperbolic tangent activation function.30 The model was parameterized to minimize error. The Levenberg-Marquardt training algorithm, a numerical method for solving nonlinear least-squares problems,31 was used; its advantage is its ability to converge in most cases. The training process iterated until it met a stopping criterion (200 iterations or minimal average error). In designing the artificial neural networks, we tried different ways to connect the input layer to the output layer. The area under the receiver-operator characteristic curve (AUC) increased by 3% when the input layer was connected to both the hidden layer and the output layer. We iterated over hidden layer sizes ranging from 3 to 30 units; no improvement was produced by additional hidden layer units.
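The forward pass of the chosen architecture (tanh hidden layer plus a direct input-to-output skip connection) can be sketched as follows. The weights are arbitrary placeholders, not fitted values from the study, and the logistic squashing at the output is an assumption made here to map the combined score to a probability.

```python
import math

def mlp_predict(x, W_h, b_h, w_ho, w_io, b_o):
    """Forward pass of a single-hidden-layer perceptron with a skip
    connection: inputs feed both the tanh hidden units and the
    output unit directly; the output score is squashed to (0, 1)."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W_h, b_h)]
    score = (sum(w * h for w, h in zip(w_ho, hidden))   # hidden -> output
             + sum(w * xi for w, xi in zip(w_io, x))    # skip connection
             + b_o)
    return 1.0 / (1.0 + math.exp(-score))

# Two inputs, three hidden units (as in Figure 3); placeholder weights.
x = [0.5, -1.0]
W_h = [[0.2, -0.4], [0.7, 0.1], [-0.3, 0.5]]
b_h = [0.0, 0.1, -0.1]
w_ho = [0.6, -0.2, 0.3]
w_io = [0.1, 0.05]
p = mlp_predict(x, W_h, b_h, w_ho, w_io, b_o=0.0)
```

Training (here, Levenberg-Marquardt) adjusts the weight arrays to minimize error; only the prediction step is shown.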
Logistic regression is commonly used to describe the relationship between a binomial outcome variable and predictors. In the logistic regression, all 38 preselected attributes were entered into the model. The SAS procedure DMREG was used.
We used the AUC to evaluate the models. The AUC reflects the model's ability to discriminate between graft survival and failure. The estimated performance of the three data mining methods, as measured by AUC, was validated and compared using 10-fold cross-validation: the dataset was randomly partitioned into 10 subsets, one of which was used as the testing dataset, while the remaining nine were used as the training set. The process was iterated 10 times, so that 10 testing datasets and 10 training datasets were built. Ten different models were generated for each of the three methods mentioned earlier, and the average performance of the 10 models was considered an indicator of the performance of the method. The AUC and its 95% confidence interval (CI) were calculated with the “nonparametric comparison of AUC” application32,33 provided by SAS. Both analysis of variance and the Kruskal-Wallis test were used to compare the results of the three methods; paired comparisons used the Bonferroni adjustment.
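The fold construction and the nonparametric AUC estimate can be sketched as follows. The rank-based AUC below is the point estimate underlying the DeLong comparison cited above; the labels and scores are toy data, not study results.

```python
import random

def ten_fold_indices(n, k=10, seed=0):
    """Randomly partition record indices 0..n-1 into k disjoint folds;
    each fold serves once as the testing set while the rest train."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def auc(labels, scores):
    """Rank-based (Mann-Whitney) AUC: the probability that a randomly
    chosen surviving graft is scored higher than a randomly chosen
    failed graft, counting ties as 1/2."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: one positive (score 0.6) is out-ranked by one negative (0.7),
# so 8 of the 9 positive-negative pairs are ordered correctly.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
```

In the study, a model is fit on each set of nine training folds, `auc` is computed on the held-out fold, and the ten resulting values are averaged per method.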
Figure 1 shows the diagram of the classification tree based on the subset of variables selected by Weka supplemented with variables of known clinical relevance (total of 38 variables). The variable with the most discriminative power (the root node) was whether the recipient required dialysis within the first week after transplantation. The graft has an 84% chance of 3-year survival if the patient did not need dialysis during the first week versus a 58% chance if dialysis was required during the first posttransplantation week. For the patients who needed dialysis during the first week after transplantation, the graft has a better chance of survival if the maintenance immunosuppressive regimen at the time of discharge included prednisone. For those who did take prednisone, grafts have a lower probability of survival if the predominant renal replacement therapy modality before transplantation was hemodialysis and the recipient's age was more than 50 years.
There is another category of recipients who did not need dialysis during the first week after transplantation yet had decreased graft survival time: African Americans whose predominant renal replacement therapy modality was hemodialysis and whose donor died from cerebrovascular disease or stroke. The probability of graft survival was greater if the patient's predominant renal replacement therapy was peritoneal dialysis or transplantation, or if there was no predominant renal replacement therapy modality and the duration of ESRD was short. Figure 2 shows the diagram of the classification tree based on only the six variables selected by Weka. Similarly, recipients who required first-week dialysis, who were African American, or whose maintenance immunosuppressive regimen at the time of discharge did not include prednisone had worse outcomes. A major difference in the model based on the Weka-only variables (total of six variables) was that the predominant renal replacement therapy node was split on kidney transplantation instead of hemodialysis.
Logistic Regression Model.
Table 2 lists the maximum likelihood estimates of predictors from the logistic regression based on the Weka feature selection supplemented with variables of known clinical relevance (total of 38 variables). The following variables had a significant association with graft survival: donor characteristics (age and gender); recipient characteristics (pretransplant dialysis modality, i.e., the modality immediately before transplantation; predominant ESRD modality over the pretransplant course; race; age; the need for dialysis during the first posttransplant week; number of matched human leukocyte antigens; total number of transplants; peak panel-reactive antibody level; and history of previous pregnancy); and the induction and maintenance medication regimens. For the model based on the six variables selected by Weka (Table 3), all variables had a significant association with graft survival except the need for dialysis during the first posttransplant week.
Artificial Neural Network.
Figure 3 illustrates the structure of the artificial neural network. The hidden layer had three units, and the input layer was directly linked to both the hidden layer and the output layer. The interaction of a set of attributes in an artificial neural network is difficult to interpret. Posthoc methods for interpreting artificial neural networks exist, but they are complex and time consuming. Sensitivity analysis can be used to describe the effect of an individual input variable by applying a range of values to that variable (while holding the other input variables constant) and observing its influence on the outcome variable. Because of the availability of more easily interpretable models (logistic regression and the classification trees), we did not perform this analysis. We do, however, compare the overall classification performance of this model with the others in the next section.
Long List of Predictors.
The performance of the logistic regression (AUC: 0.74, 95% CI: 0.72–0.77) based on Weka feature selection supplemented with variables of known clinical relevance (38 variables) was significantly better (p < 0.05) than that of the classification trees (AUC: 0.70, 95% CI: 0.67–0.72). The difference, however, did not reach statistical significance (p = 0.218) when compared with the artificial neural networks (AUC: 0.71, 95% CI: 0.69–0.73). The performance of the artificial neural networks was not significantly better (p = 0.693) than that of the classification trees.
Short List of Predictors.
We also built prediction models using the subsets of variables obtained with the “bestfirst” and “geneticsearch” selection algorithms. As the AUC of the artificial neural network based on six variables (AUC = 0.73) was slightly better than that based on eight variables (AUC = 0.72), we built the classification tree and logistic regression using the six “bestfirst”-selected variables. The performance of the logistic regression based on the six variables selected by Weka (AUC: 0.73, 95% CI: 0.71–0.75) did not significantly differ from that of either the classification trees (AUC: 0.70, 95% CI: 0.68–0.73) or the artificial neural networks (AUC: 0.73, 95% CI: 0.70–0.75). It seems that the models based on the short list of predictors (six variables) performed as well as the models based on the long list of predictors (38 variables).
Our group previously presented three reports in the ASAIO Journal related to the topic of outcome prediction in kidney transplantation.10–12 Our interest in developing tools to predict the outcome of kidney transplantation is based on the importance of the topic to clinical practice. Although transplantation offers a substantially better outcome than dialysis, there is a significant shortage of organs available for transplantation. Accurate prediction of kidney transplant outcome based on the individual characteristics of the patient would be an important step toward personalized medicine and potentially improved survival of recipients and transplanted organs. Prediction of graft survival could affect the pretransplant strategy and posttransplant care and facilitate decision making regarding immunosuppressive medication and other potentially modifiable factors. It might help to select optimal donor-recipient pairs, direct more attention to those with a potentially inferior outcome, and support adequate patient counseling. Prediction modeling techniques can also be used in education, by modeling different clinical scenarios with the audience. Patients with a poor predicted outcome might require closer follow-up and monitoring and a lower threshold for clinic visits and diagnostic procedures, such as kidney biopsy. One has to note that this risk stratification approach is not meant to deny organ transplantation to people at higher risk of graft failure but rather to identify them so that they receive extra attention.
Accurate prediction of the outcome is challenging because of the numerous variables affecting the outcome, their unclear association with graft survival, and complex interactions among the predictors. In our previous ASAIO Journal publications, we addressed several aspects of prediction modeling in kidney transplantation. The first step was to identify the role of some poorly studied factors in outcome. Among those, we evaluated the association of cardiovascular disease history with kidney transplant outcome.10 We proceeded to build a prediction model of graft survival in the general population of kidney transplant recipients.11 This model was based on the national samples of data collected by the USRDS. In our most recent ASAIO Journal publication, we tested the performance of the models developed in the USRDS population by applying them to the local dataset at the University of Utah Health Care System.12
In the present report, using a relatively narrow group of patients with SLE nephropathy, we asked which mathematical model would perform best. Although more powerful and sophisticated models are available, we hypothesized that simpler, more intuitive, and user-friendly models might provide comparable performance. We applied three well-known data mining approaches (classification trees, logistic regression, and artificial neural networks) to the dataset of kidney transplant recipients with ESRD caused by SLE. Although artificial neural networks are capable of modeling complex, nonlinear relationships, it is difficult to interpret their results in terms of the effects of individual predictors. Logistic regression and classification trees are more intuitive, easier to explain to practitioners, and easier to implement.
We formatted and validated the dataset before analysis and applied several feature selection techniques to the data. As our goal was to predict graft survival as a binary variable, rather than a censored outcome, the dataset did not include censored data.
Identifying predictive factors out of hundreds of variables in a large national renal transplant registry is challenging. We selected variables using objective criteria in combination with automated search algorithms, as well as opinion from domain experts. Our approach to selection went beyond simply using computer algorithms without any prior knowledge of the underlying mechanisms and potential physiologic associations. We emphasize the models built using 38 predictors because of the higher discrimination attained using logistic regression. However, given the overlapping CIs, either subset of variables could be considered a reasonable choice. More parsimonious models that require only six input variables are more practical and easier to implement broadly across different clinical settings.
Interestingly, the use of mycophenolate mofetil (MMF) was not selected by the models as a predictor of long-term graft survival. There are several potential explanations. It is possible that the use of MMF does not provide a long-term benefit in this selected group of patients with SLE nephropathy. The benefit of MMF in preventing acute rejection was demonstrated in the early days of using the drug34; however, the long-term benefit was disputed for some time. In a relatively small study of 80 patients, the authors demonstrated a beneficial effect of MMF on the incidence of acute rejection, although they did not show a long-term positive effect of the drug.35 It might be a matter of statistical power, as a larger retrospective analysis actually demonstrated a positive long-term effect of MMF.36,37 In addition, a recent systematic review indicated that the literature does support a positive effect of MMF on the rate of acute rejection and on graft loss.38 It is also possible that the use of MMF (although beneficial in comparison with azathioprine) was not selected by the feature selection algorithms because the association between MMF use and graft survival is better explained by other predictors. Specifically, the use of MMF is highly collinear with the transplant era (and in fact, the use of MMF was included in our modeling as a proxy for transplant era). As long-term graft survival did not change dramatically over the study period (despite better short-term indicators), MMF did not become a strong predictor of the outcome in this analysis. That might be true even though MMF has been demonstrated to be advantageous in head-to-head studies against azathioprine.
We compared the performance of the models and demonstrated that among the models based on 38 predictors, the AUC of the logistic regression was noticeably higher than that of the other two models. It is interesting that despite their higher complexity and flexibility, the artificial neural networks did not outperform the logistic regression and the classification trees in this study. This could be due to the relatively small size of the dataset, some limitation of the concepts represented in the dataset, or our use of a filter approach to feature subset selection. A larger amount of training data may improve the performance of the artificial neural networks. On the other hand, artificial neural networks do not always outperform logistic regression39,40 or classification trees.41 A methodology study reviewed 72 articles and found that 51% showed artificial neural networks were better than logistic regression, 7% showed logistic regression was better than artificial neural networks, and 42% showed no difference between the two.42 Given the equivalent performance of the more readily interpretable logistic regression and classification tree models, those models may be more appropriate than the artificial neural network model for clinical decision support systems. Logistic regression would be preferable, as it showed high discrimination ability and because it scales well, implements easily, and runs quickly. The results of the classification trees are consistent with current medical knowledge. Recipients who required dialysis during the first week had a lower graft survival rate than those who demonstrated immediate graft function.43 Shorter duration of ESRD44 and younger donor age17 are related to better graft survival.
A predominant renal replacement therapy modality of hemodialysis29 and African American recipient race17 are associated with worse graft survival.
Some limitations of the study should be mentioned. The dataset used was based on kidney transplant procedures performed between 1985 and 2002. New maintenance medications were introduced during the 1990s; therefore, the medications used for patients in the 1980s may differ from those used in later years. In addition, some variables began to be collected by the United Network for Organ Sharing (UNOS) only in 1994. Although more recent data are desirable and certain changes in the medical approach to these patients have indeed taken place, we judged that these changes were not sizeable enough to introduce a significant amount of heterogeneity into the data.
In conclusion, we generated several models predicting 3-year allograft survival in kidney transplant recipients with systemic lupus erythematosus. The performance of the logistic regression and classification tree models was not inferior to that of the more complex artificial neural network. Models based on six predictors performed essentially the same as the models based on 38 predictors.
The data reported here have been supplied by the USRDS. The interpretation and reporting of these data are the responsibility of the authors and in no way should be seen as official policy or interpretation of the U.S. government.
1. Stone JH: End-stage renal disease in lupus: Disease activity, dialysis, and the outcome of transplantation. Lupus
7: 654–659, 1998.
2. Davis CL, Delmonico FL: Living-donor kidney transplantation: A review of the current practices for the live donor. J Am Soc Nephrol
16: 2098–2110, 2005.
3. Wolfe RA, Ashby VB, Milford EL, et al
: Comparison of mortality in all patients on dialysis, patients on dialysis awaiting transplantation, and recipients of a first cadaveric transplant. N Engl J Med
341: 1725–1730, 1999.
4. Renal transplantation in congenital and metabolic diseases. A report from the ASC/NIH renal transplant registry. JAMA
232: 148–153, 1975.
5. Chelamcharla M, Javaid B, Baird BC, et al
: The outcome of renal transplantation among systemic lupus erythematosus patients. Nephrol Dial Transplant
22: 3623–3630, 2007.
6. Javaid B, Goldfarb-Rumyantzev AS. Renal allograft and patient survival in patients with systemic lupus erythematosus: worse than expected? Am J Transplantation
5(suppl 11): 520, Abstract 1432, 2005.
7. Nyberg G, Karlberg I, Svalander C, et al
: Renal transplantation in patients with systemic lupus erythematosus: Increased risk of early graft loss. Scand J Urol Nephrol
24: 307–313, 1990.
8. Stone JH, Amend WJ, Criswell LA: Outcome of renal transplantation in ninety-seven cyclosporine-era patients with systemic lupus erythematosus and matched controls. Arthritis Rheum
41: 1438–1445, 1998.
9. CDC: Incidence of systemic lupus erythematosus. Available at: http://www.cdc.gov/arthritis/basics/lupus.htm
. Accessed on June 2, 2011.
10. Petersen E, Baird BC, Shihab F, et al
: The impact of recipient history of cardiovascular disease on kidney transplant outcome. ASAIO J
53: 601–608, 2007.
11. Krikov S, Khan A, Baird BC, et al
: Predicting kidney transplant survival using tree-based modeling. ASAIO J
53: 592–600, 2007.
12. Tang H, Hurdle JF, Poynton M, et al
: Validating prediction models of kidney transplant outcome using single center data. ASAIO J
57: 206–212, 2011.
13. Goldfarb-Rumyantzev AS: Personalized medicine and prediction of outcome in kidney transplant. Am J Kidney Dis
56: 817–819, 2010.
14. Ojo AO, Leichtman AB, Punch JD, et al
: Impact of pre-existing donor hypertension and diabetes mellitus on cadaveric renal transplant outcomes. Am J Kidney Dis
36: 153–159, 2000.
15. Terasaki PI, Cecka JM, Gjertson DW, Takemoto S: High survival rates of kidney transplants from spousal and living unrelated donors. N Engl J Med
333: 333–336, 1995.
16. Yao G, Albon E, Adi Y, et al
: A systematic review and economic model of the clinical and cost-effectiveness of immunosuppressive therapy for renal transplantation in children. Health Technol Assess
10: iii–iv, ix–xi, 1–157, 2006.
17. Goldfarb-Rumyantzev AS, Scandling JD, Pappas L, et al
: Prediction of 3-yr cadaveric graft survival based on pre-transplant variables in a large national dataset. Clin Transplant
17: 485–497, 2003.
18. Mark Hall, Eibe Frank, Geoffrey Holmes, et al
: The WEKA data mining software: An update. SIGKDD Explorations
11: 10–18, 2009.
19. Weka Website. Available at: http://www.cs.waikato.ac.nz/ml/weka/. Accessed on June 2, 2011.
20. SIGKDD Service Award. Available at: http://www.sigkdd.org/awards_service.php#2005s. Accessed on June 2, 2011.
21. KDnuggets News on SIGKDD Data Mining and Knowledge Discovery Service Award. Available at: http://www.kdnuggets.com/news/2005/n13/2i.html. Accessed on June 2, 2011.
22. KDnuggets: Winner of SIGKDD Data Mining and Knowledge Discovery Service Award 2005. Available at: http://www.kdnuggets.com/news/2005/n13/2i.html. Accessed on June 2, 2011.
23. Tang H, Chelamcharla M, Baird BC, et al: Factors affecting kidney-transplant outcome in recipients with lupus nephritis. Clin Transplant 22: 263–272, 2008.
24. Zhang W: State-Space Search: Algorithms, Complexity, Extensions, and Applications. Springer, 1999.
25. Pearl J: Heuristics: Intelligent Search Strategies for Computer Problem Solving. Reading, MA, Addison-Wesley, 1984.
26. Goldberg DE: Genetic Algorithms in Search, Optimization, and Machine Learning. Reading, MA, Addison-Wesley, 1989.
27. Goldfarb-Rumyantzev AS, Hurdle JF, Baird BC, et al: The role of pre-emptive re-transplant in graft and recipient outcome. Nephrol Dial Transplant 21: 1355–1364, 2006.
28. Kasiske BL, Snyder JJ, Matas AJ, et al: Preemptive kidney transplantation: The advantage and the advantaged. J Am Soc Nephrol 13: 1358–1364, 2002.
29. Goldfarb-Rumyantzev AS, Hurdle JF, Scandling JD, et al: The role of pretransplantation renal replacement therapy modality in kidney allograft and recipient survival. Am J Kidney Dis 46: 537–549, 2005.
30. Matignon R: Neural Network Modeling Using SAS Enterprise Miner. AuthorHouse, 2005.
31. Levenberg K: A method for the solution of certain problems in least squares. Quart Appl Math 2: 164–168, 1944.
32. DeLong ER, DeLong DM, Clarke-Pearson DL: Comparing the areas under two or more correlated receiver operating characteristic curves: A nonparametric approach. Biometrics 44: 837–845, 1988.
33. Puri ML: Nonparametric Methods in Multivariate Analysis. Malabar, FL, Krieger Pub Co, 1991.
34. The Tricontinental Mycophenolate Mofetil Renal Transplantation Study Group: A blinded, randomized clinical trial of mycophenolate mofetil for the prevention of acute rejection in cadaveric renal transplantation. Transplantation 61: 1029–1037, 1996.
35. Rippin SJ, Serra AL, Marti HP, Wüthrich RP: Six-year follow-up of azathioprine and mycophenolate mofetil use during the first 6 months of renal transplantation. Clin Nephrol 67: 374–380, 2007.
36. Ojo AO, Meier-Kriesche HU, Hanson JA, et al: Mycophenolate mofetil reduces late renal allograft loss independent of acute rejection. Transplantation 69: 2405–2409, 2000.
37. Meier-Kriesche HU, Ojo AO, Leichtman AB, et al: Effect of mycophenolate mofetil on long-term outcomes in African American renal transplant recipients. J Am Soc Nephrol 11: 2366–2370, 2000.
38. Knight SR, Russell NK, Barcena L, Morris PJ: Mycophenolate mofetil decreases acute rejection and may improve graft survival in renal transplant recipients when compared with azathioprine: A systematic review. Transplantation 87: 785–794, 2009.
39. Brier ME, Ray PC, Klein JB: Prediction of delayed renal allograft function using an artificial neural network. Nephrol Dial Transplant 18: 2655–2659, 2003.
40. Nguyen T, Malley R, Inkelis S, Kuppermann N: Comparison of prediction models for adverse outcome in pediatric meningococcal disease using artificial neural network and logistic regression analyses. J Clin Epidemiol 55: 687–695, 2002.
41. Lee SM, Kang JO, Suh YM: Comparison of hospital charge prediction models for colorectal cancer patients: Neural network vs. decision tree models. J Korean Med Sci 19: 677–681, 2004.
42. Dreiseitl S, Ohno-Machado L: Logistic regression and artificial neural network classification models: A methodology review. J Biomed Inform 35: 352–359, 2002.
43. Pieringer H, Biesenbach G: Risk factors for delayed kidney function and impact of delayed function on patient and graft survival in adult graft recipients. Clin Transplant 19: 391–398, 2005.
44. Goldfarb-Rumyantzev A, Hurdle JF, Scandling J, et al: Duration of end-stage renal disease and kidney transplant outcome. Nephrol Dial Transplant 20: 167–175, 2005.