In recent years, most transplant laboratories have introduced solid phase assays into their diagnostic repertoire in an effort to maximize the diagnostic accuracy of HLA antibody detection. Luminex-based assays using fluorescent HLA-coated microbeads (single-antigen beads [SAB]) are extremely sensitive, and with the use of broad panels of HLA antigens/alleles, they allow for detailed characterization of complex reactivity patterns.1 The advantages of SAB over previous serology-based technology have led to its implementation as the primary method for assessing immunologic risk. However, the substantially increased sensitivity and granularity of SAB results pose major interpretive challenges for clinical decision making.1
Preformed donor-specific HLA antibodies (DSA) uncovered by SAB are associated with an increased risk of antibody-mediated rejection (AMR) and graft loss—even in patients with a negative flow cytometric crossmatch.2 The predictive accuracy of SAB results, however, has its limitations, as reflected by an uncomplicated posttransplant course in many DSA-positive patients. A major question remains: how to handle transplant candidates who present with preformed donor reactivity above the low detection thresholds of SAB assays. Clinical decisions may range from transplantation without any change in immunosuppression, to recipient desensitization or potent induction therapy in an effort to prevent rejection, to a decision not to proceed with transplantation (Figure 1).
Several variables have been proposed as a way to improve identification of clinically relevant DSA. These include mean fluorescence intensity (MFI) levels, HLA class specificity, the number of detected DSA, and the use of SAB assays that detect complement-binding ability. MFI, a frequently used dimensionless parameter, is at best semiquantitative and may only partly reflect antibody concentration and binding affinity. The interpretation of HLA antibody measurements is compromised by various technical variables pertaining to the SAB reagent (variable antigen density, presence of denatured HLA leading to false-positive reactivity) and by interferences often present in patient sera (complement-mediated interference, bead saturation with antibody). However, even a perfect measure of antibody concentration and binding strength would have limited predictive power, because many other immunologic factors—beyond a single snapshot of DSA detection in serum—may critically determine the fate of a transplanted organ.
Nonetheless, despite all these drawbacks, it has become evident that recording MFI levels might add significantly to the predictive accuracy of SAB-based DSA detection3,4 (the relevance of complement-binding assays in this context remains controversial5). A major challenge now is the definition of uniform test thresholds that reliably guide clinical decisions, most importantly, the definition of unacceptable DSA precluding transplantation. In recent years, arbitrary MFI thresholds to define unacceptable antigen mismatches (UAM) have been proposed, for example, those published by Marfo et al6 or the German Society for Immunogenetics.7 UAM definition criteria and the underlying SAB thresholds, however, vary greatly, and their role in clinical decision making—in the context of crossmatch results—is not well defined (Figure 1).
In this issue of Transplantation, Zecher et al8 report the results of a retrospective study reevaluating a single-center cohort of 211 recipients of a deceased donor renal allograft. All included patients were transplanted in the presence of a negative lymphocytotoxic crossmatch and were placed on a uniform immunosuppressive schedule consisting of IL-2 receptor induction and tacrolimus/mycophenolate mofetil-based standard treatment. Retrospective SAB analysis of pretransplant sera revealed that a considerable proportion of the recipients—approximately one third—were DSA-positive (MFI cutoff, >500). MFI levels of immunodominant DSA varied substantially, including cases with very high values. In their cohort, the authors applied 3 different algorithms for the definition of UAM, based on MFI (thresholds ranging between 3000 and 10 000), transplantation history, and/or HLA class specificity. The study revealed that a substantial number of their patients—17 to 31 subjects, depending on the UAM algorithm applied—had been transplanted across unacceptable DSA. As anticipated, outcomes were inferior among patients with SAB-UAM, but there were still patients who showed favorable transplant outcomes: 53% to 71% of the SAB-UAM patients did not experience early AMR, and a considerable proportion of patients maintained graft function over a median follow-up of 5 years. Of course, these results should be interpreted with some caution, because protocol biopsies were not available and the follow-up period was limited. One may argue that AMR was actually more frequent but clinically silent in the first years after transplantation in many patients. In this respect, a relevant finding was that kidney function was worse in SAB-UAM patients, even in the absence of biopsy-proven rejection.
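The logic of such MFI-based UAM definitions can be sketched as follows. This is an illustrative simplification only: the thresholds and the retransplantation rule below are hypothetical placeholders in the range described above, not the actual algorithms applied by Zecher et al.

```python
# Illustrative sketch of an MFI-threshold UAM classification.
# Thresholds and the retransplant rule are hypothetical examples,
# not the algorithms used in the study under discussion.

def immunodominant_mfi(dsa_mfis):
    """Return the highest MFI among detected DSA (the immunodominant DSA)."""
    return max(dsa_mfis, default=0)

def is_uam(dsa_mfis, threshold, retransplant=False, retransplant_threshold=None):
    """Classify a DSA profile as an unacceptable antigen mismatch (UAM)
    when the immunodominant DSA exceeds the MFI threshold; a stricter
    (lower) threshold may optionally apply to retransplant candidates."""
    mfi = immunodominant_mfi(dsa_mfis)
    if retransplant and retransplant_threshold is not None:
        return mfi > retransplant_threshold
    return mfi > threshold

# Example: a serum with two DSA, at MFI 1200 and 6400
profile = [1200, 6400]
print(is_uam(profile, threshold=10000))   # False
print(is_uam(profile, threshold=3000))    # True
print(is_uam(profile, threshold=10000,
             retransplant=True, retransplant_threshold=5000))  # True
```

As the example shows, the same serum can be acceptable under one algorithm and unacceptable under another, which is precisely why the number of patients classified as transplanted across UAM varied with the algorithm applied.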
The authors address an important issue: even in the presence of DSA, including high-level reactivity classified as SAB-UAM, a considerable proportion of patients will have acceptable immunologic outcomes. They also point out that alloreactivity fulfilling the criteria of the different SAB-UAM algorithms was highly prevalent among waitlisted patients (23%-33% at their center). Considering the breadth of HLA sensitization in their transplant cohort, they argue that systematic exclusion of patients on the basis of locally defined SAB-UAM, in the context of a centralized organ allocation scheme (the Eurotransplant kidney allocation system, which still relies mainly on the results of cellular assays), would have led to a considerable prolongation of waiting times.
Zecher et al8 point out the limitations associated with the use of arbitrary MFI thresholds for defining UAM. A major strength of such thresholds may be their high negative predictive value with respect to AMR occurrence, and in this respect, application of SAB-based allocation algorithms can be expected to markedly improve long-term outcomes. Positive predictive values, however, were low, and the use of such algorithms for center-specific decision making—but not for centralized allocation—would lead to an accumulation of highly sensitized patients on local waiting lists. Of course, the lower the threshold, the higher the reported level of sensitization and the larger the number of patients who will face extensive (often unjustified) waiting times.
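The predictive-value arithmetic behind this trade-off is simple to make explicit. The 2x2 counts below are invented for illustration and are not taken from the study; they merely show how a test with a low positive predictive value can still have a high negative predictive value.

```python
# Minimal sketch of the predictive-value arithmetic; the 2x2 counts
# (tp, fp, fn, tn) are hypothetical, not data from the study.

def predictive_values(tp, fp, fn, tn):
    """Positive and negative predictive value from a 2x2 table of
    UAM-positive/-negative candidates versus AMR occurrence."""
    ppv = tp / (tp + fp)  # fraction of UAM-positive patients who develop AMR
    npv = tn / (tn + fn)  # fraction of UAM-negative patients who remain AMR-free
    return ppv, npv

# Hypothetical counts: 8 of 25 UAM-positive patients developed AMR,
# 10 of 186 UAM-negative patients developed AMR.
ppv, npv = predictive_values(tp=8, fp=17, fn=10, tn=176)
print(round(ppv, 2), round(npv, 2))  # 0.32 0.95
```

Under these invented counts, a positive classification predicts AMR in only about a third of cases, while a negative classification is reassuring in roughly 95% of cases, mirroring the asymmetry the authors describe.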
Perhaps an optimal solution would be a uniform standard for the definition of UAM within (inter)national allocation systems, which, by prioritizing highly sensitized patients, aim to achieve comparable waiting times for sensitized and nonsensitized patients. Of course, this would require a high level of assay standardization between transplant laboratories and an agreed consensus on UAM thresholds, which should be selected on the basis of a careful balance between negative and positive predictive values. The major challenge remains the small proportion of patients who, because of extensive HLA sensitization, can be expected to accumulate on waiting lists. For such patients, test thresholds/ranges beyond conventional UAM definitions will need to be defined that allow for acceptable allograft outcomes under intensified immunosuppression and recipient desensitization.
Now that solid phase antibody assays have become an integral part of our diagnostic repertoire, we will have to decipher their role as a risk stratification tool in organ transplantation and define standardized thresholds that reliably guide our clinical decisions. The present study may be a valuable part of this complicated puzzle, reinforcing the need for a systematic validation of current SAB-UAM algorithms in unbiased transplant cohorts.
1. Tait BD. Detection of HLA antibodies in organ transplant recipients—triumphs and challenges of the solid phase bead assay. Front Immunol
2. Mohan S, Palanisamy A, Tsapepas D, et al. Donor-specific antibodies adversely affect kidney allograft outcomes. J Am Soc Nephrol
3. Lefaucheur C, Loupy A, Hill GS, et al. Preexisting donor-specific HLA antibodies predict outcome in kidney transplantation. J Am Soc Nephrol
4. Schwaiger E, Eskandary F, Kozakowski N, et al. Deceased donor kidney transplantation across donor-specific antibody barriers: predictors of antibody-mediated rejection. Nephrol Dial Transplant
5. Otten HG, Verhaar MC, Borst HP, et al. Pretransplant donor-specific HLA class-I and -II antibodies are associated with an increased risk for kidney graft failure. Am J Transplant
6. Marfo K, Ajaimy M, Colovai A, et al. Pretransplant immunologic risk assessment of kidney transplant recipients with donor-specific anti-human leukocyte antigen antibodies. Transplantation
7. Süsal C, Seidl C, Schönemann C, et al. Determination of unacceptable HLA antigen mismatches in kidney transplant recipients: recommendations of the German Society for Immunogenetics. Tissue Antigens
8. Zecher D, Bach C, Staudner C, et al. Analysis of Luminex-based algorithms to define unacceptable HLA antibodies in CDC-crossmatch negative kidney transplant recipients. Transplantation