Kidney transplantation provides greater long-term survival and improved quality of life when compared with dialysis. It is now considered the treatment of choice for patients with end-stage renal disease (ESRD).1-3 With the advent of calcineurin inhibitor (CNI)-based maintenance immunosuppressive therapy in the 1980s, there was a significant decline in acute rejection rates and a concurrent improvement in graft survival rates.2 However, these gains have not led to sustained improvement in long-term graft survival.4 Reasons for the lack of improvement in long-term graft survival remain unclear, and most late graft losses are attributed to either chronic allograft nephropathy or death with a functioning graft (causes of death include cardiovascular disease, infections, and malignancies).5 Calcineurin inhibitor nephrotoxicity has been linked to chronic allograft nephropathy.6 Calcineurin inhibitors also contribute to hypertension, hyperlipidemia, posttransplant diabetes, and attendant cardiovascular complications.7-9 The perception that these unintended consequences of CNIs hinder long-term graft survival has led to efforts to institute CNI minimization strategies.10
The narrow therapeutic window of CNIs makes regular monitoring necessary to ensure adequate immunosuppressive efficacy while averting the injurious side effects that curtail overall graft survival. In current practice, this is accomplished by pharmacokinetic assays that monitor trough concentrations (C0, predose) of the CNIs tacrolimus (tac) and cyclosporine (CsA). Appraisal of drug exposure by obtaining multiple blood samples to derive the area under the concentration-time curve has been shown to correlate with clinical outcomes.3 However, this multiple-sampling strategy is both expensive and inconvenient.11 C0 levels correlate poorly with drug exposure estimated by area under the curve measurements, calling into question the practice of monitoring trough concentrations. Moreover, none of these pharmacokinetic parameters is a true reflection of the biologic effects of CNIs at a cellular level.1,12
CsA and tac are CNIs that bind to the immunophilins cyclophilin and FKBP-12, respectively. These CNI-immunophilin complexes suppress T-cell activation by inhibiting calcineurin phosphatase activity, thereby preventing the nuclear translocation of the transcription factor nuclear factor of activated T cells (NFAT) and the subsequent synthesis of several key cytokines, including IL-2, IFN-γ, and granulocyte macrophage colony-stimulating factor.1,13 Pharmacodynamic assays, based on an understanding of these molecular events that underpin the therapeutic effect of CNIs, may offer a genuine assessment of the biologic consequences of these drugs.
In an observational study of 133 stable kidney transplant recipients (KTR), Sommerer et al14 demonstrated a correlation between the suppression of NFAT-regulated gene expression by CsA and the frequency of infectious and malignant complications. They noted an increased risk of recurrent infections and malignant complications in patients with less than 15% residual expression (RE) of NFAT-regulated genes. Multiple cross-sectional analyses and a few observational analyses have examined NFAT-regulated gene expression in patients on tac-based regimens and found that lower mean residual NFAT-regulated gene expression correlated with recurrent infections15 and cytomegalovirus (CMV) viremia,16,17 whereas rejection was more common with higher residual expression of NFAT-regulated genes.15,17 These findings were confirmed in a recent study that monitored patients early posttransplant.18 However, whether assays that measure NFAT-regulated gene expression can be used to guide tac dosing is not known.
Because a tac-based regimen remains dominant in transplantation and will remain so for the foreseeable future, we undertook a single-center, randomized, controlled pilot trial involving stable KTR receiving tac-based maintenance immunosuppressive therapy to assess the feasibility of implementing a real-time polymerase chain reaction (RT-PCR)-based pharmacodynamic assay to adjust tac dosing.
MATERIALS AND METHODS
Patient Recruitment and Eligibility
The study population included stable KTR 18 years or older at the University of California, San Francisco Medical Center who were maintained on triple immunosuppressive therapy with tac, mycophenolic acid, and prednisone (5 mg daily). Patients who had no prior episodes of rejection and whose 6-month protocol biopsy showed no evidence of acute cellular rejection or antibody-mediated rejection by Banff 2013 criteria were eligible for enrollment within 2 months of the protocol biopsy and were followed for the 1-year study period. Charts of patients without rejection on their protocol biopsy were reviewed, and patients who met inclusion criteria were approached for enrollment at their postbiopsy clinic visits. The first 40 patients who consented were enrolled and randomized equally to 2 arms via a random number generator in Excel. The study was approved by the institutional review board at the University of California, San Francisco Medical Center, and written informed consent was obtained from all patients at the time of enrollment.
Sample Preparation for NFAT
Heparinized peripheral blood was stimulated with 1 mL of complete RPMI 1640 containing 100 ng/mL phorbol-12-myristate-13-acetate and 5 μg/mL ionomycin (Sigma) for 3 hours at 37°C. After red cell lysis with ACK buffer (0.15 M NH4Cl, 1.0 mM KHCO3), leukocytes were lysed with 400 μL of lysis/binding buffer, and total RNA was isolated using the High Pure RNA Isolation Kit (Roche Diagnostics) according to the manufacturer's protocol. The elution volume was set to 50 μL. RNA was quantified using an ND-8000 spectrometer (NanoDrop Technologies, Thermo Fisher Scientific, USA), and 240 ng of RNA was reverse transcribed using SuperScript III reverse transcriptase and oligo-(dT) as a primer (First Strand cDNA Synthesis Kit; Invitrogen, Carlsbad, CA). At the end of cDNA synthesis, the reaction mix was diluted to 100 μL and stored at −20°C until PCR.
Gene expression was quantified using the CFX96 RT-PCR Detection System (Bio-Rad, Hercules, CA). Target sequences were amplified using commercially available RT2 qPCR Primer Sets (Qiagen, Frederick, MD) with RT2 SYBR Green qPCR Mastermix (Qiagen) according to the manufacturer's protocol. Messenger RNA expression levels were quantified using the 2^(−ΔΔCT) method, with gene expression normalized to β-actin (Qiagen).
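The 2^(−ΔΔCT) relative quantitation described above can be sketched as follows. This is a generic illustration of the method, not the exact analysis pipeline used in the study; the Ct values in the example are hypothetical.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative mRNA expression by the 2^(-ddCT) method.

    ct_target/ct_ref: threshold cycles for the gene of interest and the
    normalizer (beta-actin) in the test sample; *_cal are the same
    values in the calibrator sample (e.g., the predose draw).
    """
    d_ct_sample = ct_target - ct_ref       # normalize sample to beta-actin
    d_ct_cal = ct_target_cal - ct_ref_cal  # normalize calibrator likewise
    dd_ct = d_ct_sample - d_ct_cal
    return 2 ** (-dd_ct)

# Example: the target Ct rises by 2 cycles relative to beta-actin,
# i.e., roughly 4-fold fewer transcripts than the calibrator.
print(relative_expression(26.0, 18.0, 24.0, 18.0))  # -> 0.25
```

Because expression is normalized to β-actin in both samples, differences in RNA input between the two draws cancel out of the ratio.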
The residual gene expression after tac intake was calculated as T1.5/T0 × 100, where T0 is the adjusted number of transcripts at the tac predose time point and T1.5 is the number of transcripts 1.5 hours after drug intake. For all 3 genes, the RE was averaged and presented as the mean RE (MRE) of NFAT-regulated genes.
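A minimal sketch of the RE and MRE calculations, using hypothetical normalized transcript counts:

```python
def residual_expression(t0, t1_5):
    """RE (%) of one NFAT-regulated gene: transcripts 1.5 h after tac
    intake (T1.5) relative to the predose level (T0)."""
    return t1_5 / t0 * 100.0

def mean_residual_expression(transcript_pairs):
    """MRE: the average RE across the 3 genes (IL-2, IFN-gamma, GM-CSF)."""
    res = [residual_expression(t0, t15) for t0, t15 in transcript_pairs]
    return sum(res) / len(res)

# Hypothetical (T0, T1.5) transcript counts for the 3 cytokines:
pairs = [(1000, 300), (800, 200), (500, 200)]
print(mean_residual_expression(pairs))  # RE values 30, 25, 40 -> MRE ~31.7
```

A low MRE indicates strong suppression of NFAT-regulated cytokine transcription by tac; a high MRE indicates that post-dose expression remains close to predose levels.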
Figure 1 depicts the data gathered and the frequency of allowable adjustments per protocol in our study. The intervention (INT) arm allowed tac dose adjustments based on levels of NFAT-dependent cytokine gene expression. At enrollment, expression of 3 NFAT-dependent cytokines—IL-2, interferon-γ, and granulocyte macrophage colony-stimulating factor—was measured by RT-PCR at 2 time points, T0 (predose) and T1.5 (1.5 hours after the oral tac dose), in both arms. Residual expression of each gene was calculated as T1.5/T0 × 100, and MRE was calculated as the average RE of the 3 genes. MRE was considered a measure of the degree of suppression of NFAT-regulated cytokine genes by tac.15 In patients randomized to the INT arm, the daily dose of tac was reduced by 15% if the MRE was less than 20% and increased by 15% if the MRE was greater than 60%. For safety reasons, however, tac trough levels could not be lower than 4 μg/L or higher than 12 μg/L, and doses were adjusted based on the assay only within this range. The 15% daily dose change was based on prior studies using this adjustment.19 The MRE cutoffs were based on approximate cutoffs correlated with over- and underimmunosuppression in prior studies.14,15 In the INT arm, MRE levels were also remeasured 6 months post enrollment, and a second adjustment was made if the above criteria were met.
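The per-protocol adjustment rule for the INT arm can be expressed as a small decision function. The 15% step and the MRE/trough thresholds come from the protocol described above; the function itself is an illustrative sketch, and handling an out-of-range trough by leaving the dose unchanged is a simplification (in the study such patients were managed clinically, off protocol).

```python
def adjust_tac_dose(daily_dose_mg, mre_percent, trough_ug_per_l):
    """Sketch of the INT-arm tac adjustment rule.

    - MRE < 20%: reduce the daily dose by 15% (overimmunosuppression)
    - MRE > 60%: increase the daily dose by 15% (underimmunosuppression)
    - Troughs outside 4-12 ug/L override the assay; here we simply
      return the dose unchanged and defer to clinical management.
    """
    if not 4 <= trough_ug_per_l <= 12:
        return daily_dose_mg  # safety bounds: handled off protocol
    if mre_percent < 20:
        return daily_dose_mg * 0.85
    if mre_percent > 60:
        return daily_dose_mg * 1.15
    return daily_dose_mg

print(adjust_tac_dose(6.0, 15, 8))   # strong suppression -> 5.1 mg/d
print(adjust_tac_dose(6.0, 70, 8))   # weak suppression  -> 6.9 mg/d
print(adjust_tac_dose(6.0, 40, 8))   # in range          -> 6.0 mg/d
```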
Patients whose tac dose was adjusted based on trough levels served as controls (CTL arm). In this arm, adjustment of immunosuppression was based on tac trough levels per standard of care at our institution, with goal tac trough levels beyond 6 months of 4 to 7 μg/L. Tacrolimus trough levels and serum creatinine were measured monthly in both arms, and tac doses could be adjusted based on levels monthly in the CTL arm. Tacrolimus levels were drawn both at our institution and at Kaiser laboratories (for patients with Kaiser insurance), where they were measured by immunoassay, as well as at Quest Diagnostics and Labcorp, where LC-MS/MS is used.
Per our protocol, only patients with rejection or borderline rejection by Banff 2013 criteria at the 6-month mark undergo another protocol biopsy at 12 months. For this reason, no patients in our study qualified for a follow-up protocol biopsy; biopsies were performed on a for-cause basis only.
In addition, our institution did not routinely send donor-specific antibodies (DSAs) at the time of protocol biopsies unless there were histopathologic findings consistent with antibody-mediated rejection. We changed this practice in 2015, and DSAs were sent at the time of protocol biopsy for the last 7 patients enrolled.
The study period was 1 year from enrollment, and data on infections, hospitalizations, and rejection episodes were collected by chart review. Infections were ascertained by documentation in the chart and/or review of microbiology results. Hospitalizations were captured by chart review. Rejections were captured by chart review, confirmed by review of pathology reports, and defined by the Banff 2013 criteria.
Comparisons of demographic and transplant-specific characteristics by study arm were performed using chi-squared tests for categorical variables and Mann-Whitney U tests for medians (with interquartile ranges [IQR]). We assessed the correlation between tac trough levels and MRE in both groups at enrollment, and in the intervention group at 6 months, using the Spearman rank correlation coefficient. Unless otherwise specified, data were analyzed using an intention-to-treat approach. Statistical analyses were performed using GraphPad Prism software. All statistical tests were 2-sided, and P less than 0.05 was considered statistically significant (SS).
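For readers unfamiliar with the Spearman rank correlation used here, a minimal implementation follows. It uses the no-ties formula ρ = 1 − 6·Σd² / (n(n² − 1)), and the trough/MRE values in the example are hypothetical; the study itself used GraphPad Prism for these computations.

```python
def spearman_rho(x, y):
    """Spearman rank correlation (no-ties formula):
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical enrollment tac troughs (ug/L) and MREs (%) for 5 patients:
troughs = [9.9, 7.8, 12.1, 8.5, 10.2]
mres = [41.4, 25.2, 61.6, 30.0, 55.0]
print(spearman_rho(troughs, mres))
```

Because Spearman correlation depends only on ranks, it makes no assumption of normality, which suits small samples and skewed laboratory values.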
RESULTS

Demographics are displayed in Table 1. A total of 40 patients were enrolled, 20 randomized to the INT arm and 20 to the CTL arm. Persons in the CTL arm were younger (median age, 44 years; IQR, 36-51 years vs 56 years; IQR, 40-65 years; P = 0.025) and more likely to be male. There was no SS difference between groups with respect to race, cause of ESRD, presence of diabetes (defined as being on medication for diabetes), type of donor, CMV D+/R− status, induction regimen, or calculated panel-reactive antibody. Seven patients had DSAs sent at the time of protocol biopsy; all were negative.
There was no difference in renal function at enrollment between the 2 groups (median estimated glomerular filtration rate [eGFR] by CKD-EPI, 79 mL/min per 1.73 m2; IQR, 62.8-91.8 INT vs 79.5 mL/min per 1.73 m2; IQR, 57.3-94.0 CTL; P = 0.87) and no difference in tac trough levels at enrollment (median, 9.9 μg/L; IQR, 7.8-12.1 INT vs 8.8 μg/L; IQR, 7.3-10.0 CTL; P = 0.22). There was no difference in the total daily dose of mycophenolic acid between groups at enrollment (median, 2000 mg/d; IQR, 2000-2000 INT vs 1750 mg/d; IQR, 1500-2000 CTL) or 6 months post enrollment (2000 mg/d; IQR, 1125-2000 INT vs 1500 mg/d; IQR, 1500-2000 CTL; P = 0.45). The median MRE at enrollment was 41.4 (IQR, 25.2-61.6) in the INT arm compared with 25.3 (IQR, 20.3-37.2) in the CTL arm, a difference that was SS (P = 0.03).
Tacrolimus Dose Adjustments
Seventeen of 20 patients in the CTL arm had their tac dose adjusted during the study. Dose adjustments in the CTL arm were predominantly dose reductions (16/17), with only 1 dose increase.
In the INT arm, 8 patients had their tac dose adjusted based on residual NFAT-dependent cytokine expression at enrollment (5 increased, 3 decreased), and 9 patients had their doses adjusted at 6 months post enrollment based on the second MRE measurement (4 increased, 5 decreased), for a total of 13 patients. The median number of adjustments was 2 (IQR, 1-3) in the INT arm and 2 (IQR, 1-2) in the CTL arm, which was not statistically different.
Intervention Patients Off Protocol
Figure 2 depicts the patients in the INT arm taken off protocol. Five patients in the INT arm had their tac doses adjusted off protocol within the first 3 months for the following reasons: 3 developed infections, 1 had a tac trough greater than 12 μg/L, and 1 patient died. Four additional patients in the INT arm had their tac doses adjusted off protocol in the subsequent 3 months: 2 for tac troughs greater than 12 μg/L, 1 for neurotoxicity, and 1 for other reasons.
Five additional patients in the INT arm had their tac doses adjusted off protocol between month 6 and the end of the study: 1 for infection, 1 for a tac trough greater than 12 μg/L, and 3 for other reasons.
Six patients in the INT arm remained on protocol for the 12 months of the study.
Patients taken off protocol in the first 6 months of the study for neurotoxicity, infectious complications, or tac trough levels greater than 12 had a median enrollment MRE of 58.4 (IQR, 42.2-68.3) versus 28.4 (IQR, 13.1-67.1) in those not taken off protocol (P = 0.16). Patients taken off protocol in the last 6 months of the study had a median MRE at 6 months of 73 (IQR, 15.5-112) compared with 23.75 (IQR, 15.1-77.9) in those who remained on protocol (P = 0.71). All had MRE results that would have required their tac dose to either remain the same (MRE ≥ 20 but ≤ 60) or be increased (MRE > 60) (Figure 2).
Outcomes INT Versus CTL arm
Table 2 lists the clinical outcomes. There were 10 subjects with infectious complications in the INT group (3 BKV infections, 1 osteomyelitis, 1 epiglottitis, 1 Mycobacterium avium complex reactivation, 3 upper respiratory infections, and 1 viral gastroenteritis) and 6 in the CTL group (2 BKV infections, 1 CMV infection [in a CMV D+/R− patient], 2 urinary tract infections, and 1 viral gastroenteritis). This difference did not reach statistical significance (P = 0.33). Five patients in each arm were hospitalized over the course of the study. One patient in the INT arm was diagnosed with a basal cell skin cancer. One patient in the INT arm and 2 patients in the CTL arm were biopsied subsequent to the enrollment biopsy: 1 patient in each arm was biopsied for a rise in creatinine, and the other patient in the CTL arm underwent a 12-month protocol biopsy for unclear reasons. The patient in the INT arm met criteria for borderline rejection (Banff T2 I1). The 2 patients in the CTL arm had no rejection on biopsy (T1 I1 and T1 I0 by Banff 2013). There was 1 death in the INT arm. The lack of an SS difference in these outcomes persisted when only those patients in the INT arm who remained on protocol were included in the analyses (2 infections in the on-protocol INT group vs 6 in the CTL group at 1 year; P = 0.877).
Median eGFR (CKD-EPI) in the INT versus CTL arms at 6 and 12 months post enrollment was 79 mL/min per 1.73 m2 (IQR, 67.5-90.3) versus 82 mL/min per 1.73 m2 (IQR, 60.0-96.0) (P = 0.80) and 88 mL/min per 1.73 m2 (IQR, 72.0-92.0) versus 74.5 mL/min per 1.73 m2 (IQR, 59.8-97.0) (P = 0.33), respectively. Median tac trough levels at 12 and 18 months posttransplant (6 and 12 months post enrollment) in the INT versus CTL arms were 7.9 μg/L (IQR, 6.5-9.6) versus 6.6 μg/L (IQR, 5.8-9.4) (P = 0.42) and 6.9 μg/L (IQR, 5.7-9.6) versus 7.2 μg/L (IQR, 5.3-9.4) (P = 0.67), respectively. These findings held when only those patients in the INT arm who remained on protocol were included in the analyses (data not shown).
There was no correlation between baseline MRE and clinical variables, including age at transplant (P = 0.49) and eGFR (P = 0.25), by Spearman rank correlation. There was also no correlation between tac trough level and MRE at enrollment (P = 0.73) or at 6 months post enrollment (P = 0.29) (Figure 3).
There was no SS difference in MRE at enrollment between those who did and did not develop infectious complications during the study (median MRE, 36.4%; IQR, 22.0-61.3 with infections vs 28.0%; IQR, 21.8-50.0 without; P = 0.2). There was likewise no SS difference in tac trough levels at enrollment (median, 8.85; IQR, 7.3-11.6 vs 8.95; IQR, 7.1-11.6; P = 0.79).
As another exploratory analysis, we were interested in comparing MRE levels at enrollment and infectious complications in those KTR who had no tac adjustments in the first 6 months.
In KTR whose tac dose was not adjusted in the first 6 months (n = 8 in INT arm and 5 in CTL arm), KTR with infections had a statistically lower MRE at enrollment compared with those without infections (MRE, 21.8; IQR, 21.0-26.6 vs 39; IQR, 25.9-55.1; P = 0.049). The same was not true for tac trough levels (P = 0.8). The differences at 1 year trended toward lower MRE for those with infections (MRE, 23.8; IQR, 21.0-26.6 vs 53.8; IQR, 28.4-59.0) but it did not reach statistical significance (P = 0.2) (Figure 4).
DISCUSSION

In our single-center, randomized, controlled feasibility pilot trial involving stable KTR receiving tac-based maintenance immunosuppressive therapy, we sought to assess the feasibility of implementing an RT-PCR-based pharmacodynamic assay to adjust the dosing of tac. We found that in patients who were 6 months posttransplant with a protocol biopsy showing no evidence of rejection, adjusting tac based on the NFAT-dependent cytokine assay appears feasible, with no SS difference in infectious complications, hospitalizations, malignancies, or rejections compared with adjusting it based on tac trough levels. It is important to note that, although not SS, there were more infections in the INT arm. Although there was no SS difference in MRE at enrollment between those who did and did not develop infectious complications during the study, patients with MRE less than 20% in the INT arm had their tac doses immediately reduced per protocol, and 17 of 20 patients in the CTL arm also had their tac doses reduced, so that association may have been masked by the intervention. Interestingly, in an observational analysis of the small group of patients whose tac dose was not adjusted in the first 6 months (n = 8 in the INT arm and 5 in the CTL arm), KTR with infections had a statistically lower MRE at enrollment than those without infections (MRE, 21.8; IQR, 21.0-26.6 vs 39; IQR, 25.9-55.1; P = 0.049). The same was not true for tac trough levels (P = 0.80). The differences at 1 year trended toward lower MRE for those with infections but did not reach statistical significance (P = 0.20). This finding suggests that a lower MRE in our study was also associated with infectious complications.
A significant number of patients in the INT arm had their tac adjusted off protocol because of their clinical condition. Of note, all patients taken off protocol by a clinician had their tac dose reduced, when per protocol it should have remained the same or been increased. In other words, a high MRE was not a reliable marker of underimmunosuppression and rejection. Of the 5 patients with an enrollment MRE greater than 60 in our study, none had a rejection episode. Although per protocol those with MRE greater than 60% in the INT arm would have had their tac dose increased, which may have prevented an episode, 3 of 5 were taken off protocol and ultimately had their tac doses reduced. In fact, despite a wide range of MRE in both the INT and CTL groups, only 1 patient suffered a borderline rejection episode. That patient's MRE at the 1-year mark (1 month before her borderline rejection episode) was 58%.
Analyzing the group of patients in the INT arm who were taken off protocol for infections, neurotoxicity, or tac trough levels greater than 12, we found a higher median MRE of 58.4 (IQR, 42.2-68.3) versus 28.4 (IQR, 13.1-67.1) in those not taken off protocol (P = 0.16). Among patients taken off protocol in the last 6 months of the study, the median MRE was 73 (IQR, 15.5-112) compared with 23.75 (IQR, 15.1-77.9) in those who remained on protocol (P = 0.71). All had MRE results that would have required their tac dose to either remain the same (MRE ≥ 20 but ≤ 60) or be increased (MRE > 60). Although not SS, there is a clear trend toward a higher MRE in those taken off protocol. Thus, in our study, a higher MRE did not reflect inadequate cytokine suppression putting patients at risk for rejection. This is in line with other studies using this assay, which found a well-defined low MRE cutoff associated with infection and malignancy but a less well-defined and more variable range of MRE cutoffs associated with rejection.14,15,18 In fact, increasing tac based on a high MRE may have been responsible for more infectious complications in the INT arm. Although other assays in development correlate more strongly with rejection episodes,20,21 very few, if any, assays predict overimmunosuppression.
There are several limitations to our study. The most important is the small sample size: with just 40 individuals enrolled, the study was not powered to show a real difference between adjusting immunosuppression based on pharmacodynamics versus pharmacokinetics. In addition, despite randomization, the 2 groups were not well matched for sex or age, and the INT group started with a statistically significantly higher MRE than the CTL group. All of these factors likely influenced the results; for example, the fact that the INT group was older may have contributed to its higher incidence of infections. Further, because we enrolled patients at 6 months who had had no rejection episodes since transplant and whose protocol biopsies showed no rejection, our patients were by design already at low risk for rejection at enrollment, so the lack of correlation between this pharmacodynamic assay and rejection in our cohort may reflect the study design. Further, the high rate of patients in the INT arm taken off protocol, although clinically appropriate, likely limited the findings of the study. This might have been avoided had we measured the MRE of cytokine levels at more than just 2 time points, allowing a more accurate assessment of immune status and allowing the investigators to adjust tac on a monthly basis. Finally, although we found no correlation between tac trough levels and MRE, other studies that measured peak tac levels did find a correlation between peak levels and MRE.15,18 In clinical practice, only tac trough levels are measured, so we sought to keep the study as close as possible to real clinical management of this patient population.
In conclusion, our study found that adjusting tac based on the pharmacodynamic assay measuring NFAT-regulated gene expression is feasible to implement and may be of most value in the setting of strong cytokine inhibition, where the assay reflects overimmunosuppression better than trough levels alone. This is notable because the main limitation of tac-based therapy is not lack of efficacy but persistent concerns regarding toxicities and complications from unavoidable overimmunosuppression. Any assay that can accurately identify impaired immunity before an infectious complication occurs is a worthwhile endeavor. Studies powered with enough subjects to assess the safety of using quantitative analysis of NFAT-regulated gene expression to lower tac dosing are needed. In addition, future studies are needed to estimate the ideal MRE cutoff for lowering tac and to investigate the full potential of NFAT-dependent cytokine expression for pharmacodynamic monitoring.