Technical Notes

Deep Learning Approach for Generating MRA Images From 3D Quantitative Synthetic MRI Without Additional Scans

Fujita, Shohei MD*,†; Hagiwara, Akifumi MD, PhD*; Otsuka, Yujiro BSc*,‡; Hori, Masaaki MD, PhD*,§; Takei, Naoyuki MS; Hwang, Ken-Pin PhD; Irie, Ryusuke MD*,†; Andica, Christina MD*; Kamagata, Koji MD, PhD*; Akashi, Toshiaki MD, PhD*; Kunishima Kumamaru, Kanako MD, PhD*; Suzuki, Michimasa MD, PhD*; Wada, Akihiko MD, PhD*; Abe, Osamu MD, PhD; Aoki, Shigeki MD, PhD*

doi: 10.1097/RLI.0000000000000628

Multiparametric imaging techniques that enable simultaneous acquisition of quantitative T1, T2, and proton density mapping within a single acquisition are increasingly being investigated.1–4 These quantitative maps can be used to synthesize multiple inherently aligned contrast-weighted images,5,6 potentially reducing lengthy clinical MR acquisition times. One of these multiparametric mapping techniques with subsequent contrast-weighted image synthesis is quantitative synthetic magnetic resonance imaging (MRI).7–9 Recently, the 3D-QALAS (3D-quantification using an interleaved Look-Locker acquisition sequence with T2 preparation pulse) sequence10 was introduced for brain imaging; it enables both high-resolution 3D acquisition of the whole brain and postacquisition synthesis of multiple contrast-weighted images from a single 3D scan. These images include T1-weighted, T2-weighted, fluid-attenuated inversion recovery, double inversion recovery, and phase-sensitive inversion recovery images. 3D-QALAS has been reported to show high repeatability in terms of morphometry11 and relaxometry.12

However, several types of images are difficult to generate from 3D quantitative synthetic MRI data. These include magnetic resonance angiography (MRA) images, which are widely used clinically to accurately evaluate vascular anatomy and are essential for the management of intracranial aneurysms.13 Intracranial aneurysms have a prevalence of up to 3% to 5% in the general adult population, and their rupture leads to fatal subarachnoid hemorrhage.14,15 For detecting intracranial aneurysms, time-of-flight (TOF) MRA, based on the inflow effect,16 is typically used because of its noninvasive nature and high accuracy and specificity.17,18 Despite its utility, TOF-MRA is not always included in clinical scanning protocols because of the additional scan time required; clinicians must decide before the scan whether to add a TOF-MRA sequence to the protocol, which may leave unexpected vascular abnormalities undetected.

It should be stressed that the inflow effect is not limited to TOF-MRA. Although most sequences are not designed to emphasize this effect, both spin-echo and gradient-echo sequences inherently contain inflow-effect information. It is therefore reasonable to assume that there is a consistent, albeit subtle, signal intensity difference between blood vessels and the background that can be extracted from images acquired with these sequences without additional scanning, as long as the background tissue signal can be firmly suppressed and the blood vessel signal isolated.

Deep learning using convolutional neural networks could be an effective approach for specifically extracting blood vessel signals. Deep learning is a subfield of machine learning19 that has been increasingly used in medical imaging.20 Applications of deep learning have demonstrated promising results in neuroradiology, including segmentation,21,22 lesion detection,23–25 image generation,26–28 and differential diagnosis.29 A previous study used a generative adversarial network-based technique to synthesize MRA images from conventional T1-weighted and T2-weighted images.27 This approach synthesizes MRA images from existing MR image databases that lack MRA contrast and might be useful in retrospective studies. However, because the contrast-weighted images are not innately aligned, misregistration could lead to poorer output image quality.

Here, we propose a deep learning approach for generating MRA images using only the data acquired with 3D quantitative synthetic MRI, without any additional scan time. The purpose of this study was to develop and evaluate the feasibility of a deep learning algorithm for generating MRA images from the 3D-QALAS sequence in both healthy volunteers and patients with known intracranial aneurysms.

MATERIALS AND METHODS

Participant Data

To train and validate the proposed network, 11 neurologically healthy volunteers without a history of a major medical condition or vascular disorder (2 women and 9 men; mean age, 27.4 ± 4.2 years; age range, 20–34 years) were included in the study. This study was approved by the local institutional review board, and informed consent was obtained from all healthy volunteers before inclusion in the study.

To evaluate delineation of intracranial vessels and aneurysms, 4 asymptomatic patients with known intracranial aneurysms (1 aneurysm in each participant; 1 woman and 3 men; mean age, 69.7 ± 6.1 years; age range, 61–76 years) were included in this study. Informed consent was not required for these patients because of the retrospective nature of the analysis.

MRI Acquisition

All subjects were scanned on a 3-T scanner (Discovery MR750w; GE Healthcare, Milwaukee, WI) with a 32-channel head coil. Time-of-flight MRA and 3D-QALAS sequences were acquired in the same session for each participant. The 3D-QALAS sequence is based on multiacquisition 3D gradient echo, with 5 acquisitions equally spaced in time, interleaved with a T2 preparation pulse and an inversion pulse.7 A total of 5 original images were therefore produced for each slice (Fig. 1). The acquisition plane of 3D-QALAS was set parallel to that of the TOF-MRA. The scan parameters of the 3D-QALAS were as follows: axial acquisition; TR/TE, 7.6/3.0 milliseconds; inversion delay times, 100, 1000, 1900, and 2800 milliseconds; T2-prep echo time, 100 milliseconds; field of view (FOV), 256 × 256 × 146 mm; matrix size, 256 × 256 × 146 (reconstruction matrix, 512 × 512 × 292); flip angle, 4 degrees; receiver bandwidth, 122 Hz/pixel; parallel imaging factor, 2 (phase direction); acquisition time, 11 minutes 11 seconds. The scan parameters of the TOF-MRA were as follows: axial acquisition; TR/TE, 16/2.7 milliseconds; FOV, 200 × 180 mm; matrix size, 416 × 224 (reconstruction matrix, 512 × 512); section thickness, 1.0 mm (0.5-mm reconstruction thickness); flip angle, 18 degrees; receiver bandwidth, 162.7 Hz/pixel; acquisition time, 4 minutes 31 seconds. Images obtained from the 3D-QALAS sequence were processed with a prototype version 0.45.14 of the SyMRI software (SyntheticMR, Linköping, Sweden) to synthesize T1 maps and 3D synthetic T1-weighted images (postprocessing TR, 500 milliseconds; TE, 10 milliseconds) to show an example of synthetic MRI processing.
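For illustration, the acquisition parameters above can be kept as a structured configuration for downstream scripting; a minimal sketch in Python, where the field names are our own and only the values come from the protocols above:

```python
# Illustrative structuring of the two acquisition protocols; field names are
# assumptions, values are copied from the parameters listed above.
QALAS_PROTOCOL = {
    "orientation": "axial",
    "tr_te_ms": (7.6, 3.0),
    "inversion_delays_ms": (100, 1000, 1900, 2800),
    "t2_prep_te_ms": 100,
    "fov_mm": (256, 256, 146),
    "matrix": (256, 256, 146),        # reconstruction matrix 512 x 512 x 292
    "flip_angle_deg": 4,
    "bandwidth_hz_per_px": 122,
    "parallel_imaging_factor": 2,     # phase direction
    "acquisition_time": "11:11",
}
TOF_PROTOCOL = {
    "orientation": "axial",
    "tr_te_ms": (16, 2.7),
    "fov_mm": (200, 180),
    "matrix": (416, 224),             # reconstruction matrix 512 x 512
    "section_thickness_mm": 1.0,      # 0.5 mm reconstruction thickness
    "flip_angle_deg": 18,
    "bandwidth_hz_per_px": 162.7,
    "acquisition_time": "4:31",
}
```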

FIGURE 1:
A representative slice showing the 5 original images of a 3D-QALAS sequence. The 3D-QALAS sequence is based on multiacquisition 3D gradient echo, with 5 acquisitions equally spaced in time, interleaved with a T2 preparation pulse and an inversion pulse. Acq, acquisition.

Deep Learning Framework

The proposed network architecture for generating MRA from 3D-QALAS images is illustrated in Figure 2. We combined a single convolution and a U-net model to design a mapping function that converts the 5 raw 3D-QALAS images into the corresponding MRA image. The model was trained by feeding image slices of 3D-QALAS with the corresponding TOF-MRA from each training subject, slice by slice. Once trained, the network can be applied to 3D-QALAS data from new subjects to generate MRA.

FIGURE 2:
An illustration of the proposed network architecture for generating MRA images from 3D-QALAS sequence. The network is designed to output the weighted average of a single convolution of 3 × 3 × 3 at 5 channels and pixel-wise classification results by the U-net. The single convolutional part determines the 30 feature patterns, which are then weighted by the classification part. The pixel-wise classification results are softmax activated so that all classification scores add up to 1. Conv, convolution; ReLU, rectified linear units.

Network Architecture

The proposed network was carefully designed to minimize the risk of generating false objects or erasing true ones. In particular, each pixel value of the output image is generated from the corresponding 3 × 3 × 3 (27) pixels of the input images through only one convolution, minimizing the possibility of producing false image features. The network consists of 2 parallel subnets: the single convolutional subnet (upper half of Fig. 2) and the classification subnet (lower half of Fig. 2). The single convolutional subnet convolves the 5 raw images with a 3 × 3 × 3 kernel, generating 30 feature patterns for each pixel, which are then weighted by the classification subnet to create the final output. The classification subnet uses a U-net architecture, and its pixel-wise classification results are softmax activated so that all classification scores add up to 1. The final output is the average of the single-convolution signal intensities weighted by the pixel-wise classification results of the U-net. Overall, the network is structured such that each pixel of the output image is synthesized by a linear combination of a localized group of corresponding pixels of the input image; this was intended to keep multiple convolutions from creating factitious lesions. Such constraints are particularly important for medical images, since an artifact in medical image synthesis may lead to misdiagnosis in patient care.
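The description above can be made concrete with a short sketch. The following is a minimal PyTorch-style rendering of the two-subnet design (the authors implemented their network in Chainer, and the depth and channel widths of the U-net here are illustrative assumptions; only the 5-channel input, the single 3 × 3 × 3 convolution yielding 30 feature patterns, and the softmax-weighted averaging follow the text):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet3D(nn.Module):
    """Illustrative stand-in for the classification U-net (sizes assumed)."""
    def __init__(self, in_ch=5, out_ch=30):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool3d(2)
        self.mid = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(16, out_ch, 3, padding=1))

    def forward(self, x):
        e = self.enc(x)                      # skip connection source
        u = self.up(self.mid(self.down(e)))  # encode, then decode back up
        return self.dec(torch.cat([u, e], dim=1))

class QalasToMRA(nn.Module):
    def __init__(self):
        super().__init__()
        # Single-convolution subnet: each output voxel is a linear combination
        # of the corresponding 3 x 3 x 3 (27) voxels of the 5 input images.
        self.single_conv = nn.Conv3d(5, 30, kernel_size=3, padding=1)
        self.classifier = TinyUNet3D(5, 30)

    def forward(self, x):                                  # x: (N, 5, D, H, W)
        feats = self.single_conv(x)                        # 30 feature patterns
        weights = F.softmax(self.classifier(x), dim=1)     # sum to 1 per voxel
        return (feats * weights).sum(dim=1, keepdim=True)  # weighted average
```

The key design point is visible in the last line: the output is always a softmax-weighted average of single-convolution features, so the deeper U-net only chooses among locally computed linear combinations rather than synthesizing intensities itself.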

Network Training Procedure

Data from the 11 healthy subjects were used as training data. In the training procedure, the 3D volume data of the 5 raw 3D-QALAS images and the reference TOF-MRA were fed into the architecture as stacks of 2D axial images. Network training was performed on randomly selected patches of 128 × 128 × 64 voxels at 5 channels, cropped from the original images. Patches were fed into the network after data augmentation consisting of random 90-degree rotations and random flips along the x- or y-axis. Furthermore, the entire signal within a set of images was multiplied by a factor of 1 + N(0, 0.01), where N(0, 0.01) is a random sample from a normal distribution with zero mean and variance 0.01. The mean squared error against the corresponding TOF-MRA patch was used as the loss function; 16 edge voxels at both ends of the x- and y-axes and 1 edge voxel at both ends of the z-axis were ignored in this calculation. In addition, to reduce spatial deviation between the output and target images, the mean squared error of low-resolution images, created from both the output and target images by 3D max pooling with a kernel size of 4, was added to the loss function; for these low-resolution images, 4 edge voxels at both ends of the x- and y-axes and 1 voxel at both ends of the z-axis were ignored. The batch size was set to 1 for the first 150 epochs and to 20 thereafter. The optimizer was Adam,30 with α = 0.0002, β1 = 0.9, and β2 = 0.999.
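As a sketch, assuming patches are laid out as (batch, channel, z, y, x) tensors with x = y = 128 and z = 64, the augmentation and composite loss described above could look like this (PyTorch-style; the authors used Chainer, and scaling input and target together is our reading of "a set of images"):

```python
import torch
import torch.nn.functional as F

def augment(x, y):
    # Random in-plane 90-degree rotation and random flip along the x- or
    # y-axis, then global scaling by 1 + N(0, 0.01) (variance 0.01 -> std 0.1).
    k = int(torch.randint(0, 4, (1,)))
    x, y = torch.rot90(x, k, dims=(-2, -1)), torch.rot90(y, k, dims=(-2, -1))
    if torch.rand(1) < 0.5:
        ax = int(torch.randint(-2, 0, (1,)))  # -2: y-axis, -1: x-axis
        x, y = torch.flip(x, dims=(ax,)), torch.flip(y, dims=(ax,))
    scale = 1.0 + 0.1 * torch.randn(1).item()
    return x * scale, y * scale

def training_loss(output, target):
    # Full-resolution MSE, ignoring 16 edge voxels on x/y and 1 on z.
    crop = (..., slice(1, -1), slice(16, -16), slice(16, -16))
    loss = F.mse_loss(output[crop], target[crop])
    # Low-resolution MSE on 3D max-pooled images (kernel 4) to penalize
    # spatial deviation, ignoring 4 edge voxels on x/y and 1 on z.
    out_lr, tgt_lr = F.max_pool3d(output, 4), F.max_pool3d(target, 4)
    crop_lr = (..., slice(1, -1), slice(4, -4), slice(4, -4))
    return loss + F.mse_loss(out_lr[crop_lr], tgt_lr[crop_lr])

# Optimizer settings from the text (alpha corresponds to the learning rate):
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
```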

Implementation

All model training was performed on a workstation with 64 GB of CPU memory, a Xeon E5-2670 v3 CPU (Intel), and a TITAN Xp graphics processing unit (NVIDIA, Santa Clara, CA). The neural network was coded in Python 3.6 (https://www.python.org/downloads/release/python-360/) using the Chainer 5.0.0 deep learning framework (http://chainer.org/).

Evaluation of the Proposed Network

The performance of the proposed network was evaluated using 5-fold cross validation. In each fold, 9 of the 11 scans were sequentially selected for training and the remaining 2 scans were used as test data; data from all 11 scans were used as test data at least once. Hyperparameters, including the number of epochs, were kept the same across all folds.

Image Quality Assessment

For comparison, we also generated MRA images using a simple arithmetic approach described previously.31 In this linear model, the output voxel value is calculated, pixel by pixel, as a linear combination of the corresponding 5 input voxel values, defined as follows:

$$S_{\text{output}} = \sum_{i=1}^{5} \alpha_i S_i + b$$

where $S_{\text{output}}$ is the signal intensity of the output, $S_i$ is the signal intensity of the $i$th input image, $\alpha_i$ is the $i$th coefficient, and $b$ is the intercept. The $\alpha_i$ and $b$ were adjusted over the entire dataset to minimize the signal difference, summed over all voxels, between the output and TOF-MRA images. The linear model therefore consists of 6 constants that convert the 5 input raw images into a TOF-like image through a linear combination. The MRA images generated by the proposed deep learning network and by the linear combination approach are hereafter denoted DL-MRA and linear-MRA images, respectively.
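Under the definitions above, fitting the 6 constants reduces to ordinary least squares over all voxels in the dataset. A minimal sketch, assuming the raw images are flattened to an array of shape (5, n_voxels):

```python
import numpy as np

def fit_linear_mra(raw, tof):
    # raw: (5, N) input signal intensities; tof: (N,) target TOF-MRA intensities.
    A = np.column_stack([raw.T, np.ones(raw.shape[1])])  # design matrix [S1..S5, 1]
    coef, *_ = np.linalg.lstsq(A, tof, rcond=None)
    return coef[:5], coef[5]                             # alphas, intercept b

def apply_linear_mra(raw, alphas, b):
    # S_output = sum_i alpha_i * S_i + b, evaluated voxel by voxel.
    return alphas @ raw + b
```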

Quantitative Evaluation

To evaluate the contrast-to-noise ratio (CNR), an axial slice through the internal carotid artery (ICA) at the cavernous level was selected for each subject. In each slice, 4 circular regions of interest (ROIs) with a diameter of 2 mm were placed: 2 within the bilateral ICA and 2 in the bilateral temporal white matter, which was considered the background. The mean and standard deviation of the signal intensities were recorded for these ROIs. Noise was defined as the standard deviation of the signal intensity in the background. Based on these data, the CNR was calculated separately for the right and left hemispheres as follows:

$$\text{CNR} = \frac{SI_{\text{ICA}} - SI_{\text{background}}}{\text{Noise}}$$

where $SI_{\text{ICA}}$ and $SI_{\text{background}}$ represent the mean signal intensities of the ICA and the background, respectively. Statistical comparisons were performed after pooling all slice measurements from all subjects in each group (ie, linear-, DL-, and TOF-MRA).
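A minimal sketch of this computation, assuming the ROI pixel values have already been extracted:

```python
import numpy as np

def cnr(ica_roi, background_roi):
    # Noise is defined as the standard deviation of the background ROI signal.
    noise = np.std(background_roi)
    return (np.mean(ica_roi) - np.mean(background_roi)) / noise
```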

The MRA image quality for each subject was quantified on 2D maximum intensity projections of the MRA images in the craniocaudal direction using 3 metrics: peak signal-to-noise ratio (PSNR),32 structural similarity index measurement (SSIM),32,33 and high frequency error norm (HFEN).34 Higher PSNR, higher SSIM, and lower HFEN indicate higher image quality. Peak signal-to-noise ratio is a standard image quality metric defined as the ratio of the peak intensity value of the reference image to the root mean square reconstruction error relative to the reference image. Several studies have shown that PSNR is not well matched to the human visual system35,36 and is by itself insufficient to assess the quality of a medical MR image;37 hence, we also measured SSIM and HFEN. Structural similarity index measurement is a perception-based model that is more consistent with human visual evaluation; it measures the structural similarity of 2 images rather than estimating their absolute errors. High frequency error norm has been proposed to quantify the reconstruction quality of edges and fine features in MRI.34 In HFEN, a rotationally symmetric Laplacian of Gaussian filter is used to capture the high-frequency information within the image, and HFEN is calculated as the l2 norm of the difference between the features extracted from the reference and measured images. We used a filter kernel size of 15 × 15 pixels with a standard deviation of 1.5 pixels.34 Peak signal-to-noise ratio, SSIM, and HFEN were calculated for DL-MRA and linear-MRA, each relative to TOF-MRA.
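The three metrics can be computed on the 2D maximum intensity projections as in the following sketch (scikit-image provides PSNR and SSIM; for HFEN we apply a Laplacian of Gaussian filter, choosing scipy's truncate parameter so that the kernel support is about 15 × 15 pixels at a standard deviation of 1.5 pixels):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def hfen(reference, measured, sigma=1.5, kernel=15):
    # Laplacian of Gaussian features; radius 7 gives a 15-pixel kernel support.
    truncate = (kernel // 2) / sigma
    log_ref = gaussian_laplace(reference, sigma, truncate=truncate)
    log_meas = gaussian_laplace(measured, sigma, truncate=truncate)
    return np.linalg.norm(log_meas - log_ref)  # l2 norm of the feature error

def evaluate_mip(reference_mip, measured_mip):
    rng = reference_mip.max() - reference_mip.min()
    return {
        "PSNR": peak_signal_noise_ratio(reference_mip, measured_mip, data_range=rng),
        "SSIM": structural_similarity(reference_mip, measured_mip, data_range=rng),
        "HFEN": hfen(reference_mip, measured_mip),
    }
```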

Qualitative Evaluation

To assess anatomical variation among the intracranial arteries of the subjects, a board-certified radiologist (A.H.) with 10 years of experience in neuroradiology examined the TOF-, DL-, and linear-MRA axial images of each subject. Arteries that were invisible on all MRA images were considered deficient. After excluding these deficient branches from further evaluation, the overall image quality and branch visualization, each rated on a 5-point Likert scale, were independently and blindly scored by 2 board-certified radiologists (K.K. and C.A.), with 12 and 9 years of experience in neuroradiology, respectively. The evaluated branches were as follows: ICA, siphon and petrous portions; ophthalmic artery (OA); anterior cerebral artery, A1 and A2; middle cerebral artery, M1 and M2; vertebral artery; basilar artery; posterior inferior cerebellar artery (PICA); anterior inferior cerebellar artery; superior cerebellar artery; and posterior cerebral artery, P1 and P2. Right and left sides were evaluated together. The 5-point Likert scale was defined as follows: overall image quality (1, nondiagnostic; 2, poor; 3, moderate; 4, good; 5, excellent) and visibility of branches (1, not visible; 2, poor visibility; 3, moderate visibility; 4, good visibility; 5, excellent visibility).38

Statistical Analysis

All statistical analyses were performed using R version 3.3.0 (R Core Team, 2016). Paired t tests were used to compare PSNR, SSIM, and HFEN between DL-MRA and linear-MRA. The overall image quality, visual scores of each branch, and CNR were compared among DL-MRA, linear-MRA, and TOF-MRA using pairwise Dunn-Bonferroni post hoc tests when the Friedman test showed a significant difference. Interobserver agreement was calculated using squared weighted Cohen's kappa (<0.20, poor; 0.21 to 0.40, fair; 0.41 to 0.60, moderate; 0.61 to 0.80, good; and 0.81 to 1.00, perfect agreement).39 P values less than 0.05 were considered statistically significant.
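The analyses were performed in R; for readers working in Python, an equivalent sketch (variable layouts assumed; note that scikit-posthocs' Dunn test treats the groups as independent samples, which only approximates the SPSS-style Dunn-Bonferroni procedure applied after a Friedman test):

```python
from scipy.stats import ttest_rel, friedmanchisquare
from sklearn.metrics import cohen_kappa_score
import scikit_posthocs as sp

def compare_paired(dl_metric, linear_metric):
    # Paired t test, eg, PSNR of DL-MRA vs linear-MRA for the same subjects.
    return ttest_rel(dl_metric, linear_metric)

def compare_three(dl, linear, tof, alpha=0.05):
    # Friedman test across the 3 methods; Dunn test with Bonferroni
    # adjustment as the post hoc comparison only if significant.
    _, p = friedmanchisquare(dl, linear, tof)
    posthoc = (sp.posthoc_dunn([dl, linear, tof], p_adjust="bonferroni")
               if p < alpha else None)
    return p, posthoc

# Interobserver agreement: squared (quadratic) weighted Cohen's kappa, eg,
# kappa = cohen_kappa_score(rater1_scores, rater2_scores, weights="quadratic")
```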

RESULTS

Deep learning MRA images were successfully obtained from the 3D-QALAS raw images in all subjects. Two subjects had a deficient right PICA, and one subject had deficient bilateral anterior inferior cerebellar arteries. The left PICA was outside the FOV in 3 subjects. Examples of DL-MRA, linear-MRA, and TOF-MRA images from a representative healthy volunteer are shown in Figure 3. Although the training stage took approximately 32 hours, once the network was trained, our proposed algorithm required only about 5 minutes to generate a whole-brain MRA from 3D-QALAS raw images. Figure 4 shows a representative DL-MRA image presented with a synthetic T1-weighted image and T1 map, all obtained from a single scan. Deep learning MRA, linear-MRA, and TOF-MRA images of representative patients with intracranial aneurysms are shown in Figures 5 and 6, and Supplementary Digital Content, Figures 1, http://links.lww.com/RLI/A498 and 2, http://links.lww.com/RLI/A498. The mean size ± standard deviation of the aneurysms was 3.7 ± 0.4 mm as measured on TOF-MRA. The aneurysms were located as follows: 1 in the anterior cerebral artery, 1 in the middle cerebral artery, and 2 in the ICA. Overall, DL-MRA successfully visualized the aneurysms in all 4 patients with an image presentation similar to TOF-MRA, whereas on linear-MRA the contours of the aneurysms were obscured or the aneurysms appeared smaller.

FIGURE 3:
Examples of generated MRA images from the linear combination approach (A), DL-based approach (B), and TOF-MRA (C). The first row shows representative axial slices, and the second row shows the corresponding transverse maximum intensity projections. The overall image quality was rated as 2, 5, and 5 by both neuroradiologists for the linear-MRA, DL-MRA, and TOF-MRA, respectively. DL, deep learning; TOF, time-of-flight.
FIGURE 4:
Representative deep learning MRA image (middle column) presented with synthetic T1WI (left column) and T1 maps (right column). The first row shows representative axial view, and the second row shows the coronal view. T1WI, T1-weighted image; DL, deep learning.
FIGURE 5:
A 61-year-old man with a right internal carotid artery aneurysm. Images based on linear combination approach (A), DL approach (B), and TOF-MRA (C). White arrows show the aneurysm. The first row shows representative axial slices, and the second row shows the corresponding coronal maximum intensity projections. Note that the aneurysm is poorly demarcated on (A). DL, deep learning; TOF, time-of-flight.
FIGURE 6:
A 66-year-old man with a right middle cerebral artery aneurysm. Images based on linear combination approach (A), DL approach (B), and TOF-MRA (C) are shown. White arrows show the aneurysm. The first row shows representative coronal slices, and the second row shows the corresponding maximum intensity projections. Note that the aneurysm looks smaller on (A) than (B) and (C). DL, deep learning; TOF, time-of-flight.

Quantitative Evaluation

The CNRs of the DL-MRA, TOF-MRA, and linear-MRA images were significantly different from one another (76.9 ± 27.3, 31.3 ± 9.2, and 6.3 ± 4.1, respectively; P < 0.001). The PSNR of the DL-MRA was significantly higher than that of the linear-MRA (35.3 ± 0.5 vs 34.0 ± 0.5, P < 0.001). The SSIM of the DL-MRA was significantly higher than that of the linear-MRA (0.93 ± 0.02 vs 0.82 ± 0.02, P < 0.001). The HFEN of the DL-MRA was significantly lower than that of the linear-MRA (0.61 ± 0.08 vs 0.86 ± 0.05, P < 0.001).

Qualitative Evaluation

The results of the image quality evaluation are presented in Table 1. The overall image quality of DL-MRA was comparable to that of TOF-MRA (4.2 ± 0.7 vs 4.4 ± 0.7, P = 0.99). The overall image quality of linear-MRA was significantly lower than that of both DL-MRA and TOF-MRA (P < 0.001). The Friedman test showed significant differences in the visibility of all arterial branches across DL-MRA, TOF-MRA, and linear-MRA. Post hoc pairwise comparisons showed that both DL-MRA and TOF-MRA provided significantly better visualization of the intracranial branches than linear-MRA for all branches (P < 0.05). The one exception was the OA, for which TOF-MRA was superior to both DL-MRA and linear-MRA (2.3 ± 1.2 vs 1.2 ± 0.5 vs 1.1 ± 0.2, respectively; P < 0.001) and for which there was no difference between DL-MRA and linear-MRA (P = 0.49) (Supplementary Digital Content, Fig. 3, http://links.lww.com/RLI/A498). No other significant differences in branch visibility were identified between DL-MRA and TOF-MRA. Interobserver agreement was good for DL-MRA, TOF-MRA, and linear-MRA, with squared weighted Cohen's kappa values of 0.73, 0.69, and 0.72, respectively.

TABLE 1:
Qualitative Metrics for Evaluating Intracranial Arteries Among DL-MRA, Linear-MRA, and TOF-MRA

DISCUSSION

In this work, we implemented a deep learning approach for generating MRA images from 3D-QALAS raw images to visualize intracranial arteries. This was achieved without any additional scanning and produced MRA with overall image quality comparable to TOF-MRA. In addition, a deep learning architecture trained only on healthy volunteer data was applicable to patients with intracranial aneurysms, demonstrating the robustness and flexibility of the proposed network. To our knowledge, our study is the first to generate MRA images from 3D-QALAS raw data.

In our study, the proposed DL-MRA showed significantly better quantitative metrics of image quality (ie, SSIM, PSNR, and HFEN) than a classical linear combination approach. In addition, DL-MRA images had overall image quality and visualization of intracranial vessels comparable to TOF-MRA for all branches except the OA. If these results are confirmed and validated in future studies with larger numbers of participants scanned on different scanners, DL-MRA may serve as a screening tool to detect lesions of the major intracranial arteries without additional scan time. However, visualization of small arterial segments, namely the OA, was significantly inferior to TOF-MRA. In settings such as preoperative evaluation of aneurysms or examination of patients with subarachnoid hemorrhage, TOF-MRA would still need to be acquired to obtain detailed morphological information on small vessels and aneurysms.

Our deep learning approach with a 3D-QALAS scan simultaneously provides an MRA image and other contrast-weighted images, as well as T1, T2, and proton density maps of the whole brain. All of these contrast-weighted images and quantitative maps are generated from the same scan and are therefore perfectly aligned. Note that the 3D isotropic resolution allows visualization in any view, eliminating the need for multiple scans with different orientations, as required by 2D acquisitions.40 One future application of our deep learning approach could be carotid plaque imaging, where an MRA image and plaque characterization are needed at the same time for proper diagnosis.41,42

It is noteworthy that veins were not depicted in our DL-MRA images, which supports the assumption that the inflow effect is the underlying principle of artery visualization in our deep learning approach. Because the inflow effect is present in both healthy populations and patients regardless of age or medical condition, our proposed algorithm successfully visualized intracranial arteries in elderly patients with known aneurysms despite being trained exclusively on data from younger, healthy volunteers.

Deep learning is a powerful tool, but exactly how the architecture arrives at its predictions cannot be fully explained, since the mechanism is intrinsically a black box.43,44 In medical imaging in particular, careful attention is required to avoid creating false objects. In this study, we did not use multiple convolutional layers in the image-generating path, thus preventing the generated images from distorting spatial information and minimizing the risk of creating pseudolesions. The risk of generating large, false aneurysms that would affect clinical management should therefore have been minimized.

Generating MRA images from routine T1- and T2-weighted images would be highly versatile. However, in routine clinical examinations the usual slice thickness is approximately 3 to 5 mm, substantially thicker than that of the inputs used in this study (0.5 mm). Resolution in the slice direction is critical for generating high-quality MRA because the smoothness of the vessels depends on spatial resolution. The use of 3D T1- and T2-weighted images as inputs, as described by Olut et al,27 would be one approach; however, it requires multiple scans and cannot eliminate the effects of misregistration. One substantial advantage of our study is that we used innately aligned images as input to the deep learning algorithm, instead of data acquired from multiple scans as in other studies.27 The raw images obtained with multiple TIs contain various contrasts between the arteries and extravascular tissues, with higher arterial signal in some images and lower in others. The use of various contrasts with inherent alignment should have helped the deep learning framework generate strong arterial signal while suppressing the background. This is supported by the fact that the CNR of the DL-MRA images was higher than that of the TOF-MRA and linear-MRA images. The highest CNR, observed with DL-MRA, may be attributable to the suppressed and smoothed background signal: the signal intensity of the ICA was similar between DL-MRA and TOF-MRA images (207 ± 19 vs 201 ± 26 arbitrary units; P = 0.395), whereas the standard deviation of the background signal was much smaller in DL-MRA than in TOF-MRA images (2.1 ± 0.6 vs 4.9 ± 1.2 arbitrary units; P < 0.01). Previous studies have shown that postprocessing of multiparametric imaging based on a single scan, instead of conventional images, can generate MRA-like images.45,46 Because these techniques are based on a single scan, they also eliminate the possibility of registration error; however, those studies neither compared image quality against TOF-MRA as a reference standard nor combined multiple quantitative metrics with qualitative evaluation by expert radiologists.

The proposed method is not limited to 3D-QALAS; it could potentially generate MRA images from other MR sequences that have high resolution in the slice direction, capture the inflow effect, and produce innately co-aligned images, such as MR fingerprinting.3 Another major advantage of our approach is the potential reduction of total scan time. Time-of-flight MRA requires additional scan time, and the decision to include it in the scanning protocol must be made prospectively, before the MR examination. In contrast, our deep learning approach can generate MRA images after the scan, without additional scanning or patient recall. This not only improves patient satisfaction but also streamlines the examination workflow.

One limitation of this study is the small number of patients and types of pathologies included. Complex blood flow in lesions such as brain aneurysms, arterial stenoses, arteriovenous malformations, and intratumoral shunts could affect signal intensities and thus degrade the quality of the output MRA images. The acquisition resolution of 3D-QALAS was 1 mm isotropic and the reconstruction resolution 0.5 mm isotropic, both comparable with standard MRA; hence, although we could not include patients with arterial pathologies other than aneurysms, the quality of DL-MRA images may be sufficiently high to evaluate arterial diseases. Second, we used only one scanner; future studies should assess the generalizability of our results on different scanners, and further investigation is needed before this approach is introduced into clinical practice. Third, although the scan time in this study was clinically acceptable, combining quantitative synthetic MRI with acceleration techniques such as compressed sensing, which is well established in conventional TOF-MRA,47,48 may further reduce scan time and strengthen clinical usefulness.

In conclusion, we developed a deep learning approach that generates MRA images from a single 3D-QALAS scan. Evaluation in healthy volunteers and patients with intracranial aneurysms showed that DL-MRA quality was comparable to TOF-MRA and better than MRA created by a simple linear combination approach. Magnetic resonance angiography generated by our approach has the potential to reduce total scan time and may be useful for screening for unexpected intracranial vascular lesions.

REFERENCES

1. Warntjes JB, Leinhard OD, West J, et al. Rapid magnetic resonance quantification on the brain: optimization for clinical usage. Magn Reson Med. 2008;60:320–329.
2. Deoni SC, Rutt BK, Peters TM. Rapid combined T1 and T2 mapping using gradient recalled acquisition in the steady state. Magn Reson Med. 2003;49:515–526.
3. Ma D, Gulani V, Seiberlich N, et al. Magnetic resonance fingerprinting. Nature. 2013;495:187–192.
4. Cheng CC, Preiswerk F, Hoge WS, et al. Multipathway multi-echo (MPME) imaging: all main MR parameters mapped based on a single 3D scan. Magn Reson Med. 2019;81:1699–1713.
5. Ma D, Jones SE, Deshmane A, et al. Development of high-resolution 3D MR fingerprinting for detection and characterization of epileptic lesions. J Magn Reson Imaging. 2019;49:1333–1346.
6. Riederer SJ, Lee JN, Farzaneh F, et al. Magnetic resonance image synthesis. Clinical implementation. Acta Radiol Suppl. 1986;369:466–468.
7. Hagiwara A, Warntjes M, Hori M, et al. SyMRI of the brain: rapid quantification of relaxation rates and proton density, with synthetic MRI, automatic brain segmentation, and myelin measurement. Invest Radiol. 2017;52:647–657.
8. Wallaert L, Hagiwara A, Andica C, et al. The advantage of synthetic MRI for the visualization of anterior temporal pole lesions on double inversion recovery (DIR), phase-sensitive inversion recovery (PSIR), and myelin images in a patient with CADASIL. Magn Reson Med Sci. 2018;17:275–276.
9. Hagiwara A, Hori M, Yokoyama K, et al. Synthetic MRI in the detection of multiple sclerosis plaques. AJNR Am J Neuroradiol. 2017;38:257–263.
10. Kvernby S, Warntjes M, Engvall J, et al. Clinical feasibility of 3D-QALAS - Single breath-hold 3D myocardial T1- and T2-mapping. Magn Reson Imaging. 2017;38:13–20.
11. Fujita S, Hagiwara A, Hori M, et al. 3D quantitative synthetic MRI-derived cortical thickness and subcortical brain volumes: scan-rescan repeatability and comparison with conventional T1 -weighted images. J Magn Reson Imaging. 2019;50:1834–1842.
12. Fujita S, Hagiwara A, Hori M, et al. Three-dimensional high-resolution simultaneous quantitative mapping of the whole brain with 3D-QALAS: an accuracy and repeatability study. Magn Reson Imaging. 2019. [Epub ahead of print].
13. Kapsalaki EZ, Rountas CD, Fountas KN. The role of 3 tesla MRA in the detection of intracranial aneurysms. Int J Vasc Med. 2012;2012:792834.
14. Etminan N, Rinkel GJ. Unruptured intracranial aneurysms: development, rupture and preventive management. Nat Rev Neurol. 2016;12:699–713.
15. Vlak MH, Algra A, Brandenburg R, et al. Prevalence of unruptured intracranial aneurysms, with emphasis on sex, age, comorbidity, country, and time period: a systematic review and meta-analysis. Lancet Neurol. 2011;10:626–636.
16. Nishimura DG. Time-of-flight MR angiography. Magn Reson Med. 1990;14:194–201.
17. White PM, Wardlaw JM, Easton V. Can noninvasive imaging accurately depict intracranial aneurysms? A systematic review. Radiology. 2000;217:361–370.
18. Yan R, Zhang B, Wang L, et al. A comparison of contrast-free MRA at 3.0T in cases of intracranial aneurysms with or without subarachnoid hemorrhage. Clin Imaging. 2018;49:131–135.
19. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521:436–444.
20. Greenspan H, van Ginneken B, Summers RM. Guest editorial deep learning in medical imaging: overview and future promise of an exciting new technique. IEEE Trans Med Imaging. 2016;35:1153–1159.
21. Moeskops P, Viergever MA, Mendrik AM, et al. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans Med Imaging. 2016;35:1252–1261.
22. Pereira S, Pinto A, Alves V, et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging. 2016;35:1240–1251.
23. Qi D, Hao C, Lequan Y, et al. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans Med Imaging. 2016;35:1182–1195.
24. Chen L, Bentley P, Rueckert D. Fully automatic acute ischemic lesion segmentation in DWI using convolutional neural networks. Neuroimage Clin. 2017;15:633–643.
25. Ueda D, Yamamoto A, Nishimori M, et al. Deep learning for MR angiography: automated detection of cerebral aneurysms. Radiology. 2019;290:187–194.
26. Hagiwara A, Otsuka Y, Hori M, et al. Improving the quality of synthetic FLAIR images with deep learning using a conditional generative adversarial network for pixel-by-pixel image translation. AJNR Am J Neuroradiol. 2019;40:224–230.
27. Olut S, Sahin YH, Demir U, et al. Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI. 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands. 2018.
28. Liu F, Jang H, Kijowski R, et al. Deep learning MR imaging-based attenuation correction for PET/MR imaging. Radiology. 2018;286:676–684.
29. Wada A, Tsuruta K, Irie R, et al. Differentiating Alzheimer's disease from dementia with Lewy bodies using a deep learning technique based on structural brain connectivity. Magn Reson Med Sci. 2018.
30. Kingma DP, Ba J. Adam: a method for stochastic optimization. arXiv e-prints. 2014. Available at: https://ui.adsabs.harvard.edu/#abs/2014arXiv1412.6980K. Accessed December 1, 2014.
31. Fujita S, Hagiwara A, Hori M, et al. Synthetic MR angiography: a feasibility study of MR angiography based on 3D synthetic MRI. Proceedings of the 27th Annual Meeting of ISMRM. 2019;1808.
32. Hore A, Ziou D. Image Quality Metrics: PSNR vs. SSIM. Proceedings of the 20th International Conference on Pattern Recognition. 2010.
33. Wang Z, Bovik AC, Sheikh HR, et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13:600–612.
34. Ravishankar S, Bresler Y. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans Med Imaging. 2011;30:1028–1041.
35. Teo PC, Heeger DJ. Perceptual image distortion. Proceedings of 1st International Conference on Image Processing. 1994;2:982–986.
36. Eskicioglu AM, Fisher PS. Image quality measures and their performance. IEEE Transactions on Communications. 1995;43:2959–2965.
37. Sun L, Fan Z, Ding X, et al. A divide-and-conquer approach to compressed sensing MRI. arXiv. 2018.
38. Taron J, Weiss J, Notohamiprodjo M, et al. Acceleration of magnetic resonance cholangiopancreatography using compressed sensing at 1.5 and 3 T: a clinical feasibility study. Invest Radiol. 2018;53:681–688.
39. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–174.
40. Fujita S, Nakazawa M, Hagiwara A, et al. Estimation of gadolinium-based contrast agent concentration using quantitative synthetic MRI and its application to brain metastases: a feasibility study. Magn Reson Med Sci. 2019;18:260–264.
41. Bie F, Cui L, Fan G, et al. Clinical validation of synthetic MRI in assessing composition of Carotid Atherosclerotic plaques: initial experience. Proceedings of the 27th Annual Meeting of ISMRM. 2019;2943.
42. Cui L, Bie F, Fan G, et al. Quantitative characterization of the carotid atherosclerotic plaque composition using synthetic MRI: a preliminary study with histological confirmation. Proceedings of the 27th Annual Meeting of ISMRM. 2019;2944.
43. Samek W, Wiegand T, Müller K. Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models. ArXiv. 2017; abs/1708.08296.
44. Ribeiro MT, Singh S, Guestrin C. "Why should i trust you?": explaining the predictions of any classifier. ArXiv. 2016; abs/1602.04938.
45. Amemiya T, Yokosawa S, Taniguchi Y, et al. Simultaneous acquisition of MR angiography and 3D quantitative MR parameter maps. Proceedings of the 26th Annual Meeting of ISMRM. 2018;2776.
46. Gomez PA, Molina-Romero M, Buonincontri G, et al. Simultaneous magnetic resonance angiography and multiparametric mapping in the transient-state. Proceedings of the 26th Annual Meeting of ISMRM. 2018;63.
47. Fushimi Y, Fujimoto K, Okada T, et al. Compressed sensing 3-dimensional time-of-flight magnetic resonance angiography for cerebral aneurysms: optimization and evaluation. Invest Radiol. 2016;51:228–235.
48. Yamamoto T, Fujimoto K, Okada T, et al. Time-of-flight magnetic resonance angiography with sparse undersampling and iterative reconstruction: comparison with conventional parallel imaging for accelerated imaging. Invest Radiol. 2016;51:372–378.
Keywords:

convolutional neural network; deep learning; image synthesis; machine learning; magnetic resonance angiography; magnetic resonance imaging; QALAS; quantitative synthetic MRI; time-of-flight

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.