Augmented Reality With Cinematic Rendered 3-Dimensional Images From Volumetric Computed Tomography Data : Journal of Computer Assisted Tomography

CT: Technology and Physics

Augmented Reality With Cinematic Rendered 3-Dimensional Images From Volumetric Computed Tomography Data

Rowe, Steven P. MD, PhD; Schneider, Robert PhD; Krueger, Sebastian PhD; Pryde, Valerie RT, CIIP; Chu, Linda C. MD; Fishman, Elliot K. MD

Author Information
Journal of Computer Assisted Tomography 47(1):p 67-70, January/February 2023. | DOI: 10.1097/RCT.0000000000001381

Abstract

Rapid advances in 3-dimensional (3D) visualizations of volumetric medical imaging data are ongoing.1,2 Although traditional maximum intensity projection and volume rendered images continue to be important interpretive adjuncts in modern medical imaging,2 there are developing technologies whose intersection may be an important next step in 3D visualization and interphysician communication.

One of those technologies is cinematic rendering (CR), a photorealistic 3D technique.3–5 Briefly, CR takes advantage of modern computing power by using universal-lighting and ray-tracing algorithms to enhance surface detail and add realistic shadowing to 3D rendered images.4 Cinematic rendering has been described for visualization of complex anatomy in the cardiovascular system,6,7 for imaging of musculoskeletal trauma,8 and for neuroradiology applications.9 Cinematic rendering aids comprehension of anatomy by medical students10 and surgeons.11 Novel CR presets can be tailored for visualization of intraluminal cardiac12 and gastrointestinal13 findings.

In parallel to the development of CR, virtual reality (VR)/augmented reality (AR) has also been increasingly incorporated into medical imaging.14 The combination of photorealistic CR images with a VR/AR interface can facilitate real-time discussions between imaging specialists and clinicians and allow multiple individuals from an interdisciplinary team to see and manipulate the CR images. Photorealistic images create the depth cues and immersion needed to best leverage emerging VR/AR technology. In this article, we will describe our initial experience with the HoloLens headset as a display for CR images and offer observations on the interface and its potential future applications.

MATERIALS AND METHODS

As has been previously described,15 CR images are created at our institution on a standalone Siemens SyngoVia VB40 workstation (Siemens Healthineers, Erlangen, Germany). Briefly, volumetric computed tomography (CT) data are imported onto the workstation and then visualized with the CR algorithm with empirically determined presets appropriate for display of the tissue or pathology of interest. Unlike ray-casting in volume rendered images, CR makes use of complex path tracing that follows millions of photons through a volume and accurately models their potential interactions with matter. The resultant pixels in the CR image reflect a sum of the density and chosen colors of the component tissues in the voxels that comprise the pixel.
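The compositing step described above can be illustrated with a simplified sketch. The code below is not the Siemens path tracer, which models photon scattering events; it is a minimal front-to-back emission-absorption ray march in which a transfer function (standing in for a CR "preset") maps sampled density to color and opacity, and the accumulated color for each pixel reflects the densities and chosen colors of the voxels along the ray:

```python
import numpy as np

def ray_march(volume, origin, direction, step=0.5, n_steps=64,
              transfer=lambda d: (np.array([d, d * 0.5, 0.2]), d * 0.1)):
    """Front-to-back emission-absorption compositing along one ray.

    `transfer` maps a sampled density to an (RGB emission, opacity) pair,
    playing the role of a rendering preset; the default is arbitrary.
    """
    color = np.zeros(3)
    transmittance = 1.0
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))
        if all(0 <= i < s for i, s in zip(idx, volume.shape)):
            density = volume[idx]
            rgb, alpha = transfer(density)
            color += transmittance * alpha * rgb   # accumulate emission
            transmittance *= (1.0 - alpha)         # attenuate the ray
            if transmittance < 1e-3:               # early ray termination
                break
        pos = pos + step * direction
    return color, transmittance
```

A full renderer repeats this for every pixel of the output image; CR additionally traces many stochastic light paths per pixel to produce soft shadows and ambient occlusion.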

In our typical clinical workflow, the CR images for a case can be created in approximately 5 minutes, depending somewhat on the complexity of the case, the number of presets used, and the amount of preset adjustment required to optimize the images (due to patient-specific factors such as body habitus, intravenous contrast kinetics, etc). Once finalized, the optimized images were transferred to a workstation (Precision Tower 5820 [Dell Technologies, Inc, Round Rock, Tex] with an upgraded GeForce RTX 2080 SUPER GPU [NVIDIA Corporation, Santa Clara, Calif]) linked to a HoloLens 2 Development Edition (Microsoft Corporation, Redmond, Wash). The CR application was a prototype of Cinematic Reality (Siemens Healthineers, Erlangen, Germany) version 0.3.5.0, which was built in part with the Coin3D, DirectXTK, Mvp.Xmi, and Protocol Buffers open-source libraries.

Medical imaging data can be transferred to the CR application in 1 of 2 ways: (1) the data can be manually downloaded from the image repository to an external drive (eg, a universal serial bus [USB] drive), or (2) the images can be transferred over the network through a digital imaging and communications in medicine (DICOM) node connection. For the images to upload properly to the CR system, they must be in DICOM format, meaning they include the DICOM directory file (DICOMDIR). In DICOM node-to-node transfer, if the study is not in a DICOM file format, the DICOM association will fail. The entire DICOM file must be available for the image volume to render in the viewing application. We used USB transfer in this instance only because we did not have the ability to directly attach a DICOM transfer node from our picture archiving and communication system to the viewing system; the DICOM files were manually moved by exporting them to a USB drive from the Syngo.via image viewer platform. Of note, electronically sending data through the network via DICOM transfer is one of the most common ways to move medical images, and this approach will be available at many institutions.
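When images are moved manually on an external drive, a quick sanity check can confirm that the exported files are valid DICOM Part 10 files before attempting the import. The sketch below (illustrative helper names; not part of the CR prototype) relies on the fact that a Part 10 file begins with a 128-byte preamble followed by the magic bytes "DICM":

```python
from pathlib import Path

def is_part10_dicom(path):
    """Return True if the file has a DICOM Part 10 header:
    a 128-byte preamble followed by the magic bytes b'DICM'."""
    p = Path(path)
    if not p.is_file() or p.stat().st_size < 132:
        return False
    with p.open("rb") as f:
        f.seek(128)             # skip the fixed-length preamble
        return f.read(4) == b"DICM"

def find_dicom_files(root):
    """Walk an exported folder (eg, a USB drive) and collect files that
    pass the Part 10 check; a DICOMDIR file is itself a Part 10 file."""
    return sorted(p for p in Path(root).rglob("*") if is_part10_dicom(p))
```

A library such as pydicom would then parse the data elements themselves; this check only guards against exporting non-DICOM files that would cause the upload to fail.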

For a DICOM association to be established, both the sender and the receiver must have the proper DICOM Application Entity Title, a reserved IP address, and a dedicated DICOM port configured. The 2 devices can then communicate using the DICOM protocol, in which both systems must agree on several parameters before they can formally establish a connection, or association. One of the 2 devices initiates an association to the other; it then negotiates the connection and asks for specific services, information, and types of encoding (transfer syntaxes). The negotiation step allows the initiating DICOM device to propose a certain function, such as print, store, query, or display, in a compressed or uncompressed format. If the receiving device accepts the association, medical data can then be exchanged.
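The negotiation step can be modeled in a few lines. This is a toy sketch, not a networking implementation (a production system would use a toolkit such as pynetdicom): the initiator proposes presentation contexts, each pairing a service (abstract syntax) with the encodings it can send, and the acceptor either selects one transfer syntax it supports or rejects the context. The UIDs below are the real DICOM UIDs for CT Image Storage and the two little-endian transfer syntaxes:

```python
from dataclasses import dataclass

CT_STORAGE = "1.2.840.10008.5.1.4.1.1.2"      # CT Image Storage service
IMPLICIT_VR_LE = "1.2.840.10008.1.2"          # Implicit VR Little Endian
EXPLICIT_VR_LE = "1.2.840.10008.1.2.1"        # Explicit VR Little Endian

@dataclass
class PresentationContext:
    abstract_syntax: str      # the proposed service, eg, CT Image Storage
    transfer_syntaxes: list   # encodings the initiator can send

def negotiate(proposed, supported):
    """Toy model of association negotiation: for each proposed context,
    the acceptor picks the first transfer syntax it supports, or rejects
    the context. Returns {abstract_syntax: accepted_syntax_or_None}."""
    result = {}
    for ctx in proposed:
        accepted = None
        for ts in ctx.transfer_syntaxes:
            if ts in supported.get(ctx.abstract_syntax, ()):
                accepted = ts
                break
        result[ctx.abstract_syntax] = accepted
    return result
```

If every proposed context is rejected, the association is of no use and the initiator typically releases it, which is the failure mode described above for non-DICOM data.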

For the images created for this article, we proceeded with a USB DICOM transfer so that time was not consumed on configuration and troubleshooting of a network connection. Once the DICOM data were manually loaded onto the USB drive, we were able to import the data into the image viewer on the workstation associated with the HoloLens. The CR prototype had the limitation of only being able to open one series at a time, rather than the entire study.

Once a series is uploaded to the application, clicking the “Open” button opens the data set, and a HoloLens tutorial appears on the screen. It presents a 4-step video walkthrough to aid setup for users:

  1. Set window level: This allows the end user to manually window level the image or to select a preset window level. This determines how the image will look when the hologram is viewed in the HoloLens. Unfortunately, at this time, the application does not allow active window leveling while a hologram is being viewed.
  2. Adjust HoloLens: This shows the end user how to adjust the HoloLens headset for the best fit. It also displays a pop-up screen for eye calibration, which should be repeated each time holograms are viewed or a headset is shared.
  3. Pair HoloLens: This is a walkthrough of how to open the Holographic Remote Player application. When the application is opened, it prompts the user to enter the workstation's IP address into the “HoloLens Pairing” section of the application. The pairing section also offers an option to “Mirror the HoloLens” view on the computer screen. This was a very important feature, because it was the only way people not wearing the HoloLens could see the image being manipulated.
  4. Interacting with HoloLens: This walkthrough shows the user how to manipulate the hologram within the HoloLens.

For the HoloLens to pair properly with the CR application, it had to be on the same network as the computer on which the application was installed. In our enterprise environment, the wireless network is on a different subnet than our computers, which use wired ethernet connections. In this situation, we had to connect a wireless router between the ethernet wall jack and the workstation. This allowed the HoloLens to connect to the wireless router and pair with the prototype application.
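The subnet constraint above can be checked directly. The sketch below uses Python's standard ipaddress module with hypothetical addresses and an assumed /24 prefix (netmask 255.255.255.0); two hosts can only pair directly, without routing, if they fall in the same subnet:

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix=24):
    """Return True if two IPv4 hosts fall in the same subnet for the
    given prefix length (default /24, ie, netmask 255.255.255.0)."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

# Hypothetical example: a headset on the wireless subnet cannot pair with
# a workstation on a different wired subnet without an intermediary.
headset_ip = "10.20.5.17"       # wireless network (assumed address)
workstation_ip = "10.30.1.42"   # wired ethernet network (assumed address)
```

In our case, `same_subnet(headset_ip, workstation_ip)` would be False, which is precisely why the intermediate wireless router, placing both devices on one subnet, was required.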

RESULTS

The HoloLens headset adjusts to fit the user and can be worn over prescription glasses. Once the headset is powered on and adjusted, the user holds up his/her left hand to access the “Welcome” screen as shown in Figure 1. From the Welcome screen, the user is prompted to make a hand motion that allows him/her to “pick up” the CR image from the workstation and move it into the virtual space of AR. The user is now able to view and manipulate the CR image as a hologram in AR (see examples in Figs. 2 and 3). Among the controls available to the user are the ability to zoom in and zoom out on the image, rotate the image, and apply cut-planes to the image.

FIGURE 1:
The “welcome” screen that users first encounter while using the AR system. This screen is accessed when the user holds his/her left hand up within the visual field of the HoloLens headset and provides instructions on how to “pick up” the CR image and move it as a hologram in AR space. Figure 1 can be viewed in color online at www.jcat.org.
FIGURE 2:
A-D, Example CR images projected into AR space. Note the varying objects behind the AR hologram as the user moves around to view the hologram from different angles. The hologram can also be moved and manipulated with hand gestures, as described in the text. The windowing has been set so that the arteries and highly vascularized or dense structures are visible. In this clinical example, an abdominal CT angiogram was used and revealed a replaced right hepatic artery arising from the celiac axis (arrows) in a patient who planned to undergo a Whipple procedure. Figure 2 can be viewed in color online at www.jcat.org.
FIGURE 3:
A-C, Additional examples of CR images projected into AR space. In this example, a cut-plane approximating the midaxillary line allows for the demonstration of a normal tracheobronchial tree. The user's hand is visible in (B) and (C) and is manipulating the image in real time. Figure 3 can be viewed in color online at www.jcat.org.

Figures 2 and 3 are examples of CR images projected in AR, as captured in real time by a user wearing the HoloLens headset. In Figure 2, the CR image is from a CT angiogram of the upper abdomen in a patient scheduled to undergo a Whipple procedure for pancreatic ductal adenocarcinoma. A relevant anatomic variant was identified (replaced right hepatic artery off the celiac trunk, arrows). Note the variable objects in the background of the CR hologram as the user moves around the projected image. Figure 3 shows a normal tracheobronchial tree with a cut-plane applied at approximately the midaxillary line. Again, various objects are visible in the background in AR space, and in Figures 3B and C, the user's hand is present. In this case, the “pinched” configuration of the thumb and forefinger indicates that the user is manipulating the image in real time.

With the CR image projected into AR, the user can make a brief squeezing motion with his/her hand and outline the image. A white cube surrounds the CR image (Fig. 4A), and the user can either manipulate the image from the corners of the cube (pinching a corner and moving in or out will zoom the image) or, with another hand motion, convert the cube to have linear projections from the middle of its sides (Figs. 4B, C), which permit the user to spin the image or apply cut-planes. Of note, in capturing images for the figures in this article, it was difficult to film the cube; hence, this feature of the software is not fully shown. The user can also walk around the CR hologram to see obscured anatomy or pathology, although this should be done with caution in an area cleared of trip hazards, because the user's vision is restricted by the headset and the projected CR hologram.
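The cut-plane operation can be expressed compactly: a plane is defined by a point and a normal vector, and voxels on one side of it are suppressed before rendering. The sketch below (a simplified model, not the prototype's implementation) masks a NumPy volume against such a plane using the sign of the dot product:

```python
import numpy as np

def apply_cut_plane(volume, point, normal):
    """Zero out voxels on the positive side of a plane.

    A voxel at index v is kept when (v - point) . normal <= 0,
    ie, when it lies on or behind the plane.
    """
    idx = np.indices(volume.shape).reshape(3, -1).T  # (N, 3) voxel coords
    signed = (idx - np.asarray(point)) @ np.asarray(normal)
    mask = (signed <= 0).reshape(volume.shape)
    return volume * mask
```

Dragging the linear axis shown in Figure 4C corresponds to translating `point` along `normal`, re-masking, and re-rendering the remaining voxels, which is what produces the midaxillary-line view of the tracheobronchial tree in Figure 3.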

FIGURE 4:
A-C, Cinematic rendering images in AR space with aspects of image manipulation visible. In (A), a white cube outlines the volumetric display of this chest CT angiogram (arrows). The thicker areas of the lines that compose the cube provide locations for the user to “grip” the volume and rotate it in various planes. (B) and (C) show that the user has changed the cube to an orange color, which now allows a cut-plane to be applied. In (C), the arrowhead indicates a linear axis projecting from the nearest face of the cube, which the user can manipulate to alter the cut-plane. Figure 4 can be viewed in color online at www.jcat.org.

DISCUSSION

As demonstrated by the examples in this article, the combination of photorealistic 3D CR images with an AR/VR headset/display has the potential to be revolutionary in interprovider communications and patient/trainee counseling and education. Given the improvements in comprehension of anatomy that have been reported for both medical students10 and surgeons,11 it would be expected that one of the primary uses of the combination of CR and AR/VR would be the discussion of anatomy, normal variants, and complex pathology among members of multidisciplinary teams who all have access to the AR/VR environment.

For example, an emergency radiologist might convene, virtually or in person, with a vascular surgeon to discuss findings on a CT angiogram; both providers would be able to access the AR/VR environment through the use of HoloLens headsets and could discuss any salient findings. Particularly in the context of the ongoing pandemic,16 it is imperative for radiologists to find ways to communicate subtle findings and function as viable consultants for their clinical colleagues. In the era of the COVID-19 pandemic, it is more difficult than ever before for radiologists to be available to their colleagues, due to a combination of working from home and maintaining social distancing at work. The emerging technological intersection of CR and AR/VR is tailored to the new reality of the COVID era, with the ability to have multiple users, all remote from each other, logged into the AR/VR environment as the CR images are discussed.

There are other potential advantages of the combination of CR and AR/VR. The rise of artificial intelligence (AI) as a driving force in the future of radiology17 suggests that new visualization methods for volumetric data may be important as data inputs for graphical processing unit–driven AI workstations. Adding AR/VR to CR visualizations may allow nonradiologists to have input into the images that are sent to the picture archiving and communication system and facilitate the incorporation of clinical data into ensemble AI algorithms.18

The approach described in this article could potentially be replicated with other hardware and software. For example, other rendering software can be modified with a CR lighting model to recapitulate the photorealistic images that were used in our study. Furthermore, there are other AR/VR solutions such as the Oculus headset (Oculus Studies, Menlo Park, Calif) or the True 3D tablet and pen (EchoPixel, Inc, Santa Clara, Calif). The potential availability of competitive platforms should help make the combination of CR with AR/VR more widely accessible.

The primary limitation of the current study is that it is descriptive and does not present specific data that would confirm the added value of CR and AR/VR. Ultimately, well-designed prospective studies will be necessary to evaluate the utility of the combination of CR and AR/VR to medical diagnostics.

REFERENCES

1. Fishman EK, Bluemke DA, Soyer P. Three-dimensional imaging: past, present and future. Diagn Interv Imaging. 2016;97:283–285.
2. Rowe SP, Fishman EK. Image processing from 2D to 3D. In: Medical Radiology. Springer Verlag; 2019:103–120.
3. Dappa E, Higashigaito K, Fornaro J, et al. Cinematic rendering—an alternative to volume rendering for 3D computed tomography imaging. Insights Imaging. 2016;7:849–856.
4. Eid M, De Cecco CN, Nance JW Jr., et al. Cinematic rendering in CT: a novel, lifelike 3D visualization technique. AJR Am J Roentgenol. 2017;209:370–379.
5. Johnson PT, Schneider R, Lugo-Fagundo C, et al. MDCT angiography with 3D rendering: a novel cinematic rendering algorithm for enhanced anatomic detail. AJR Am J Roentgenol. 2017;209:309–312.
6. Rowe SP, Johnson PT, Fishman EK. Cinematic rendering of cardiac CT volumetric data: principles and initial observations. J Cardiovasc Comput Tomogr. 2018;12:56–59.
7. Zimmerman SL, Rowe SP, Fishman EK. Cinematic rendering of CT angiography for visualization of complex vascular anatomy after hybrid endovascular aortic aneurysm repair. Emerg Radiol. 2021;28:839–843.
8. Rowe SP, Fritz J, Fishman EK. CT evaluation of musculoskeletal trauma: initial experience with cinematic rendering. Emerg Radiol. 2018;25:93–101.
9. Rowe SP, Zinreich SJ, Fishman EK. 3D cinematic rendering of the calvarium, maxillofacial structures, and skull base: preliminary observations. Br J Radiol. 2018;91:20170826.
10. Binder JS, Scholz M, Ellmann S, et al. Cinematic rendering in anatomy: a crossover study comparing a novel 3D reconstruction technique to conventional computed tomography. Anat Sci Educ. 2021;14:22–31.
11. Elshafei M, Binder J, Baecker J, et al. Comparison of cinematic rendering and computed tomography for speed and comprehension of surgical anatomy. JAMA Surg. 2019;154:738–744.
12. Rowe SP, Chu LC, Recht HS, et al. Black-blood cinematic rendering: a new method for cardiac CT intraluminal visualization. J Cardiovasc Comput Tomogr. 2020;14:272–274.
13. Rowe SP, Chu LC, Fishman EK. Cinematic rendering with positive oral contrast: virtual fluoroscopy. J Comput Assist Tomogr. 2019;43:718–720.
14. Gehrsitz P, Rompel O, Schöber M, et al. Cinematic rendering in mixed-reality holograms: a new 3D preoperative planning tool in pediatric heart surgery. Front Cardiovasc Med. 2021;8:633611.
15. Rowe SP, Chu LC, Meyer AR, et al. The application of cinematic rendering to CT evaluation of upper tract urothelial tumors: principles and practice. Abdom Radiol (NY). 2019;44:3886–3892.
16. Weisberg EM, Chu LC, Rowe SP, et al. Radiology, COVID-19, and the next pandemic. Diagn Interv Imaging. 2021;102:583–585.
17. Rowe SP. Artificial intelligence in molecular imaging: at the crossroads of revolutions in medical diagnosis. Ann Transl Med. 2021;9:817.
18. Leung KH, Rowe SP, Pomper MG, et al. A three-stage, deep learning, ensemble approach for prognosis in patients with Parkinson's disease. EJNMMI Res. 2021;11:52.
Keywords:

CR; volume rendering; VR; virtual reality; HoloLens

Copyright © 2022 Wolters Kluwer Health, Inc. All rights reserved.