Improving efficiencies here and now with VR and 3D

April 12, 2019
by Sean Ruck, Contributing Editor
Some of Dr. Brian Park’s professional work seems like a contradiction. He works with virtual reality to improve processes in actual reality.

Park taught himself coding in Unity and C# and also took a deep dive into a variety of 3D image processing applications to become adept. He estimates more than 150 hours spent self-teaching. “I remember a lot of late nights programming and debugging code during residency,” he said.

A doctor today, Park’s first love was engineering. He earned a graduate degree in the field and worked for Medtronic and IBM in that capacity before transitioning into medicine, graduating from medical school in 2014.

At the time, healthcare professionals were buzzing about 3D printing in medicine. However, once AR/VR headsets started to appear on the commercial market, Park considered the advantages of visualizing 3D objects in augmented and virtual reality instead. He approached Dr. Steven Horii, director of medical informatics in Radiology at Penn Medicine, and applied for a HoloLens Development Edition in the spring of 2016. He joined the Penn Image-Guided Interventions Lab and began working on the project that led to the poster presentation, “Low-cost Workflow of Generating and Remotely Visualizing 3D Holographic Models for Clinical Practice using Augmented or Virtual Reality,” which earned Park and his team a second place award at SIIM 2018.

“We are now projecting 3D holographic models of imaging onto patients during live procedures. Some of our cases can be seen at www.med.upenn.edu/ar4ir,” he said.

The models are created from standard DICOM images pulled from PACS, and according to Park, the workflow presented can be used with any system. “There are various open-source and freeware applications available to make AR/VR models from medical imaging. But to our knowledge, there is no freely available, unified software application to generate these models. Multiple different programs and file conversions are currently needed to make a model. You can, of course, pay for proprietary AR/VR software packages but these usually come at a steep cost,” he explained.
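To give a sense of the kind of file conversion this workflow chains together, here is a minimal sketch of one intermediate step: turning a segmented voxel mask (as would be thresholded from a DICOM series) into a Wavefront OBJ surface mesh, a common format AR/VR viewers can import. This is an illustration only, not the software Park's team used; the `mask_to_obj` function and the synthetic mask are assumptions for the example, and real pipelines would use dedicated tools for DICOM loading and proper surface extraction such as marching cubes.

```python
# Six cube faces of a unit voxel: the neighbor direction that would hide
# the face, and the four corner offsets of the face's quad.
FACES = [
    ((-1, 0, 0), [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]),
    (( 1, 0, 0), [(1, 0, 0), (1, 0, 1), (1, 1, 1), (1, 1, 0)]),
    (( 0,-1, 0), [(0, 0, 0), (0, 0, 1), (1, 0, 1), (1, 0, 0)]),
    (( 0, 1, 0), [(0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]),
    (( 0, 0,-1), [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]),
    (( 0, 0, 1), [(0, 0, 1), (0, 1, 1), (1, 1, 1), (1, 0, 1)]),
]

def mask_to_obj(mask):
    """Emit Wavefront OBJ text for the exposed faces of a binary voxel mask.

    mask: nested lists mask[z][y][x] of 0/1 (e.g. a thresholded CT volume).
    Returns the OBJ file contents as a string ("v" vertex and "f" face lines).
    """
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])

    def filled(x, y, z):
        return 0 <= x < nx and 0 <= y < ny and 0 <= z < nz and mask[z][y][x]

    verts = {}        # (x, y, z) grid point -> 1-based OBJ vertex index
    vlines, flines = [], []

    def vid(p):
        if p not in verts:
            verts[p] = len(verts) + 1
            vlines.append("v %d %d %d" % p)
        return verts[p]

    # A voxel face is part of the visible surface only when the neighboring
    # voxel on that side is empty; interior faces are skipped.
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                for (dx, dy, dz), corners in FACES:
                    if filled(x + dx, y + dy, z + dz):
                        continue
                    flines.append("f " + " ".join(
                        str(vid((x + cx, y + cy, z + cz)))
                        for cx, cy, cz in corners))
    return "\n".join(vlines + flines) + "\n"

if __name__ == "__main__":
    # Synthetic 2x2x2 solid block standing in for a segmented structure.
    mask = [[[1, 1], [1, 1]] for _ in range(2)]
    obj = mask_to_obj(mask)
    print(obj)
```

For the 2×2×2 solid block above, the surface consists of 24 quad faces on 26 boundary grid points; the resulting text can be saved as a `.obj` file and opened in most 3D and AR/VR viewers.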

For a hospital making the effort to pull in the different freeware needed and pair it with its own existing imaging equipment and PACS, the only cost involved would be that of the headset. Park says popular AR headsets such as the Magic Leap One and the Microsoft HoloLens 2 run about $2,300 and $3,500, respectively, and are computers in and of themselves. VR headsets are available for a few hundred dollars, but they require a physical connection to a computer and don’t allow the wearer to interact with real patients while in use.

Unlike some new technology, AR/VR devices have emerged at a time when user experience is a top consideration, so they’re very user-friendly. In Park’s experience, it takes people only minutes to become proficient in controlling the devices, but he admits adoption may move slowly as people are convinced to abandon the keyboards, mice and 2D monitors they’ve used for decades.

Renderings can be created from any cross-sectional (3D) imaging, with CT and MR being the most common, but models using molecular imaging like PET are also possible.

The time it takes to create a rendering is wholly dependent upon the type of rendering desired. “Detailed surface-rendered models, or indirect volume rendering, can take one to two hours to generate by manual labor. Direct volume rendering takes less than 30 seconds to generate and does not require any manual labor, but you usually need powerful processors. There are advantages and disadvantages between the two types of rendering, and we’re exploring and evaluating both methods,” Park said.
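The trade-off Park describes can be seen in miniature: direct volume rendering samples the voxel data itself rather than first extracting a mesh, so no manual segmentation is needed. A maximum-intensity projection (MIP) is one of the simplest direct techniques, where each output pixel is the brightest voxel along its ray. The sketch below is a toy illustration on a synthetic volume, not clinical software; the `mip` function name and the tiny test volume are assumptions for the example.

```python
def mip(volume):
    """Maximum-intensity projection of volume[z][y][x] along the z axis.

    Returns a 2D image where image[y][x] = max over z of volume[z][y][x],
    i.e. each pixel keeps the brightest voxel along its viewing ray.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(nz)) for x in range(nx)]
            for y in range(ny)]

if __name__ == "__main__":
    # Synthetic 3-slice "scan", one bright voxel per slice.
    vol = [
        [[0, 10], [0, 0]],
        [[0, 0], [20, 0]],
        [[5, 0], [0, 30]],
    ]
    print(mip(vol))  # -> [[5, 10], [20, 30]]
```

Because the projection is a single pass over the voxels with no hand labeling, it runs in seconds even on full scans, which matches the speed advantage Park cites for direct volume rendering; production systems achieve interactive rates by doing this sampling on GPU hardware.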

Plans for improvements include making models easier to manipulate by using an Xbox controller as well as automating registration using computer vision and image-based markers.

Acknowledging there’s only so much time in a day, and that being a developer is challenging enough without also being a full-time resident, Park has reached out to former medical school colleagues at Medivis, a startup focused on AR in medicine, to collaborate.

As for the future, Park says he hopes “to establish an AR-assisted navigation system that accurately tracks and registers 3D holographic volumes onto patients for real-time, virtual procedural guidance. The improved anatomic understanding and spatial localization provided by AR can improve patient safety and outcomes, increase procedural efficiency and technical success, and reduce the amount of anesthesia or X-rays utilized during the procedure.”

The intent is to enlist the help of AR to make procedures safer and allow them to be performed with greater confidence, all while improving efficiency and reducing costs.
