
Unity CTVisualizer

A Unity3D package/plugin for efficiently visualizing and manipulating very large (in the range of 100GBs) CT/MRI volumetric datasets. The package comes with a set of samples for different target platforms (e.g., desktop, Magic Leap 2).

![Unity CT Visualizer Snake Dataset Showcase][ctvisualizer-snake-showcase]

This project started as an implementation of the latest state-of-the-art direct volume rendering techniques, adjusted for immersive environments, for my M.Sc. thesis in Computer Science.

Showcase

<details> <summary>Show videos (slows down rendering and scrolling)</summary>

Brick-granularity loading to the GPU (using TextureSubPlugin):

https://github.com/user-attachments/assets/5cee257e-f52a-488a-be92-07cfdbdcb2d6

Moving the camera through the volume's bounding box (bbox) is also supported:

https://github.com/user-attachments/assets/17d0eaa1-86be-46ac-8c04-e64828dd9006

1D transfer function UI with serialization/deserialization support:

https://github.com/user-attachments/assets/4e02b802-0175-4c3d-af5e-e04a65e49034

The LoD (level of detail) optimization technique of CTVisualizer:

https://github.com/user-attachments/assets/4d10bece-6b18-41a2-afb3-690c7837de49

</details>

Out-of-core rendering of the Enigma dataset (about 8 GB) using around 600 MB of VRAM:

![Unity CT Visualizer Enigma Dataset Showcase][ctvisualizer-enigma-showcase]

Out-of-core rendering of the turtle dataset (about 1.4 GB) using around 440 MB of VRAM:

![Unity CT Visualizer Turtle Dataset Showcase][ctvisualizer-turtle-showcase]

In-core rendering of the Fish dataset (about 300 MB):

![Unity CT Visualizer Fish Dataset Showcase][ctvisualizer-fish-showcase]

Installation & Build Instructions

The project is provided as a separate Unity package that can be easily added to your project by:

  1. Window -> Package Manager -> (Top left + icon) -> Install package from Git URL -> Add this repo's link:

    https://github.com/walcht/com.walcht.ctvisualizer.git
    

    After importing CTVisualizer, you may run into missing-dependency errors. Close and reopen the Unity editor to trigger the custom resolver for the Git package dependencies (Unity's built-in package manager does not support Git package dependencies. Yeah, you read that right...). If the missing packages are still not resolved, open package.json, look under git-dependencies, and install them manually.

  2. This project uses a [native C++ rendering plugin][1] to augment Unity's limited graphics API, making it possible to create larger-than-2 GB textures and upload chunks to them. Follow the instructions in [TextureSubPlugin][1] to compile the plugin for your target platform (Windows, Linux, Magic Leap 2, or Android).

  3. CTVisualizer expects input datasets in the form of [Chunked Volumetric DataSet (CVDS)][2]. A separate, offline, Python CVDS converter is needed and can be installed from [here][2].
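The native-plugin mechanism from step 2 can be sketched as follows. This is only an illustration, not CTVisualizer's actual code: the exported symbol name below is hypothetical, and the real entry points are defined by TextureSubPlugin itself. The key constraint it shows is real, though: native calls that touch graphics resources must be issued on Unity's render thread via the plugin-event mechanism.

```csharp
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeTextureBridge : MonoBehaviour
{
    // Hypothetical entry point -- the actual exported symbols are defined
    // by TextureSubPlugin; check its documentation for the real names.
    [DllImport("TextureSubPlugin")]
    private static extern IntPtr GetRenderEventFunc();

    void Start()
    {
        // Graphics work in a native plugin must run on Unity's render
        // thread, hence the plugin-event indirection instead of a direct call.
        GL.IssuePluginEvent(GetRenderEventFunc(), /* eventID: */ 1);
    }
}
```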

Tested on these Unity versions for these target platforms:

| Unity Version | Host Platform  | Target Platform | Status             | Notes                                             |
| ------------- | -------------- | --------------- | ------------------ | ------------------------------------------------- |
| 6000.0.40f1   | Windows 10     | Windows 10      | :white_check_mark: |                                                   |
| 6000.0.40f1   | Ubuntu 22.04.5 | Ubuntu 22.04.5  | TODO               |                                                   |
| 6000.0.40f1   | Ubuntu 22.04.5 | Magic Leap 2    | :white_check_mark: | might get a black screen - see Known Issues below |
| 6000.0.40f1   | Windows 10     | Magic Leap 2    | :white_check_mark: |                                                   |

Build Instructions for the Magic Leap 2 Platform

To build for the Magic Leap 2 AR device:

  1. Connect the device to a machine (preferably Windows-based<sup>1</sup>)

  2. Follow the instructions on this [repository][1] to build the native plugin for the ML2 device

  3. Import the magicleap2 sample scene

  4. Install the Magic Leap 2 SDK package dependency from: https://github.com/magicleap/MagicLeapUnitySDK.git

  5. Switch to the magicleap2 build profile (you can find the build profile asset in the Settings folder of the imported sample)

  6. Check the OpenXR project validator for potential issues and fix them

  7. Build the project (of course, don't forget to add the magicleap2 scene)

  8. After having finished the build process, navigate to the build directory and run:

    adb install ctvisualizer.x86_64.apk
    

    or, if you are on a Windows platform, you can simply use the Magic Leap Hub to install it through the GUI.

  9. Copy your converted CVDS dataset(s) into the [Application.persistentDataPath][3] on the attached ML2 device using:

    adb push <path-to-cvds-dataset-folder> /storage/emulated/0/Android/data/com.walcht.ctvisualizer/files/
    
  10. You can also optionally copy other resources to the same directory above such as: serialized transfer functions, serialized visualization parameters, etc.

  11. Run the just-installed ctvisualizer app on the ML2<sup>2</sup>

  12. You can control the volumetric object using hand gestures such as grasping and pinching for rotating and scaling the object, respectively.


<sup>1</sup>: See Known Issues for a Linux host platform.

<sup>2</sup>: For debugging potential issues on the ML2, before starting the ctvisualizer app, run:

adb shell logcat | grep "Unity"

Make sure to keep an eye out for errors and exceptions (especially OpenXR-related exceptions).

Usage

UnityCT-Visualizer is a UI-centric application - all operations are mainly done through the provided GUI. To visualize a CT/MRI dataset using CTVisualizer, you have to:

  1. Convert your dataset into CVDS format using the [CVDS Python converter package][1].

  2. Copy/Move the converted CVDS dataset into your [Application.persistentDataPath][3].

  3. Click on SELECT to select a CVDS dataset from the Application.persistentDataPath.

  4. Adjust the pipeline parameters (these are runtime-constant parameters) and optionally the debugging parameters.

  5. Click on VISUALIZE to start the visualization process of the selected CVDS dataset.

  6. A volumetric object should appear alongside additional UI components (Metadata UI component)

  7. In the Visualization Parameters UI component, choose the transfer function (currently only 1D is supported) and adjust runtime visualization parameters (e.g., you can change the interpolation method - choose trilinear for best quality).

  8. The default TF is a 1D transfer function. A 1D Transfer Function UI component should be visible at the bottom of the screen:

    • The green line controls opacity (i.e., alpha) classification
    • The bottom gradient color band/texture controls color (no alpha) classification
    • Changes are reflected in real time in the volumetric object visualization
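Conceptually, a 1D transfer function maps each density value to a color and an opacity. The following is an illustrative sketch (not CTVisualizer's actual implementation) of how such a mapping can be baked into a small lookup texture that a shader then samples by density:

```csharp
using UnityEngine;

public static class TransferFunction1D
{
    // Bake a color gradient (the gradient band) and an opacity curve
    // (the green line) into a width x 1 lookup texture.
    public static Texture2D Bake(Gradient colors, AnimationCurve opacities, int width = 256)
    {
        var tex = new Texture2D(width, 1, TextureFormat.RGBAFloat, mipChain: false);
        var pixels = new Color[width];
        for (int i = 0; i < width; ++i)
        {
            float density = i / (float)(width - 1);  // normalized density in [0, 1]
            Color c = colors.Evaluate(density);      // color classification
            c.a = opacities.Evaluate(density);       // opacity (alpha) classification
            pixels[i] = c;
        }
        tex.SetPixels(pixels);
        tex.Apply();                                 // upload to the GPU
        return tex;
    }
}
```

Re-baking and re-uploading this small texture whenever the user edits the curve is what makes the real-time feedback in the visualization cheap.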

Render Modes

CTVisualizer comes with a set of state-of-the-art rendering modes (i.e., different shaders) suited to different input dataset characteristics (e.g., size, sparsity/homogeneity, anisotropy, etc.). Since target datasets are hundreds of gigabytes in size, a lot has to be done in the shaders and in the CPU-side code to handle CPU-GPU communication efficiently. This has the unfortunate side effect of adding considerable complexity.

The rendering modes are:

DVR In Core (IC) Rendering Mode

Useful for datasets that fit within the available VRAM on the GPU. Employs no empty space skipping acceleration structures. This is mainly used as a baseline to compare the performance of other rendering methods against. Consequently, this is by far the simplest shader and sometimes the fastest (especially for small datasets).


All DVR rendering modes implement this basic raymarching algorithm (OOC rendering modes adjust it so that LoDs can be used):

basic raymarching technique

Assuming a perspective camera, the blue points on the near clipping plane are fragment centers. A view ray is cast through each of these points. The blue line is an example of a cast view ray that computes the color of its fragment f. The blue points on that line are volume-ray intersection sample points. Green points are sample points that should, ideally, contribute to the final color of fragment f; red points are sample points that should not.
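The loop itself can be sketched as follows. This is a readability-oriented C# rendition of the generic front-to-back raymarching scheme, not the actual shader code; the `sampleDensity` and `sampleTF` delegates stand in for the volume and transfer function texture lookups.

```csharp
using System;

static class BasicRaymarcher
{
    // Front-to-back "over" compositing along one view ray.
    public static (float r, float g, float b, float a) March(
        Func<float, float> sampleDensity,   // density at normalized ray position t
        Func<float, (float r, float g, float b, float a)> sampleTF,
        int numSteps)
    {
        float r = 0, g = 0, b = 0, a = 0;
        for (int i = 0; i < numSteps && a < 0.99f; ++i)   // early ray termination
        {
            float t = i / (float)(numSteps - 1);          // position along the ray
            var src = sampleTF(sampleDensity(t));
            // accumulate: remaining transparency (1 - a) weights each new sample
            r += (1 - a) * src.a * src.r;
            g += (1 - a) * src.a * src.g;
            b += (1 - a) * src.a * src.b;
            a += (1 - a) * src.a;
        }
        return (r, g, b, a);
    }
}
```

The early-exit once accumulated opacity approaches 1 is a standard optimization: samples behind an opaque region (the red points in the figure) cannot change the fragment's color.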


The implementation of this basic IC rendering mode can be found in this shader: ic_dvr_shader

DVR Out-of-Core (OOC) Virtual Memory (VM) Rendering Mode

Employs a software-implemented virtual memory scheme (analogous to that employed by operating systems) and a multi-resolution, single-level (multi-level support is not yet implemented) page table hierarchy. Granularity of empty space skipping and adaptive ray sampling is at the level of page table entries.
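The virtual-memory analogy can be made concrete with a hypothetical sketch (this is not CTVisualizer's actual data layout): each sample position is translated through a page table that maps virtual bricks to physical slots in a GPU brick cache, much like an OS page table maps virtual pages to physical frames.

```csharp
using UnityEngine;

struct PageTableEntry
{
    public bool Mapped;          // is the brick resident in the brick cache?
    public bool Empty;           // homogeneous/empty brick -> skip it entirely
    public Vector3Int CacheSlot; // physical brick coordinates in the cache texture
}

static class VirtualMemory
{
    // Translate a normalized sample position into a brick-cache slot.
    // Returns false on a "page fault": the brick is not resident and
    // the CPU side must stream it in.
    public static bool TryTranslate(
        PageTableEntry[,,] pageTable, Vector3 normalizedPos, int brickCount,
        out Vector3Int cacheSlot, out bool empty)
    {
        // virtual brick coordinates of the sample
        var v = Vector3Int.FloorToInt(normalizedPos * brickCount);
        var entry = pageTable[v.x, v.y, v.z];
        cacheSlot = entry.CacheSlot;
        empty = entry.Empty;
        return entry.Mapped;
    }
}
```

Because empty-space skipping and adaptive sampling operate on these entries, their granularity is exactly one page table entry, as noted above.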


OOC rendering modes implement this LoD-based raymarching technique (ideally with trilinear interpolation):

LoD-based raymarching technique employed by OOC rendering modes

The blue line is a cast view ray that computes the color of its fragment f. Blue points on that line are volume-ray intersection sample points. Light grey dotted lines denote a brick’s spatial extent with a co
