# Quickvoxelcore
Toolkit to display brain volumes (NIfTI, MINC2) with WebGL2, featuring obliques, colormaps, overlay, world coordinates, multiple cameras, etc.

Quickvoxel Core is a pure JavaScript toolkit for volumetric visualization of neuroimaging files in the web browser. Everything that happens in Quickvoxel is strictly client-side, without the need for an active server (e.g. it can run on a GitHub page).
Features:
- Opens and decodes NIfTI, MINC2, and MGH (experimental) files
- Display volume in world/subject coordinates to align registered volumes
- Obliques
- Can blend two volumes with different methods
- Apply colormaps (44 available)
- Adjust contrast and brightness
Requirements:
- A modern web browser, compatible with WebGL2 (recent Chrome or Firefox)
Quickvoxel Core is backed by Pixpipe for decoding volume files and processing the data, and by BabylonJS for the WebGL2 rendering.
Since this project is a core only, it is not bound to any frontend framework and needs to be wrapped with some UI elements to provide proper user interaction. You can find a minimal 10-line example here (source).
Many additional methods for doing more interesting things with Quickvoxel are implemented in the core and need to be tied to UI elements to be fully usable. We'll cover them in the following sections.
## Demo
(Most of the demos are less than 30 lines)
- Simple with loading from URL - source
- Simple with loading from URL, with a loading spinner and events - source
- Simple with loading from a local file - source
- Translate the plane - source
- Oblique plane - source
- Show/hide axes - source
- With colormaps - source
- Oblique plane, animated - source
- Two volumes + blending + colormap - source
- Two volumes + blending + colormap + loading spinner - source
- + time series animated - source
- + animated translation - source
- + animated oblique - source
- Changing cameras automatically (simple) - source
- Changing cameras and having view control - source
In addition, QuickGui (source) is a more advanced project, developed for #BrainHack2018 in Montreal. It uses some features of Quickvoxel Core with a simple and accessible UI.
API documentation
## Install
Since Quickvoxel Core will most likely be used as a dependency, it can be used in multiple ways:
From a simple HTML page:

```html
<!-- ES6 version -->
<script src="quickvoxelcore/dist/quickvoxelcore.es6.js"></script>

<!-- or ES5 version -->
<script src="quickvoxelcore/dist/quickvoxelcore.js"></script>

<!-- or ES5 minified version -->
<script src="quickvoxelcore/dist/quickvoxelcore.min.js"></script>
```
From another ES module:

```sh
npm install quickvoxelcore --save
```

Then, from your module:

```js
// import the ES5 version
import quickvoxelcore from 'quickvoxelcore'

// or import the ES6 version
import quickvoxelcore from 'quickvoxelcore/dist/quickvoxelcore.es6.js'
```
## How To
### Getting started
To start, QuickvoxelCore needs an HTML5 canvas element:
```html
<html>
  <head>
    <title>QuickvoxelCore Test</title>
    <style>
      body {
        overflow: hidden;
        width: 100%;
        height: 100%;
        margin: 0;
      }

      #renderCanvas {
        width: 100%;
        height: 100%;
      }
    </style>
  </head>
  <body>
    <script src="../dist/quickvoxelcore.es6.js"></script>
    <canvas id="renderCanvas"></canvas>
    <script>
      let canvas = document.getElementById("renderCanvas")
      // ...
    </script>
  </body>
</html>
```
No matter which way you pick (simple HTML page or ES module to be bundled), the features are accessible from the `quickvoxelcore` namespace:

```js
let canvas = document.getElementById("renderCanvas")
let qvc = new quickvoxelcore.QuickvoxelCore( canvas )
```
The constructor `quickvoxelcore.QuickvoxelCore(...)` initializes several internal objects; three important ones can then be fetched:
- the VolumeCollection
- the RenderEngine
- the CameraCrew

```js
// ...
let qvc = new quickvoxelcore.QuickvoxelCore( canvas )
let volumeCollection = qvc.getVolumeCollection()
let renderEngine = qvc.getRenderEngine()
let camcrew = qvc.getCameraCrew()
```
Before launching your main app, it can be useful to check whether QuickvoxelCore is running in a WebGL2-compatible environment. We have a function for that:

```js
// test compatibility with WebGL2
if (!quickvoxelcore.webGL2()) {
  alert( 'Quickvoxel Core cannot run here because this web browser is not compatible with WebGL2.' )
} else {
  // launch your app here
}
```
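For context, a typical WebGL2 capability check looks like the sketch below. This is an illustrative guess at what such a helper generally does, not the actual `quickvoxelcore.webGL2()` implementation:

```js
// Illustrative WebGL2 feature detection (not the quickvoxelcore source):
// try to obtain a "webgl2" context from a throwaway canvas.
function hasWebGL2 () {
  // Outside a browser (e.g. Node.js) there is no document, hence no WebGL2
  if (typeof document === 'undefined') {
    return false
  }
  try {
    const canvas = document.createElement('canvas')
    return !!(window.WebGL2RenderingContext && canvas.getContext('webgl2'))
  } catch (e) {
    return false
  }
}
```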
### Interlude: the VolumeCollection
The VolumeCollection instance allows you to add new volumes from a file URL or from a file dialog. Once added, a volume file will automatically:
- be given a unique ID within the collection
- be parsed by Pixpipe
- have a 3D texture created for later display
The methods you will use from your VolumeCollection instance are:
- `.addVolumeFromUrl( String )` to add a volume from a URL
- `.addVolumeFromFile( File )` to add a volume from a file in the local filesystem
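As a sketch of how these two methods might be wired to user input, the helper below routes a source to the right call. `addVolume` and `isFileObject` are names invented here for illustration; only `.addVolumeFromUrl()` and `.addVolumeFromFile()` come from the documented API:

```js
// Hypothetical glue code: route a user-provided source to the right
// VolumeCollection method. Only addVolumeFromUrl/addVolumeFromFile are
// part of the documented API; the rest is illustrative.
function isFileObject (src) {
  return typeof File !== 'undefined' && src instanceof File
}

function addVolume (volumeCollection, src) {
  if (isFileObject(src)) {
    // local file, e.g. from an <input type="file"> change event
    volumeCollection.addVolumeFromFile(src)
  } else {
    // anything else is treated as a URL string
    volumeCollection.addVolumeFromUrl(src)
  }
}
```

Hooked to a file input, this would be called as `addVolume(volumeCollection, fileInput.files[0])`.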
In addition, VolumeCollection provides some events so that actions can be triggered during the lifecycle of a volume:
- `volumeAdded` is called when the volume is parsed and added to the collection. Its WebGL texture is not ready yet! The callbacks attached to this event receive the volume object as argument.
- `volumeReady` is called after `volumeAdded`, at the moment the added volume has its WebGL 3D texture ready. At this stage, a volume is ready to be displayed. The callbacks attached to this event receive the volume object as argument.
- `volumeRemoved` is called when a volume is removed from the collection with the method `.removeVolume(id)`. The callbacks attached to this event receive the volume id (string) as argument.
- `errorAddingVolume` is called when a volume fails to be added with `.addVolumeFromUrl()` or `.addVolumeFromFile()`. The callbacks attached to this event receive the URL or the HTML5 File object as argument.

Multiple callbacks can be attached to each event; they will simply be called successively in the order they were declared. To associate a callback function with an event, just do:
```js
myVolumeCollection.on("volumeReady", function(volume){
  // Do something with this volume
})
```
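The ordered, multi-callback behavior described above can be sketched in plain JavaScript. This is an illustrative re-implementation of the pattern, not the actual VolumeCollection source:

```js
// Minimal sketch of the event pattern: several callbacks per event name,
// invoked in the order they were registered (illustrative only).
class EventHost {
  constructor () {
    this._events = {}
  }

  // Attach a callback to a named event (same shape as VolumeCollection.on)
  on (eventName, callback) {
    if (!this._events[eventName]) {
      this._events[eventName] = []
    }
    this._events[eventName].push(callback)
  }

  // Call every callback registered for the event, in declaration order
  emit (eventName, payload) {
    const callbacks = this._events[eventName] || []
    callbacks.forEach(cb => cb(payload))
  }
}

const host = new EventHost()
const calls = []
host.on('volumeReady', volume => calls.push('first: ' + volume.id))
host.on('volumeReady', volume => calls.push('second: ' + volume.id))
host.emit('volumeReady', { id: 'vol0' })
console.log(calls) // → [ 'first: vol0', 'second: vol0' ]
```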
In general, events are most likely to be defined from the main scope, or from wherever you also have access to the RenderEngine instance.
VolumeCollection has plenty of methods; get the full description here. You may also want to check the documentation of the Volume class here.
### Interlude: the RenderEngine
The RenderEngine instance is in charge of displaying the volumes from the collection once they are loaded. It also comes with all the features to rotate/translate the three orthogonal planes (referred to as _planeSystem in the source), apply colormaps, change brightness/contrast, and deal with blending.
A RenderEngine can display only two volumes at the same time. In the terminology used in the doc and source, two slots are available to mount volumes on the render engine: they are called primary and secondary.
A volume can be unmounted from a given slot and another volume from the volume collection mounted in its place.
Rendering features such as colormap, contrast, and brightness are associated with slots, not with volumes. This means that if you unmount a volume and mount another one on the same slot, that slot's colormap, contrast, and brightness settings still apply to the newly mounted volume.
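The slot model described here can be sketched as follows. This is an illustrative data model, not the actual RenderEngine code, and the method names (`mount`, `setColormap`, ...) are invented for the example:

```js
// Illustrative sketch of the slot model: two slots, each carrying its own
// rendering settings that persist across mount/unmount of volumes.
class SlotModel {
  constructor () {
    this._slots = {
      primary:   { volume: null, colormap: 'greys', contrast: 1, brightness: 0 },
      secondary: { volume: null, colormap: 'greys', contrast: 1, brightness: 0 }
    }
  }

  mount (slotName, volume) { this._slots[slotName].volume = volume }
  unmount (slotName) { this._slots[slotName].volume = null }
  setColormap (slotName, colormap) { this._slots[slotName].colormap = colormap }
  getColormap (slotName) { return this._slots[slotName].colormap }
}

const slots = new SlotModel()
slots.mount('primary', { id: 'anat' })
slots.setColormap('primary', 'viridis')
slots.unmount('primary')
slots.mount('primary', { id: 'func' })
// The colormap belongs to the slot, so it survives the volume swap:
console.log(slots.getColormap('primary')) // → 'viridis'
```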
