# Neuroglancer: Web-based volumetric data visualization
Neuroglancer is a WebGL-based viewer for volumetric data. It is capable of displaying arbitrary (non axis-aligned) cross-sectional views of volumetric data, as well as 3-D meshes and line-segment based models (skeletons).
Refer to the documentation website at https://neuroglancer-docs.web.app for more details.
This is not an official Google product.
## Examples
A live demo is hosted at https://neuroglancer-demo.appspot.com. (This link opens the viewer without a preloaded dataset.) Use the viewer links below to open the viewer preloaded with an example dataset.
The four-pane view consists of 3 orthogonal cross-sectional views as well as a 3-D view (with independent orientation) that displays 3-D models (if available) for the selected objects. All four views maintain the same center position. The orientation of the 3 cross-sectional views can also be adjusted, although they maintain a fixed orientation relative to each other. (Try holding the shift key and either dragging with the left mouse button or pressing an arrow key.)
- FlyEM Hemibrain (8x8x8 cubic nanometer resolution). <a href="https://hemibrain-dot-neuroglancer-demo.appspot.com/#!gs://neuroglancer-janelia-flyem-hemibrain/v1.0/neuroglancer_demo_states/base.json" target="_blank">Open viewer</a>
- FAFB-FFN1 Full Adult Fly Brain Automated Segmentation (4x4x40 cubic nanometer resolution). <a href="https://neuroglancer-demo.appspot.com/fafb.html#!gs://fafb-ffn1/main_ng.json" target="_blank">Open viewer</a>
- Kasthuri et al., 2014. Mouse somatosensory cortex (6x6x30 cubic nanometer resolution). <a href="https://neuroglancer-demo.appspot.com/#!{'layers':{'original-image':{'type':'image'_'source':'precomputed://gs://neuroglancer-public-data/kasthuri2011/image'_'visible':false}_'corrected-image':{'type':'image'_'source':'precomputed://gs://neuroglancer-public-data/kasthuri2011/image_color_corrected'}_'ground_truth':{'type':'segmentation'_'source':'precomputed://gs://neuroglancer-public-data/kasthuri2011/ground_truth'_'selectedAlpha':0.63_'notSelectedAlpha':0.14_'segments':['3208'_'4901'_'13'_'4965'_'4651'_'2282'_'3189'_'3758'_'15'_'4027'_'3228'_'444'_'3207'_'3224'_'3710']}}_'navigation':{'pose':{'position':{'voxelSize':[6_6_30]_'voxelCoordinates':[5523.99072265625_8538.9384765625_1198.0423583984375]}}_'zoomFactor':22.573112129999547}_'perspectiveOrientation':[-0.004047565162181854_-0.9566211104393005_-0.2268827110528946_-0.1827099621295929]_'perspectiveZoom':340.35867907175077}" target="_blank">Open viewer.</a>

  This dataset was copied from https://neurodata.io/data/kasthuri15/ and is made available under the Open Data Commons Attribution License. Paper: <a href="http://dx.doi.org/10.1016/j.cell.2015.06.054" target="_blank">Kasthuri, Narayanan, et al. "Saturated reconstruction of a volume of neocortex." Cell 162.3 (2015): 648-661.</a>
- Janelia FlyEM FIB-25. 7-column Drosophila medulla (8x8x8 cubic nanometer resolution). <a href="https://neuroglancer-demo.appspot.com/#!{'layers':{'image':{'type':'image'_'source':'precomputed://gs://neuroglancer-public-data/flyem_fib-25/image'}_'ground-truth':{'type':'segmentation'_'source':'precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth'_'segments':['21894'_'22060'_'158571'_'24436'_'2515']}}_'navigation':{'pose':{'position':{'voxelSize':[8_8_8]_'voxelCoordinates':[2914.500732421875_3088.243408203125_4045]}}_'zoomFactor':30.09748283999932}_'perspectiveOrientation':[0.3143535554409027_0.8142156600952148_0.4843369424343109_-0.06040262430906296]_'perspectiveZoom':443.63404517712684_'showSlices':false}" target="_blank">Open viewer.</a>

  This dataset was copied from https://www.janelia.org/project-team/flyem/data-and-software-release, and is made available under the Open Data Commons Attribution License. Paper: <a href="http://dx.doi.org/10.1073/pnas.1509820112" target="_blank">Takemura, Shin-ya et al. "Synaptic Circuits and Their Variations within Different Columns in the Visual System of Drosophila." Proceedings of the National Academy of Sciences of the United States of America 112.44 (2015): 13711-13716.</a>
- Example of viewing 2D microscopy (coronal section of rat brain at 325 nanometer resolution). <a href="https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B1e-9%2C%22m%22%5D%2C%22y%22:%5B1e-9%2C%22m%22%5D%7D%2C%22position%22:%5B10387071%2C5347131%5D%2C%22crossSectionScale%22:263.74955563693914%2C%22projectionScale%22:65536%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%7B%22url%22:%22deepzoom://https://data-proxy.ebrains.eu/api/v1/buckets/localizoom/14122_mPPC_BDA_s186.tif/14122_mPPC_BDA_s186.dzi%22%2C%22transform%22:%7B%22outputDimensions%22:%7B%22x%22:%5B1e-9%2C%22m%22%5D%2C%22y%22:%5B1e-9%2C%22m%22%5D%2C%22c%5E%22:%5B1%2C%22%22%5D%7D%2C%22inputDimensions%22:%7B%22x%22:%5B3.25e-7%2C%22m%22%5D%2C%22y%22:%5B3.25e-7%2C%22m%22%5D%2C%22c%5E%22:%5B1%2C%22%22%5D%7D%7D%7D%2C%22tab%22:%22rendering%22%2C%22shader%22:%22void%20main%28%29%7BemitRGB%28vec3%28toNormalized%28getDataValue%280%29%29%2CtoNormalized%28getDataValue%281%29%29%2CtoNormalized%28getDataValue%282%29%29%29%29%3B%7D%22%2C%22channelDimensions%22:%7B%22c%5E%22:%5B1%2C%22%22%5D%7D%2C%22name%22:%2214122_mPPC_BDA_s186.dzi%22%7D%5D%2C%22selectedLayer%22:%7B%22layer%22:%2214122_mPPC_BDA_s186.dzi%22%7D%2C%22layout%22:%22xy%22%7D" target="_blank">Open viewer.</a> (Use <kbd>Ctrl</kbd>+<kbd>MouseWheel</kbd> to zoom out)

  This image is part of: Olsen et al., 2020. Anterogradely labeled axonal projections from the posterior parietal cortex in rat [Data set]. EBRAINS. https://doi.org/10.25493/FKM4-ZCC
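The `#!` fragment in the example links above is simply URL-encoded JSON viewer state. As a minimal sketch using only the Python standard library, a link like the ones above can be constructed as follows (the layer source and demo host are taken from the examples above; the full state schema is documented on the Neuroglancer documentation site):

```python
import json
import urllib.parse

# Viewer state: a single image layer from the public Kasthuri dataset
# (source URL taken from the example links above).
state = {
    "layers": [
        {
            "type": "image",
            "source": "precomputed://gs://neuroglancer-public-data/kasthuri2011/image",
            "name": "image",
        }
    ],
    "layout": "4panel",
}

# Neuroglancer reads everything after "#!" as URL-encoded JSON.
fragment = urllib.parse.quote(json.dumps(state, separators=(",", ":")))
url = "https://neuroglancer-demo.appspot.com/#!" + fragment
print(url)
```

Opening the printed URL in a browser should load the viewer with that layer preconfigured; this is the same mechanism the `Open viewer` links above use.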
## Supported data sources
Neuroglancer itself is purely a client-side program, but it depends on data being accessible via HTTP in a suitable format. It is designed to easily support many different data sources, and there is existing support for the following data APIs/formats:
- Neuroglancer precomputed format
- N5
- Zarr v2/v3 and OME-Zarr 0.4/0.5
- Python in-memory volumes (with automatic mesh generation)
- BOSS https://bossdb.org/
- DVID https://github.com/janelia-flyem/dvid
- Render https://github.com/saalfeldlab/render
- Single NIfTI files https://www.nitrc.org/projects/nifti
- Deep Zoom images
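As an illustration of the first of these, a precomputed volume is described by an `info` JSON file at the root of the data directory. The sketch below shows the general shape for a single-scale image volume; the field values here are hypothetical, and the precomputed format specification is the authoritative reference for the full schema:

```json
{
  "type": "image",
  "data_type": "uint8",
  "num_channels": 1,
  "scales": [
    {
      "key": "8_8_8",
      "size": [2048, 2048, 2048],
      "resolution": [8, 8, 8],
      "voxel_offset": [0, 0, 0],
      "chunk_sizes": [[64, 64, 64]],
      "encoding": "raw"
    }
  ]
}
```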
## Supported browsers
- Chrome >= 51
- Firefox >= 46
- Safari >= 15.0
## Keyboard and mouse bindings
For the complete set of bindings, see `src/ui/default_input_event_bindings.ts`, or, within Neuroglancer, press `h` or click on the button labeled `?` in the upper right corner.
- Click on a layer name to toggle its visibility.
- Double-click on a layer name to edit its properties.
- Hover over a segmentation layer name to see the current list of objects shown and to access the opacity sliders.
- Hover over an image layer name to access the opacity slider and the text editor for modifying the rendering code.
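The rendering code referred to above is written in Neuroglancer's GLSL-based shader language. As a sketch, a simple grayscale image shader looks like the following (`emitGrayscale`, `toNormalized`, and `getDataValue` are Neuroglancer shader built-ins; the example link above for 2D microscopy embeds a similar three-channel `emitRGB` shader):

```glsl
// Emit the data value at the current voxel, normalized to [0, 1],
// as a grayscale intensity.
void main() {
  emitGrayscale(toNormalized(getDataValue()));
}
```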
## Troubleshooting
- Neuroglancer doesn't appear to load properly.

  Neuroglancer requires WebGL (2.0) and the `EXT_color_buffer_float` extension. To troubleshoot, check the developer console, which is accessed by the keyboard shortcut `control-shift-i` in Firefox and Chrome. If there is a message regarding failure to initialize WebGL, you can take the following steps:

  - Chrome

    Check `chrome://gpu` to see if your GPU is blacklisted. There may be a flag you can enable to make it work.

  - Firefox

    Check `about:support`. There may be WebGL-related properties in `about:config` that you can change to make it work. Possible settings:

    - `webgl.disable-fail-if-major-performance-caveat = true`
    - `webgl.force-enabled = true`
    - `webgl.msaa-force = true`
- Failure to access a data source.

  As a security measure, browsers will in many cases prevent a webpage from accessing the true error code associated with a failed HTTP request. It is therefore often necessary to check the developer tools to see the true cause of any HTTP request error.

  There are several likely causes:

  - Cross-origin resource sharing (CORS)

    Neuroglancer relies on cross-origin requests to retrieve data from third-party servers. As a security measure, if an appropriate `Access-Control-Allow-Origin` response header is not sent by the server, browsers prevent webpages from accessing any information about the response from a cross-origin request. In order to make the data accessible to Neuroglancer, you may need to change the cross-origin resource sharing (CORS) configuration of the HTTP server.

  - Accessing an `http://` resource from a Neuroglancer client hosted at an `https://` URL

    As a security measure, recent versions of Chrome and Firefox prohibit webpages hosted at `https://` URLs from issuing requests to `http://` URLs. As a workaround, you can use a Neuroglancer client hosted at an `http://` URL, e.g. the demo client running at http://neuroglancer-demo.appspot.com, or one running on localhost. Alternatively, you can start Chrome with the `--disable-web-security` flag, but that should be done only with extreme caution. (Make sure to use a separate profile, and do not access any untrusted or sensitive sites while the flag is enabled.)
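For local testing, one common way to satisfy the CORS requirement is to serve data with a server that adds the `Access-Control-Allow-Origin` header. A minimal sketch using only the Python standard library (it serves the current directory to any origin, so it is for local experimentation only, not production):

```python
import http.server
import threading

class CORSRequestHandler(http.server.SimpleHTTPRequestHandler):
    """Static file handler that allows cross-origin reads."""

    def end_headers(self):
        # Allow any origin (e.g. a Neuroglancer client) to read responses.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# Bind to an ephemeral port on localhost and serve in a background thread.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), CORSRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print(f"Serving http://127.0.0.1:{server.server_address[1]}/ with CORS enabled")
```

A Neuroglancer client can then reference data under `http://127.0.0.1:<port>/` without the browser blocking the cross-origin responses.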