# Brainchop: In-browser 3D MRI rendering and segmentation

<div align="center">
<a href="https://neuroneural.github.io/brainchop">
<img width="100%" src="https://github.com/neuroneural/brainchop/releases/download/v3.4.0/Banner.png">
</a>
Frontend for Neuroimaging. Open Source.
</div> <br> <img src="https://github.com/neuroneural/brainchop/blob/master/css/logo/brainchop_logo.png" width="25%" align="right"> <p align="justify"> <b><a href="https://neuroneural.github.io/brainchop/" style="text-decoration: none">Brainchop</a></b> brings automatic 3D MRI volumetric segmentation to neuroimaging by running a lightweight deep learning model (e.g., <a href="https://medium.com/pytorch/catalyst-neuro-a-3d-brain-segmentation-pipeline-for-mri-b1bb1109276a" target="_blank" style="text-decoration: none">MeshNet</a>) in the web browser, so inference happens entirely on the user's side. </p> <p align="justify"> Brainchop is freely available, and its pure JavaScript implementation is released as open source. The user interface (UI) provides a web-based end-to-end solution for 3D MRI segmentation. The <b><a href="https://github.com/niivue/niivue" style="text-decoration: none">NiiVue</a></b> viewer is integrated with the tool for MRI visualization. For more information about Brainchop, please refer to the detailed <b><a href="https://github.com/neuroneural/brainchop/wiki/" style="text-decoration: none">Wiki</a></b> and this <b><a href="https://trendscenter.org/in-browser-3d-mri-segmentation-brainchop-org/" style="text-decoration: none">Blog</a></b>. For questions or to share ideas, please use our <b><a href="https://github.com/neuroneural/brainchop/discussions/" style="text-decoration: none">Discussions</a></b> board.
</p>
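Because all computation happens client-side, running a segmentation boils down to loading a pre-trained TensorFlow.js model and calling it on a packed MRI volume. The sketch below illustrates that general pattern; the model path, the 256³ input shape, and the `segmentVolume` helper are illustrative assumptions, not Brainchop's actual pipeline code.

```javascript
// A minimal sketch of client-side inference with TensorFlow.js.
// The model path, input shape, and function name are illustrative
// assumptions, not Brainchop's actual pipeline code.
import * as tf from '@tensorflow/tfjs';

async function segmentVolume(voxels) {
  // Load a pre-trained layers-format model (model.json + weight shards).
  const model = await tf.loadLayersModel('./models/model.json'); // hypothetical path

  // Pack the MRI volume as a 5D tensor: [batch, depth, height, width, channels].
  const input = tf.tensor(voxels, [1, 256, 256, 256, 1]);

  // Inference runs entirely in the browser; the scan never leaves the machine.
  const logits = model.predict(input);

  // Per-voxel label = argmax over the class dimension.
  const labels = tf.argMax(logits, -1);
  const out = await labels.data();

  tf.dispose([input, logits, labels]);
  return out; // typed array of per-voxel class indices
}
```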
<div align="center">
Brainchop high-level architecture
</div>

<div align="center">
MeshNet deep learning architecture used for inference with Brainchop (MeshNet <a href="https://arxiv.org/pdf/1612.00940.pdf" target="_blank" style="text-decoration: none">paper</a>)
</div>

## MeshNet Example
This basic example provides an overview of the training pipeline for the MeshNet model; a minimal sketch of the architecture follows.
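For orientation, here is a MeshNet-style model expressed with the TensorFlow.js layers API. The dilation schedule follows the MeshNet paper; the channel width, subvolume input shape, and compile settings are illustrative assumptions rather than the project's exact training configuration.

```javascript
// A MeshNet-style model sketched with the TensorFlow.js layers API.
// The dilation schedule follows the MeshNet paper; channel width,
// input shape, and compile settings are illustrative assumptions.
import * as tf from '@tensorflow/tfjs';

function buildMeshNet(numClasses, channels = 21) {
  const model = tf.sequential();

  // A stack of 3x3x3 dilated convolutions: dilation grows then resets,
  // widening the receptive field without any pooling or downsampling,
  // which keeps the output at full voxel resolution.
  const dilations = [1, 1, 1, 2, 4, 8, 1];
  dilations.forEach((rate, i) => {
    model.add(tf.layers.conv3d({
      filters: channels,
      kernelSize: 3,
      dilationRate: rate,
      padding: 'same',
      activation: 'relu',
      ...(i === 0 ? { inputShape: [38, 38, 38, 1] } : {}), // subvolume size is an assumption
    }));
  });

  // Final 1x1x1 convolution maps features to per-voxel class scores.
  model.add(tf.layers.conv3d({
    filters: numClasses,
    kernelSize: 1,
    padding: 'same',
    activation: 'softmax',
  }));

  model.compile({ optimizer: 'adam', loss: 'categoricalCrossentropy' });
  return model;
}
```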
## Live Demo
To see Brainchop v4 in action, visit <a href="https://neuroneural.github.io/brainchop/" target="_blank" style="text-decoration: none">brainchop.org</a>. Version 3 is still available <a href="https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none">here</a>.
## Updates
<div align="center"> <img src="https://github.com/neuroneural/brainchop/releases/download/v4.0.0/Brainchop_Niivue.png" width="100%">Brainchop <a href= "https://neuroneural.github.io/brainchop/" target="_blank" style="text-decoration: none"> v4 </a> with <a href= "https://github.com/niivue/niivue" target="_blank" style="text-decoration: none"> NiiVue</a> viewer
</div> <br> <div align="center"> <img src="https://github.com/neuroneural/brainchop/releases/download/v3.4.0/BrainchopMoreRobustModels.gif" width="60%">Brainchop <a href= "https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none"> v3 </a> with more robust models
</div> <br> <div align="center">
Brainchop <a href="https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none">v1.4.0 - v3.4.0</a> rendering an MRI NIfTI file in 3D
</div> <br> <div align="center">
Brainchop <a href= "https://neuroneural.github.io/brainchop/v3" target="_blank" style="text-decoration: none"> v1.3.0 - v3.4.0 </a> rendering segmentation output in 3D
</div>

## News!
- The Brainchop v2.2.0 paper was accepted at the 21st IEEE International Symposium on Biomedical Imaging (ISBI 2024). A longer version is available on arXiv.
- The Brainchop paper was published in the Journal of Open Source Software (JOSS) on March 28, 2023.
- The Brainchop abstract was accepted for poster presentation at the 2023 OHBM Annual Meeting.
- A Brainchop 1-page abstract and poster were accepted at the 20th IEEE International Symposium on Biomedical Imaging (ISBI 2023).
- Brainchop was invited to the PyTorch flagship conference, New Orleans, Louisiana (Dec 2022).
- Brainchop was invited to TensorFlow.js Show & Tell episode #7 (Jul 2022).
## Citation
The Brainchop paper (v2.1.0) was published on March 28, 2023, in the Journal of Open Source Software (JOSS).
For APA style, the paper can be cited as:
<br>Masoud, M., Hu, F., & Plis, S. (2023). Brainchop: In-browser MRI volumetric segmentation and rendering. Journal of Open Source Software, 8(83), 5098. https://doi.org/10.21105/joss.05098
For the BibTeX format used by some publishers:
@article{Masoud2023,
  doi = {10.21105/joss.05098},
  url = {https://doi.org/10.21105/joss.05098},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {83},
  pages = {5098},
  author = {Mohamed Masoud and Farfalla Hu and Sergey Plis},
  title = {Brainchop: In-browser MRI volumetric segmentation and rendering},
  journal = {Journal of Open Source Software}
}
<br>
For MLA style:
<br>Masoud, Mohamed, Farfalla Hu, and Sergey Plis. ‘Brainchop: In-Browser MRI Volumetric Segmentation and Rendering’. Journal of Open Source Software, vol. 8, no. 83, The Open Journal, 2023, p. 5098, https://doi.org/10.21105/joss.05098.
For IEEE style:
<br>M. Masoud, F. Hu, and S. Plis, ‘Brainchop: In-browser MRI volumetric segmentation and rendering’, Journal of Open Source Software, vol. 8, no. 83, p. 5098, 2023. doi:10.21105/joss.05098
## Contribution and Authorship Guidelines
If you modify or extend Brainchop in a derivative work intended for publication (such as a research paper or software tool), please cite and acknowledge the original Brainchop project and its authors. Proper acknowledgment should include the following:
"Brainchop, originally developed by Mohamed Masoud and Sergey Plis (2023), was used in the development of this work."
We also request that significant contributions to derivative works be recognized by including original authors as co-authors, where appropriate.
## Funding
This work was funded by NIH grant RF1MH121885, with additional support from NIH grants R01MH123610 and R01EB006841 and NSF grant 2112455.