SHiVAi: SHiVA preprocessing and deep learning segmentation workflow
<img src="src/shivai/postprocessing/logo_shiva.png" align="right" width="100px"/>The shivai package includes a set of image analysis tools for the study of covert cerebral small vessel diseases (cCSVD) with structural Magnetic Resonance Imaging. More specifically, it installs SHiVAi, the full pipeline for preprocessing, AI-based segmentation, and reporting of cCSVD biomarkers.
The SHiVAi segmentation tools currently cover Cerebral MicroBleeds (CMB), PeriVascular Spaces (PVS, also known as Virchow-Robin Spaces, VRS), White Matter Hyperintensities (WMH), and Lacunas. The 3D-Unet model weights are available separately at https://github.com/pboutinaud.
The tools cover preprocessing (image resampling and cropping to match the required size for the deep learning models, coregistration for multimodal segmentation tools), automatic segmentation, and reporting (QC and results). It accepts both Nifti and DICOM images as input (see the possible input structures for more details).
<br clear="right"/>cCSVD biomarkers detected with SHiVAi:

Intellectual property & Licensing
All the content, code, and resources of this repository have been registered at the French 'Association de Protection des Programmes' under the number IDDN.FR.001.410014.000.S.P.2024.000.21000.
The SHiVAi pipeline and all the repository content are provided under the GNU Affero General Public License version 3 (AGPLv3) or any later version. <img src="https://www.gnu.org/graphics/agplv3-155x51.png" align="right" width="100px"/>
Index
- Dependencies and hardware requirements
- Package Installation
- Running the process
- Results
- Data structures accepted by SHiVAi
- Additional info
Dependencies and hardware requirements
The SHiVAi application requires a Linux machine with a GPU (with 16GB of dedicated memory).
The deep-learning models rely on TensorFlow 2.7.13. The processing pipelines are implemented with Nipype and make use of ANTs (Copyright 2009-2023, ConsortiumOfANTS) for image registration and Quickshear (Copyright 2011, Nakeisha Schimke) for defacing. Quality-control reporting uses (among others) difference-of-Gaussian (DoG) contours from PyDog (Copyright 2021, Chris Rorden). Building and/or using the container image relies on Apptainer (https://apptainer.org). More details about Apptainer are given in the Apptainer image section and our Apptainer readme file.
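Before installing, you can sanity-check that the container runtime and a GPU are visible. This is a generic sketch using plain Linux commands; nothing here is SHiVAi-specific, and the commands degrade gracefully if a tool is missing:

```shell
# Check that Apptainer and an NVIDIA GPU are available
# (generic commands; adapt to your system)
command -v apptainer >/dev/null && apptainer --version || echo "Apptainer not found"
command -v nvidia-smi >/dev/null && nvidia-smi --query-gpu=name,memory.total --format=csv || echo "No NVIDIA driver / GPU detected"
```

If the GPU line reports less than 16GB of memory, the deep-learning steps may fail or be very slow.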
Beware: the current version of the pipeline (version 0.5.*) is no longer compatible with models saved as .h5 files. Get the latest models (in SavedModel format) to avoid problems.
Package Installation
Depending on your situation you may want to deploy SHiVAi in different ways:
- Fully contained process: The simplest approach. All the computation is done through the Apptainer image. It takes care of most of the local environment set-up, which simplifies the installation and ensures portability. However, the process is run linearly (no parallelization of the different steps).
- Traditional python install: Does not require Apptainer, as all the dependencies are installed locally. Useful for full control and development of the package; however, it may lead to problems due to the finicky nature of TensorFlow and CUDA.
- Mixed approach: Local installation of the package without TensorFlow (and thus without its installation troubles), but using the Apptainer image to run the deep-learning processes (which use TensorFlow). Ideal for parallelizing the processes and for use on HPC clusters.
Trained AI model
In all the mentioned situations, you will also need to obtain the trained deep-learning models you want to use (for PVS, WMH, CMB, and Lacuna segmentation). They are available at https://github.com/pboutinaud.
All the models must be stored in a common folder, whose path must also be set in the `model_path` variable of the config file (see point 4 of Fully contained process).
Let's consider that you stored them in `/myHome/myProject/Shiva_AI_models` for the following parts.
Each model must also be paired with a `model_info.json` file. These json files should be available in the same repository as their corresponding model.
Note that these json files are where the pipeline will look for the model paths, so you may have to edit the paths manually. By default, the path to each model is relative to the folder storing the model it is paired with (e.g. `brainmask/detailed-name-of-the-model.keras` for the brain-mask model used below, stored in a `brainmask` folder).
This way, if you simply extract the downloaded models into the common model folder, the `model_info.json` files should already be correct.
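To cross-check the relative paths written in each `model_info.json` against what is actually on disk, you can list the model files found under the common folder. This is a minimal sketch using the example path from this README; it assumes models are shipped either as `.keras` files or as TensorFlow SavedModel directories (which contain a `saved_model.pb` file):

```shell
# List model files relative to the common model folder
# (MODEL_DIR is the example path used in this README)
MODEL_DIR=/myHome/myProject/Shiva_AI_models
find "$MODEL_DIR" -name "*.keras" -o -name "saved_model.pb" 2>/dev/null \
    | sed "s|^$MODEL_DIR/||"
```

Each printed path (minus the trailing `saved_model.pb` for SavedModel folders) should match a path declared in one of the descriptor files.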
Once you have your models and json files properly stored and set up, update the config file with the correct path for each "descriptor" file (i.e. the json files).
⚠️ If the `model_info.json` is not available for some reason, refer to the Create missing json file section, and don't forget to update the config file if you use one.
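Put together, the model-related entries of the config file might look like the sketch below. The key names (`model_path`, `apptainer_image`, `PVS_descriptor`, `WMH_descriptor`) come from this README, but the exact json file locations are placeholders, so use the ones shipped with your models:

```yaml
# Sketch of the model-related entries in the config file
# (descriptor file locations below are placeholders)
model_path: /myHome/myProject/Shiva_AI_models
apptainer_image: /myHome/myProject/shivai.sif
PVS_descriptor: /myHome/myProject/Shiva_AI_models/PVS/model_info.json
WMH_descriptor: /myHome/myProject/Shiva_AI_models/WMH/model_info.json
```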
Brain masking
SHiVAi relies on access to brain masks in order to crop the input volumes to the proper dimensions for the AI models. More details are available below, but if you do not have such masks on hand, SHiVAi can create them automatically using a dedicated AI model. To use this option, you will have to download that model and store it (after extracting/unzipping) in the same folder as the other AI models. The brain-masking model can be found at: https://cloud.efixia.com/sharing/Mn9LB5mIR
Fully contained process (Apptainer)
1. You will need to have Apptainer installed (previously known as Singularity): https://apptainer.org/docs/user/main/quick_start.html

2. Download the Apptainer image (.sif file) from https://cloud.efixia.com/sharing/bbWPx1QAZ (it may take a while, the image weighs about 4GB). Let's assume you saved it as `/myHome/myProject/shivai.sif`.

3. From the SHiVAi repository (where you are reading this), navigate to the apptainer folder and download `run_shiva.py` and `config_example.yml`.

4. You now need to prepare this `config_example.yml`; it will hold diverse parameters as well as the paths to the AI models and to the Apptainer image. There, you should change the placeholder paths for `model_path` and `apptainer_image` to your own paths (e.g. `/myHome/myProject/Shiva_AI_models` and `/myHome/myProject/shivai.sif`). You will also have to set the model descriptors (like `PVS_descriptor` or `WMH_descriptor`) to the path of the `model_info_*.json` files mentioned above.
    Other than the "descriptor" paths, you shouldn't have to modify any other setting in the `parameters` part.
    For the rest of this readme, let's assume that you now have the config file prepared and saved as `/myHome/myProject/myConfig.yml`.

5. Finally, set up a minimal Python virtual environment with the `pyyaml` package installed.
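The virtual-environment step can be done with a few standard commands (the path is an example; any virtual-environment tool works just as well):

```shell
# Minimal virtual environment for the run_shiva.py launcher
python3 -m venv /myHome/myProject/shiva_venv
. /myHome/myProject/shiva_venv/bin/activate
pip install pyyaml
```

Remember to activate this environment again in any new terminal session before launching the pipeline.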
Next, see Running a contained SHiVAi
Fully contained process (Docker)
SHiVAi was mostly developed to run with Apptainer, but we also provide Dockerfiles to use Docker as a container solution. It will, however, require a little more work on your part to run the process.
- Install Docker and make sure you have root access (needed to build and run the images).

- After cloning or downloading the SHiVAi project, navigate to the SHiVAi repository (where you are reading this), i.e. the folder containing the Dockerfile, and run (replace `myId` with your username or something equivalent):

  ```
  docker build --rm -t myId/shivai .
  ```

- If you want to use the SynthSeg parcellation system, you will also need a separate Docker image for SynthSeg. To build it, follow the related section in the container-related readme.

- To run SHiVAi from Docker, check the Docker section of the guide.
Traditional python install
This type of install is most
