# QSMnet & QSMnet<sup>+</sup>
- This code reconstructs quantitative susceptibility maps with a deep neural network (QSMnet and QSMnet<sup>+</sup>). QSMnet<sup>+</sup> covers a wider range of susceptibility than QSMnet by using a data augmentation approach. Data preprocessing, training, and inference code for QSMnet (.py) is available.
- The source data for training can be shared with academic institutions. Requests should be sent to snu.list.software@gmail.com. Each request requires internal approval from our Institutional Review Board (IRB), i.e. it takes time.
- Last update : 2022.06.09
## References
- QSMnet </br> J. Yoon, E. Gong, I. Chatnuntawech, B. Bilgic, J. Lee, W. Jung, J. Ko, H. Jung, K. Setsompop, G. Zaharchuk, E.Y. Kim, J. Pauly, J. Lee. Quantitative susceptibility mapping using deep neural network: QSMnet. Neuroimage. 2018 Oct;179:199-206. https://www.sciencedirect.com/science/article/pii/S1053811918305378
- QSMnet+ </br> W. Jung, J. Yoon, S. Ji, J. Choi, J. Kim, Y. Nam, E. Kim, J. Lee. Exploring linearity of deep neural network trained QSM: QSMnet+. Neuroimage. 2020 May; 116619. https://www.sciencedirect.com/science/article/pii/S1053811920301063</br>
- Review of deep learning QSM </br> W. Jung, S. Bollmann, J. Lee. Overview of quantitative susceptibility mapping using deep learning: Current status, challenges and opportunities. NMR in Biomedicine. 2020 Mar; e4292. https://doi.org/10.1002/nbm.4292
## Overview
(1) QSMnet

(2) QSMnet<sup>+</sup>

## Requirements
- Python 3.7
- Tensorflow 1.14.0
- NVIDIA GPU (CUDA 10.0)
- MATLAB R2015b
## Data acquisition
- Training data of QSMnet was acquired on a 3T MRI scanner (SIEMENS).
- 3D single-echo GRE scan with the following sequence parameters: FOV = 256 x 224 x 176 mm<sup>3</sup>, voxel size = 1 x 1 x 1 mm<sup>3</sup>, TR = 33 ms, TE = 25 ms, bandwidth = 100 Hz/pixel, flip angle = 15°.
## Manual
### First Time Only
- Clone this repository
  ```bash
  git clone https://github.com/SNU-LIST/QSMnet.git
  ```
- Download network </br> In the 'Checkpoints' directory,
  - For Linux users:
    ```bash
    sh download_network.sh
    ```
  - For Windows users, download from </br> https://drive.google.com/drive/folders/1E7e9thvF5Zu68Sr9Mg3DBi-o4UdhWj-8 </br> and unzip the files in 'Checkpoints/'
- Create conda environment
  ```bash
  conda create --name qsmnet -c conda-forge -c anaconda --file requirements.txt
  ```
### Phase processing
- Requirements
  - FSL (Ref: S.M. Smith. Fast robust automated brain extraction. Human Brain Mapping. 2002 Sep;17(3):143-155.)
  - STI Suite (Ref: W. Li, A.V. Avram, B. Wu, X. Xiao, C. Liu. Integrated Laplacian-based phase unwrapping and background phase removal for quantitative susceptibility mapping. NMR in Biomedicine. 2014 Dec;27(2):219-227.)
- Process flow
  - Extract magnitude and phase images from DICOMs
  - Brain extraction : BET (FSL)
  - Phase unwrapping : Laplacian phase unwrapping (STI Suite)
  - Background field removal : 3D V-SHARP (STI Suite)
- Usage
  - If you acquired data with a resolution different from 1 x 1 x 1 mm<sup>3</sup>, you need to interpolate the data to 1 mm isotropic resolution before phase processing (e.g. by zero-padding or truncating in the Fourier domain).
  - After 3D V-SHARP in MATLAB, run 'save_input_data_for_QSMnet.m'. 'test_input{sub_num}.mat' and 'test_mask{sub_num}.mat' files will be saved in 'Data/Test/Input/'.
    ```matlab
    save_input_data_for_QSMnet(TissuePhase, Mask, TE, B0, sub_num)
    % TissuePhase : result of 3D V-SHARP
    % Mask : result of 3D V-SHARP
    % TE (s)
    % B0 (T)
    % sub_num : subject number
    % Convert unit from Hz to ppm : field / (Sum(TE) * B0 * gyro) [ppm]
    ```
  - Save data with the same orientation and polarity as the val_input.mat, val_mask.mat, and val_label.mat files in the 'Data/Train/' folder. <img src="https://user-images.githubusercontent.com/29892433/64081330-5f2b9600-cd3a-11e9-9ff2-20e1e0ef2996.jpg" width="50%" height="50%">
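The unit conversion described in the comment above can be sketched in Python. This is a minimal illustration, not the repository's MATLAB helper: it assumes the tissue phase is in radians and takes gyro = 2π · 42.58 MHz/T, so that dividing by (TE · B0 · gyro) yields the relative field, scaled to ppm:

```python
import numpy as np

GYRO = 2 * np.pi * 42.58e6  # 1H gyromagnetic ratio in rad/s/T (assumption)

def phase_to_ppm(tissue_phase, te, b0):
    """Convert an unwrapped tissue phase map (assumed in radians) to field in ppm.

    te : echo time in seconds
    b0 : main field strength in tesla
    """
    return tissue_phase / (te * b0 * GYRO) * 1e6

# Toy example with the sequence parameters above (TE = 25 ms, B0 = 3 T):
phase = np.full((4, 4, 4), 0.5)  # constant 0.5 rad phase volume
ppm = phase_to_ppm(phase, te=25e-3, b0=3.0)
```

The exact scaling used by 'save_input_data_for_QSMnet.m' should be taken from the script itself; this snippet only illustrates the arithmetic in the comment.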
### Training process
- Activate qsmnet conda environment
  ```bash
  conda activate qsmnet
  ```
- Usage
  - Before training, the local field & susceptibility maps need to be divided into 64 x 64 x 64 patches
    ```bash
    python training_data_patch.py
    # PS : patch size
    # net_name : network name
    # sub_num : number of subjects to train
    # dir_num : number of directions per subject
    # patch_num : number of patches in [x, y, z]
    ```
  - Training process in python
    ```bash
    python train.py
    ```
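Conceptually, the patching step above slices each volume into overlapping 64<sup>3</sup> blocks whose start indices are spread evenly along each axis. A hypothetical NumPy sketch of such patch extraction (this is an illustration, not the code in training_data_patch.py):

```python
import numpy as np

def extract_patches(volume, patch_size=64, patch_num=(4, 4, 4)):
    """Split a 3-D volume into patch_num overlapping patches per axis.

    Start indices are spaced evenly so the last patch ends exactly at
    the volume boundary; patches overlap when the axis length is less
    than patch_num * patch_size.
    """
    starts = [
        np.linspace(0, volume.shape[d] - patch_size, patch_num[d]).astype(int)
        for d in range(3)
    ]
    patches = []
    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                patches.append(volume[x:x + patch_size,
                                      y:y + patch_size,
                                      z:z + patch_size])
    return np.stack(patches)

vol = np.zeros((176, 176, 160), dtype=np.float32)  # toy local-field volume
p = extract_patches(vol)  # 4*4*4 = 64 patches of 64 x 64 x 64
```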
### Inference
- Activate qsmnet conda environment
  ```bash
  conda activate qsmnet
  ```
- Usage
  ```bash
  python inference.py
  ```
- 'subject#<network_name>-epochs.mat' & 'subject#<network_name>-epochs.nii' files will be saved after QSMnet reconstruction.
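The saved .mat result can be inspected with SciPy. A hypothetical sketch (the filename and the variable key 'sus' are assumptions; list the actual keys in your file with scipy.io.whosmat):

```python
import numpy as np
import scipy.io as sio

# Write a toy stand-in for a QSMnet output file (key 'sus' is hypothetical).
sio.savemat('subject1_demo.mat', {'sus': np.zeros((176, 176, 160), dtype=np.float32)})

mat = sio.loadmat('subject1_demo.mat')  # check keys: sio.whosmat('subject1_demo.mat')
sus = mat['sus']                        # susceptibility map volume
```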
