Ai4elife
This data-centric AI repository implements a robust deep learning method (LFBNet) for fully automated tumor segmentation in whole-body [18]F-FDG PET/CT images.
News!
Please refer to this link for the new 2D version of the method, trained on more data (~2,000 cases), and for the 3D version; together they provide both 2D MIP and 3D segmentations.
AI4eLIFE: Artificial Intelligence for Efficient Learning-based Image Feature Extraction.
<a name="introduction"> </a> 📑 Fully automated tumor lesion segmentation in whole-body PET/CT images using data-centric artificial intelligence, with fully automatic calculation of clinical endpoints.
Introduction: Baseline 18F-FDG PET/CT image-derived features have shown predictive value in diffuse large B-cell lymphoma (DLBCL) patients. Notably, total metabolic tumor volume (TMTV) and tumor dissemination (Dmax) characterize tumor burden and spread. However, calculating TMTV and Dmax requires tumor volume delineation over the whole-body 3D 18F-FDG PET/CT images, which is prone to observer variability and complicates the use of these quantitative features in clinical routine. We therefore hypothesized that tumor burden and spread could be evaluated automatically from only two PET Maximum Intensity Projection (MIP) images, corresponding to the coronal and sagittal views, thereby easing the calculation and validation of these features.
Here, we developed a data-driven AI to automatically calculate surrogate biomarkers for DLBCL patients. Briefly, the 3D 18F-FDG PET images are first projected in the coronal and sagittal directions. The projected PET MIP images are then fed to an AI algorithm that automatically segments lymphoma regions. From the segmented images, the surrogate TMTV (sTMTV) and surrogate Dmax (sDmax) are calculated and evaluated as predictors of overall survival (OS) and progression-free survival (PFS).
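The projection step described above amounts to a max-reduction of the 3D PET volume along one anatomical axis, and sTMTV is simply the segmented area on the MIP times the pixel area. A minimal numpy sketch of both ideas (the axis conventions and function names are illustrative assumptions, not the repository's code):

```python
import numpy as np

def mip_projections(pet_volume):
    """Coronal and sagittal Maximum Intensity Projections of a 3D PET
    volume, assumed here to be ordered (z, y, x)."""
    coronal = pet_volume.max(axis=1)   # collapse the anterior-posterior axis
    sagittal = pet_volume.max(axis=2)  # collapse the left-right axis
    return coronal, sagittal

def surrogate_tmtv(mask_2d, pixel_area_mm2):
    """Surrogate TMTV: segmented pixel count on a MIP mask times pixel area."""
    return mask_2d.sum() * pixel_area_mm2

# Toy example: a 4x4x4 volume with two "hot" voxels.
vol = np.zeros((4, 4, 4))
vol[1, 2, 3] = 5.0
vol[3, 0, 1] = 7.0
cor, sag = mip_projections(vol)
print(cor.shape, sag.shape)           # (4, 4) (4, 4)
print(surrogate_tmtv(cor > 0, 16.0))  # 2 hot pixels * 16 mm^2 = 32.0
```

A real pipeline would read the NIfTI volume (e.g. with nibabel), apply SUV scaling, and take the pixel spacing from the image header rather than hard-coding it.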
Figure 1: Flow diagram of the proposed data-centric AI to measure prognostic biomarkers automatically.
Results: Tested on an independent cohort (174 patients), the AI yielded a median Dice score of 0.86 (IQR: 0.77-0.92), 87.9% sensitivity (IQR: 74.9%-94.4%), and 99.7% specificity (IQR: 99.4%-99.8%). The PET MIP AI-driven surrogate biomarkers (sTMTV and sDmax) were highly correlated with the 3D 18F-FDG PET-driven biomarkers (TMTV and Dmax) in both the training-validation cohort and the independent testing cohort. These PET MIP AI-driven features can be used to predict OS and PFS in DLBCL patients, equivalently to the expert-driven 3D features.
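The three reported metrics follow the standard confusion-matrix definitions and can be reproduced for any predicted/reference mask pair. A minimal sketch (this is not the repository's own evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice, sensitivity, and specificity for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    tn = np.logical_and(~pred, ~gt).sum()  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity

# Toy 2x4 masks: one lesion pixel found, one missed.
gt = np.array([[1, 1, 0, 0], [0, 0, 0, 0]])
pred = np.array([[1, 0, 0, 0], [0, 0, 0, 0]])
d, sen, spe = segmentation_metrics(pred, gt)
print(round(d, 3), round(sen, 3), round(spe, 3))  # 0.667 0.5 1.0
```

Note the very high specificity in the paper's results is expected: lesions occupy a tiny fraction of a whole-body image, so true negatives dominate.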
Deep learning model: We adapted LFBNet, a robust deep learning-based medical image segmentation method. Please refer to the paper for details, and cite it if you use LFBNet in your research.
Integrated framework: The whole pipeline, including the generation of PET MIPs, automatic segmentation, and sTMTV and sDmax calculation, is designed to run on personal/desktop computers or clusters. It can greatly facilitate the analysis of PET MIP-based features, supporting the potential translation of these features into clinical practice.
Table of contents
- Summary
- Table of Contents
- Required folder structure
- Installation
- Usage
- Results
- FAQ
- Citations
- Adapting LFBNet for other configurations or segmentation tasks
- Useful resources
- Acknowledgements
📁 Required folder structure
Please provide all data in a single directory; the method automatically analyses all given data batch-wise.
To run the program, you only need the patients' PET scans (CT is not required) in NIfTI format, with the PET images expressed in SUV units. If your images have already been segmented, you can also provide the mask (ground truth, gt) as a binary image in NIfTI format. If ground truth data are provided, the program will report the Dice score, sensitivity, and specificity between the expert reference segmentation (gt) and the model's predicted segmentation. If the ground truth is NOT AVAILABLE, the model will only predict the segmentation.
A typical data directory might look like:
```
|-- main_folder                                 <-- The main folder containing all patient folders (any name)
|   |-- patient_folder_1                        <-- Individual patient folder with a unique id
|   |   |-- pet                                 <-- The PET folder for the .nii SUV file
|   |   |   |-- name.nii or name.nii.gz         <-- The PET image in NIfTI format (any name)
|   |   |-- gt                                  <-- The corresponding ground truth folder
|   |   |   |-- name.nii or name.nii.gz         <-- The ground truth (gt) image in NIfTI format (any name)
|   |-- patient_folder_2
|   |   |-- pet
|   |   |   |-- name.nii or name.nii.gz
|   |   |-- gt
|   |   |   |-- name.nii or name.nii.gz
|   .
|   .
|   .
|   |-- patient_folder_N
|   |   |-- pet
|   |   |   |-- name.nii or name.nii.gz
|   |   |-- gt
|   |   |   |-- name.nii or name.nii.gz
```
Note: the folder name for PET images must be pet and for the ground truth gt. All other folder and sub-folder names can be anything.
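Before running the pipeline, a layout like the one above can be sanity-checked with a few lines of Python. This is a hypothetical helper, not part of ai4elife; it only enforces the rule just stated (a `pet` folder with one NIfTI file per patient, `gt` optional):

```python
from pathlib import Path

def check_data_dir(main_folder):
    """Report patient folders missing the required pet/*.nii(.gz) file."""
    problems = []
    for patient in sorted(p for p in Path(main_folder).iterdir() if p.is_dir()):
        pet_dir = patient / "pet"
        pet_files = list(pet_dir.glob("*.nii*")) if pet_dir.is_dir() else []
        if not pet_files:
            problems.append(f"{patient.name}: missing pet/*.nii(.gz)")
        gt_dir = patient / "gt"
        has_gt = gt_dir.is_dir() and any(gt_dir.glob("*.nii*"))
        print(f"{patient.name}: pet={bool(pet_files)}, gt={has_gt}")
    return problems
```

Running `check_data_dir("main_folder")` prints one status line per patient and returns a list of problems, which should be empty before you launch the pipeline.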
⚙️ Installation <a name="installation"> </a>
Please read the documentation before opening an issue!
**Download/clone the code to your local computer**

- `git clone https://github.com/KibromBerihu/ai4elife.git`
- Alternatively: go to https://github.com/KibromBerihu/ai4elife.git >> [Code] >> Download ZIP file.
**To install in a virtual environment**

- We recommend creating a virtual environment. Please refer to THIS regarding how to create a virtual environment using conda.
- Open a terminal or the Anaconda Prompt.
- Change the working directory to the downloaded and unzipped ai4elife folder.
- Create the virtual environment from the provided environment.yml:

  `conda env create -f environment.yml`

- The virtual environment must be activated before executing any script:

  `conda activate myenv`

- Verify that the virtual environment was installed correctly:

  `conda info --envs`

  If you can see a virtual environment named myenv, well done: the virtual environment and dependencies were installed successfully.
**Using the Docker image: building the image from the Dockerfile [REPRODUCIBLE]**

- This assumes you already have Docker Desktop installed. For more information, kindly refer to THIS.
- Make sure to change to the downloaded and unzipped ai4elife directory.
- Run the following command to create a Docker image named `<DockerImageName>:<Tag>`:

  `docker build -t <DockerImageName>:<Tag> .`
💻 Usage
This package has two use cases. The first, the "easy use" case, segments tumor regions and then calculates the surrogate biomarkers (sTMTV and sDmax) on a given test dataset using the pre-trained weights. The second is transfer learning or retraining from scratch on your own dataset.
Easy use: testing mode <a name="easy-use-testing-mode"> </a>
Please make sure that you organized your data as in the Required folder structure.
For reproducibility and better accuracy, please use OPTION 2.
- Option 1: Using the virtual environment:
  - Change to the source directory: `cd path/to/ai4elife/`
  - Activate the virtual environment: `conda activate myenv`