CLI-C Module
Overview
CLI-C (Classification and Localization of Impacted Canines) is a 3D Slicer extension designed for automated segmentation of dental Cone-Beam Computed Tomography (CBCT) images using a deep learning Mask R-CNN model. It streamlines dental segmentation workflows for intra-osseous teeth, facilitating accurate identification and visualization of impacted canines for orthodontic and dental surgical planning.
Key Features
- Automatic Segmentation: Precisely segments and classifies impacted teeth, determining their intra-osseous localization (buccal, bicortical, or palatal) from CBCT scans.
- Mask R-CNN Model Integration: Utilizes advanced deep learning architectures to provide accurate and robust segmentation.
- Progress Monitoring: Real-time progress bar and log updates during segmentation tasks.
- User-Friendly Interface: Simple GUI integrated seamlessly within 3D Slicer.
- Batch Processing: Supports segmentation of single files or batch processing of entire directories of CBCT scans.
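The single-file-or-directory input behavior described above can be sketched as a small helper. This is an illustrative sketch, not the module's actual code; the function name and the assumption that scans are `.nii`/`.nii.gz` files are mine.

```python
from pathlib import Path

def collect_cbct_scans(input_path):
    """Return the CBCT scans to process: either the single file given,
    or every .nii / .nii.gz file found in the given directory."""
    p = Path(input_path)
    if p.is_file():
        return [p]
    return sorted(
        q for q in p.iterdir()
        if q.name.endswith(".nii") or q.name.endswith(".nii.gz")
    )
```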
Installation
Prerequisites
- 3D Slicer 5.6+
- Python packages:
torch, torchvision, nibabel, numpy, scipy, requests
The module checks and automatically installs these dependencies upon first usage.
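The dependency check described above can be sketched as follows. This is a minimal illustration, not the extension's actual code: inside Slicer the install step would typically go through `slicer.util.pip_install`, and the function name here is hypothetical.

```python
import importlib.util

# Packages the CLIC module needs (from the README). For these six,
# the import name matches the pip package name.
REQUIRED = ["torch", "torchvision", "nibabel", "numpy", "scipy", "requests"]

def missing_packages(packages):
    """Return the packages that are not importable in the current environment."""
    return [p for p in packages if importlib.util.find_spec(p) is None]
```

On first use, the module would install whatever `missing_packages(REQUIRED)` reports before loading the model.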
Setup
Load the Module in 3D Slicer
- Open 3D Slicer.
- Navigate to Modules → Slicer Automated Dental Tools → CLIC.
Usage
Step 1: Load CBCT Data
- Use the file dialog to select your CBCT image or directory containing multiple CBCT images.
- Available sample data for testing: MN138.nii, UM06.nii
Step 2: Download/Select Model
- Click "Download Model" if using for the first time, or specify your existing model directory.
Step 3: Configure Your Output
- Click "Choose save folder" to select a specific output folder, or click "Save in input folder" to write the outputs to the same folder as the input.
- Enter the suffix you want appended to the output file names.
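The suffix option above inserts a user-chosen tag into each output file name. A sketch of that naming rule, assuming `.nii`/`.nii.gz` inputs (the function name and `_seg` suffix are illustrative, not the module's actual defaults):

```python
def output_name(input_name, suffix):
    """Insert a suffix before the extension, handling both .nii and .nii.gz."""
    for ext in (".nii.gz", ".nii"):
        if input_name.endswith(ext):
            return input_name[: -len(ext)] + suffix + ext
    raise ValueError(f"unexpected extension: {input_name}")
```

For example, with suffix `_seg`, `MN138.nii` would produce `MN138_seg.nii` in the chosen output folder.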
Step 4: Run Segmentation
- Click Predict to begin segmentation. If this is your first run, a pop-up may ask you to install some dependencies; click "Yes". The process takes a little time to start.
- Observe the progress bar and log output for real-time feedback.
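The per-scan progress lines mentioned above could look roughly like this. Purely illustrative: the function name and exact log format are assumptions, not the module's real output.

```python
def format_progress(done, total):
    """Render one progress-log line for a batch of scans (illustrative format)."""
    pct = int(100 * done / total)
    bar = "#" * (pct // 10) + "-" * (10 - pct // 10)
    return f"[{bar}] {pct:3d}% ({done}/{total} scans)"
```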
Step 5: Visualize Results
- Segmentations automatically load into Slicer's viewer with clear labeling.
- A color-coded legend (Buccal, Bicortical, Palatal) appears in slice views.
- Note: if your input folder contains scans with different fields of view, the slice views may turn black when you navigate between scans after processing ends. In that case, click the "Reset field of view" button.
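The three legend categories map segmentation label values to names and colors. A sketch of such a mapping, with hypothetical label values and RGB colors (the module's actual values and colors may differ):

```python
# Hypothetical label-value -> (name, RGB) mapping for the slice-view legend.
LEGEND = {
    1: ("Buccal", (0.8, 0.2, 0.2)),
    2: ("Bicortical", (0.2, 0.8, 0.2)),
    3: ("Palatal", (0.2, 0.2, 0.8)),
}

def legend_entry(label):
    """Return a human-readable legend line for a segmentation label value."""
    name, rgb = LEGEND[label]
    return f"{label}: {name} RGB{rgb}"
```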
Contributing
We welcome contributions! Please open issues or pull requests for enhancements and bug fixes.
Acknowledgments
@misc{matterport_maskrcnn_2017,
  title={Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow},
  author={Waleed Abdulla},
  year={2017},
  publisher={GitHub},
  journal={GitHub repository},
  howpublished={\url{https://github.com/matterport/Mask_RCNN}},
}
Thanks to the 3D Slicer community and open-source developers contributing to medical imaging software.
