# GraphEcho
ICCV 2023, "GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation"
## :hammer: PostScript

:smile: This project is the PyTorch implementation of [paper];

:laughing: Our experimental platform is configured with <u>one RTX 3090 (CUDA >= 11.0)</u>;

:blush: Currently, this code is available for the public datasets <u>CAMUS and EchoNet</u>;

:smiley: For code related to the CardiacUDA dataset:

:eyes: The code is now available at:

```
..\datasets\cardiac_uda.py
```

:heart_eyes: For access to the CardiacUDA dataset:

:eyes: Please follow the link to access our dataset:
## :computer: Installation

- You need to build the relevant environment first; please refer to: requirements.yaml
- Install the environment:

```shell
conda env create -f requirements.yaml
```

- We recommend using Anaconda to set up an independent virtual environment, with Python >= 3.8.3;
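After activating the environment, you can sanity-check that the interpreter meets the Python >= 3.8.3 recommendation with a short snippet (this check is our suggestion, not part of the released code):

```python
import sys

# The project recommends Python >= 3.8.3; fail fast if the active
# interpreter in the conda environment is older.
assert sys.version_info >= (3, 8, 3), f"Python {sys.version} is too old"
print("Python version OK")
```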
## :blue_book: Data Preparation

### 1. EchoNet & CAMUS

- This project provides a use case for the echocardiogram video segmentation task;
- The hyper-parameter settings for the datasets can be found in train.py, where you can modify them;
- For different tasks, the composition of the datasets differs significantly, so there is no repetition in this file;
### 1.1. Download CAMUS

:speech_balloon: For details of CAMUS, please refer to: https://www.creatis.insa-lyon.fr/Challenge/camus/index.html/.

- Download & unzip the dataset. The CAMUS dataset is organized into /testing & /training.
- The source code for loading the CAMUS dataset is in ..\datasets\camus.py; modify the dataset path in ..\train_camus_echo.py.
- New Version: We have updated infos.npy in our newly released code.
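As a rough illustration of the /training & /testing layout above, a loader might first enumerate the patient folders in each split. This is only a sketch: the function name and the assumption that each patient is one subfolder are ours, not taken from camus.py.

```python
from pathlib import Path

def list_camus_patients(root):
    """Enumerate patient folders in the CAMUS /training and /testing splits.

    Assumes `root` is the unzipped CAMUS directory; the actual per-patient
    file parsing is handled by the released camus.py, not sketched here.
    """
    splits = {}
    for split in ("training", "testing"):
        split_dir = Path(root) / split
        # Each patient is assumed to be one subdirectory of the split.
        splits[split] = sorted(p.name for p in split_dir.iterdir() if p.is_dir())
    return splits
```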
### 1.2. Download EchoNet

:speech_balloon: For details of EchoNet, please refer to: https://echonet.github.io/dynamic/.

- Download & unzip the dataset. The EchoNet dataset consists of /Video, FileList.csv & VolumeTracings.csv.
- The source code for loading the EchoNet dataset is in ..\datasets\echo.py; modify the dataset path in ..\train_camus_echo.py.
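Since EchoNet ships its metadata in FileList.csv, a minimal reader could look like the following. This is a hedged sketch: echo.py's actual parsing may differ, and the column names (FileName, Split) follow the public EchoNet-Dynamic release.

```python
import csv

def read_echonet_filelist(csv_path):
    """Parse EchoNet's FileList.csv into a list of row dicts."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def split_filenames(rows, split):
    """Return the FileName column for one split (e.g. TRAIN/VAL/TEST)."""
    return [r["FileName"] for r in rows if r.get("Split") == split]
```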
### 2. CardiacUDA

- Please access the dataset through: XiaoweiXu's Github
- Follow the instructions and download.
- After downloading, unzip the datasets.
- Modify your code in ..\datasets\cardiac_uda.py, and set the infos and dataset path in ..\train_cardiac_uda.py. The layers of the infos dict should be:

```
dict{
    center_name: {
        file: {
            views_images: {image_path},
            views_labels: {label_path},
        }
    }
}
```
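A dict with that nesting can be produced programmatically. The following is only a sketch under assumed folder names (views_images / views_labels as subdirectories of each case); the released cardiac_uda.py may build infos differently.

```python
from pathlib import Path

def build_infos(root):
    """Build the nested infos dict: center_name -> file -> view path lists.

    Assumes a layout root/center_name/file/{views_images,views_labels}/...;
    the real CardiacUDA folder names may differ.
    """
    infos = {}
    for center in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        infos[center.name] = {}
        for case in sorted(p for p in center.iterdir() if p.is_dir()):
            infos[center.name][case.name] = {
                "views_images": sorted(str(p) for p in (case / "views_images").glob("*")),
                "views_labels": sorted(str(p) for p in (case / "views_labels").glob("*")),
            }
    return infos
```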
## :feet: Training

- In this framework, after the parameters are configured in train_cardiac_uda.py and train_camus_echo.py, you only need to run:

```shell
python train_cardiac_uda.py
```

and

```shell
python train_camus_echo.py
```

- You can also start distributed training.
- Note: Please set the number of GPUs you need and their ids in the parameter "enable_GPUs_id".
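How "enable_GPUs_id" is consumed depends on the training scripts; purely as an illustration, a multi-GPU id list is often accepted as an integer list on the command line. The flag spelling and default below are hypothetical, not taken from train_cardiac_uda.py.

```python
import argparse

# Hypothetical sketch: the actual flag and defaults in the released
# training scripts may differ from this illustration.
parser = argparse.ArgumentParser()
parser.add_argument("--enable_GPUs_id", type=int, nargs="+", default=[0],
                    help="ids of the GPUs to use, e.g. --enable_GPUs_id 0 1")
args = parser.parse_args(["--enable_GPUs_id", "0", "1"])
print(args.enable_GPUs_id)  # a list of the requested GPU ids
```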
## :rocket: Code Reference

- https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
- https://github.com/chengchunhsu/EveryPixelMatters

## :rocket: Updates

- Ver 1.0 (PyTorch)
- Project created by Jiewen Yang: jyangcu@connect.ust.hk
