CoCoNet
[IJCV 2024] CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion
Implementation of our work:
Jinyuan Liu*, Runjia Lin*, Guanyao Wu, Risheng Liu, Zhongxuan Luo, and Xin Fan<sup>📭</sup>, "CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion", International Journal of Computer Vision (IJCV), 2024.
[Paper] [arXiv]
Introduction

Check out our recent related works 🆕:

- 🔥 ICCV'23 Oral: Multi-interactive Feature Learning and a Full-time Multi-modality Benchmark for Image Fusion and Segmentation [paper] [code]
- 🔥 CVPR'22 Oral: Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [paper] [code]
- 🔥 IJCAI'23: Bi-level Dynamic Learning for Jointly Multi-modality Image Fusion and Beyond [paper] [code]
Installation
Clone repo:
git clone https://github.com/runjia0124/CoCoNet.git
cd CoCoNet
The code is tested with Python 3.8, PyTorch 1.9.0, and CUDA 11.1 on an NVIDIA GeForce RTX 2080; you may need different versions depending on your GPU.
conda create -n coconet python=3.8
conda activate coconet
pip install -r requirements.txt
Quick Test
bash ./scripts/test.sh
or
python main.py \
--test --use_gpu \
--test_vis ./TNO/VIS \
--test_ir ./TNO/IR
To test on your own data, give each infrared-visible image pair the same file name in the two directories; otherwise you will need to edit the data-loading code.
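For example, a custom test set could be laid out as below (the `my_testset` directory and file names are illustrative, not part of the repo):

```shell
# Illustrative layout: each IR/VIS pair shares the same file name.
mkdir -p my_testset/VIS my_testset/IR
touch my_testset/VIS/001.png my_testset/IR/001.png   # one example pair
ls my_testset/VIS my_testset/IR
```

Then point the test flags at those folders:

python main.py --test --use_gpu --test_vis ./my_testset/VIS --test_ir ./my_testset/IR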
Training
Data
Get training data from [Google Drive]
Launch visdom
python -m visdom.server
Main stage training
python main.py --train --c1 0.5 --c2 0.75 --epoch 30 --bs 30 \
--logdir <checkpoint_path> --use_gpu
Finetuning with contrastive loss
python main.py --finetune --c1 0.5 --c2 0.75 --epoch 2 --bs 30 \
--logdir <checkpoint_path> --use_gpu
Results
Visual inspection

Down-stream task

Contact
If you have any questions about the code, please open an issue or email us:
Runjia Lin (linrunja@gmail.com) or Jinyuan Liu (atlantis918@hotmail.com).
Citation
If you find this paper/code helpful, please consider citing us:
@article{liu2023coconet,
title={Coconet: Coupled contrastive learning network with multi-level feature ensemble for multi-modality image fusion},
author={Liu, Jinyuan and Lin, Runjia and Wu, Guanyao and Liu, Risheng and Luo, Zhongxuan and Fan, Xin},
journal={International Journal of Computer Vision},
pages={1--28},
year={2023},
publisher={Springer}
}