MMFT
(TIP 2022) Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction
<br /> <p align="center"> <img src="./image/logo-1.png" alt="Logo" width="150" height="auto"> <p align="center"> <h1 align="center">Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction</h1> <p align="center"> IEEE TIP, 2022 <br /> <a href="https://xiaoqi-zhao-dlut.github.io/"><strong>Xiaoqi Zhao</strong></a> · <a href="https://lartpang.github.io/"><strong>Youwei Pang</strong></a> · <a href="https://scholar.google.com/citations?hl=zh-CN&user=XGPdQbIAAAAJ"><strong>Lihe Zhang</strong></a> · <a href="https://scholar.google.com/citations?hl=zh-CN&user=D3nE0agAAAAJ"><strong>Huchuan Lu</strong></a> </p> <p align="center"> <a href='https://arxiv.org/pdf/2203.04895v2'> <img src='https://img.shields.io/badge/Paper-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='arXiv PDF'> </a> </p> <br />

## Motivation - Our High-quality Depth Prediction vs. Previous Low-quality Depth Inputs
<p align="center"> <img src="./image/depth_rgbd_sod.png"/> <br /> </p>

## Motivation - Depth-free Networks
<p align="center"> <img src="./image/depth-free.png"/> <br /> </p>

## Pipeline - Multi-task Learning Framework (Depth, Saliency, Contour)
<p align="center"> <img src="./image/pipeline.png"/> <br /> </p>

## Potential - Predicted Depth Maps on RGB SOD Datasets
<p align="center"> <img src="./image/depth_rgb_sod.png"/> <br /> </p>

## Potential - Helping Existing Depth-based Methods to Obtain Additional Gains
<p align="center"> <img src="./image/depth_gain.png"/> <br /> </p>

## Datasets
## Trained Models
- MMFT_RES101_duts_njud_nlpr_jointT GitHub Release
- MMFT_RES101_finetune_njud_nlpr GitHub Release
- MMFT_RES50_finetune_njud_nlpr GitHub Release
- MMFT_RES50_duts_njud_nlpr_jointT GitHub Release
## Prediction Maps
- Depth_prediction GitHub Release
- Saliency_prediction GitHub Release
## Evaluation Tools
- https://github.com/Xiaoqi-Zhao-DLUT/PySegMetric_EvalToolkit
- https://github.com/Xiaoqi-Zhao-DLUT/MMFT/blob/main/Depth_eva.py
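The linked toolkit and script are the recommended way to evaluate; as a rough, self-contained illustration of the core metrics involved (the function names below are my own, not the toolkit's API), saliency MAE and depth RMSE can be sketched with NumPy:

```python
import numpy as np

def saliency_mae(pred, gt):
    """Mean Absolute Error between a predicted saliency map and its
    ground truth, with both maps normalized to [0, 1]."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    return np.abs(pred - gt).mean()

def depth_rmse(pred, gt, mask=None):
    """Root Mean Squared Error between predicted and ground-truth depth.
    An optional boolean mask restricts the metric to valid depth pixels."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    if mask is not None:
        pred, gt = pred[mask], gt[mask]
    return np.sqrt(np.mean((pred - gt) ** 2))

# Toy 2x2 maps: one of four pixels disagrees by 1.0.
pred = np.array([[0.0, 1.0], [1.0, 0.0]])
gt = np.array([[0.0, 1.0], [0.0, 0.0]])
print(saliency_mae(pred, gt))  # 0.25
print(depth_rmse(pred, gt))    # 0.5
```

Real evaluations typically load the prediction and ground-truth maps with an image library, resize predictions to the ground-truth resolution, and average each metric over a whole dataset.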
## Citation
If you find the MMFT codebase useful for your research, please consider citing us:
```bibtex
@article{MMFT,
  title={Joint learning of salient object detection, depth estimation and contour extraction},
  author={Zhao, Xiaoqi and Pang, Youwei and Zhang, Lihe and Lu, Huchuan},
  journal={IEEE Transactions on Image Processing},
  volume={31},
  pages={7350--7362},
  year={2022}
}
```
