MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning
Official PyTorch Implementation of MTL-NAS
Please refer to our paper for more technical details:
Yuan Gao*, Haoping Bai*, Zequn Jie, Jiayi Ma, Kui Jia, Wei Liu. MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [arXiv]

If this code is helpful to your research, please consider citing our paper:
@inproceedings{mtlnas2020,
  title={MTL-NAS: Task-Agnostic Neural Architecture Search towards General-Purpose Multi-Task Learning},
  author={Yuan Gao and Haoping Bai and Zequn Jie and Jiayi Ma and Kui Jia and Wei Liu},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}
Setup
Install the necessary dependencies:
$ pip install -r requirements.txt
Dataset
Follow the instructions here to prepare the dataset. Alternatively, download the preprocessed dataset here.
Download the converted PyTorch models from here, then create a weights directory and unzip the models inside.
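The weights setup can be sketched as the following shell commands. Note that converted_models.zip is a placeholder name for illustration; substitute the archive you actually downloaded:

```shell
# Create the weights directory and unpack the converted PyTorch models into it.
# NOTE: "converted_models.zip" is a placeholder; use your downloaded archive's name.
mkdir -p weights
if [ -f converted_models.zip ]; then
    unzip -q converted_models.zip -d weights
fi
```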
When you are all set, you should have the following file structure:
datasets/nyu_v2/list
datasets/nyu_v2/nyu_v2_mean.npy
datasets/nyu_v2/nyu_train_val
weights/vgg_deeplab_lfov/tf_deeplab.pth
weights/nyu_v2/tf_finetune_seg.pth
weights/nyu_v2/tf_finetune_normal.pth
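A quick sanity check for the layout above can be sketched as follows (adjust the paths if your directories differ):

```shell
# Report which of the expected files/directories from the README layout are present.
missing=0
for f in datasets/nyu_v2/list \
         datasets/nyu_v2/nyu_v2_mean.npy \
         datasets/nyu_v2/nyu_train_val \
         weights/vgg_deeplab_lfov/tf_deeplab.pth \
         weights/nyu_v2/tf_finetune_seg.pth \
         weights/nyu_v2/tf_finetune_normal.pth; do
  if [ -e "$f" ]; then
    echo "OK      $f"
  else
    echo "MISSING $f"
    missing=$((missing+1))
  fi
done
echo "$missing item(s) missing"
```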
Training
All arguments for training and evaluating MTL-NAS are listed in core/config/defaults.py. Configuration files for the individual experiments are provided in the configs directory. To run the NDDR-CNN baseline with the VGG-16 architecture, simply call:
$ CUDA_VISIBLE_DEVICES=0 python tools/train.py --config-file configs/vgg/vgg_nyuv2_nddr.yaml
To run MTL-NAS training with the default configuration, call:
$ CUDA_VISIBLE_DEVICES=0 python tools/train_nas.py --config-file configs/ablation/vgg_nyuv2_default.yaml
Evaluation
To evaluate the final checkpoint for the NDDR-CNN baseline experiment, call:
$ CUDA_VISIBLE_DEVICES=0 python tools/eval.py --config-file configs/vgg/vgg_nyuv2_nddr.yaml
To evaluate the final checkpoint for the default MTL-NAS configuration, call:
$ CUDA_VISIBLE_DEVICES=0 python tools/eval_nas.py --config-file configs/ablation/vgg_nyuv2_default.yaml
To evaluate the released final checkpoint for the default MTL-NAS configuration, download and extract it to the ckpts directory, then run the command above.
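For example, extracting the checkpoint can be sketched as follows. The archive name mtlnas_ckpt.zip is a placeholder for whatever the downloaded file is called:

```shell
# Unpack the downloaded final checkpoint into ckpts/, then run the
# eval command above.
# NOTE: "mtlnas_ckpt.zip" is a placeholder; use your downloaded archive's name.
mkdir -p ckpts
if [ -f mtlnas_ckpt.zip ]; then
    unzip -q mtlnas_ckpt.zip -d ckpts
fi
```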