[ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware [arXiv] [Poster]

@inproceedings{
  cai2018proxylessnas,
  title={Proxyless{NAS}: Direct Neural Architecture Search on Target Task and Hardware},
  author={Han Cai and Ligeng Zhu and Song Han},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://arxiv.org/pdf/1812.00332.pdf},
}

News

  • ProxylessNAS is integrated into PytorchHub.
  • ProxylessNAS is integrated into Microsoft NNI.
  • ProxylessNAS is integrated into Amazon AutoGluon.
  • First place in the Visual Wake Words Challenge, TF-lite track, @CVPR 2019
  • Third place in the Low Power Image Recognition Challenge (LPIRC), classification track, @CVPR 2019

Performance

Without any proxy, directly and efficiently search neural network architectures on your target task and hardware!

Now, ProxylessNAS is on PyTorch Hub. You can load a pretrained model with just a couple of lines:

import torch

target_platform = "proxyless_cpu" # proxyless_gpu, proxyless_mobile, proxyless_mobile_14 are also available.
model = torch.hub.load('mit-han-lab/ProxylessNAS', target_platform, pretrained=True)

<p align="center"> <img src="assets/proxyless_bar.png" width="80%" /> </p>

<table> <tr> <th> Mobile settings </th><th> GPU settings </th> </tr> <tr> <td> <img src="assets/proxyless_vs_mobilenet.png" width="100%" /> </td> <td>

| Model              | Top-1 | Top-5 | Latency |
|--------------------|-------|-------|---------|
| MobileNetV2        | 72.0  | 91.0  | 6.1ms   |
| ShuffleNetV2 (1.5) | 72.6  | -     | 7.3ms   |
| ResNet-34          | 73.3  | 91.4  | 8.0ms   |
| MNasNet (our impl) | 74.0  | 91.8  | 6.1ms   |
| ProxylessNAS (GPU) | 75.1  | 92.5  | 5.1ms   |

</td> </tr> <tr> <th> ProxylessNAS (Mobile) consistently outperforms MobileNetV2 across latency settings. </th> <th> ProxylessNAS (GPU) is 3.1% more accurate than MobileNetV2 while running 20% faster. </th> </tr> </table>

Specialization

People used to deploy a single model to all platforms, but this is suboptimal. To fully exploit hardware efficiency, we should specialize the architecture for each platform.

We provide a visualization of the search process. Please refer to our paper for more results.
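Specialization hinges on making latency differentiable: the paper models a layer's expected latency as the probability-weighted sum of each candidate op's measured latency, E[latency] = Σ_o p_o · F(o), and adds it to the training loss. A minimal sketch of that expectation in plain Python; the op names and millisecond values in `latency_table` are made-up illustrations, not the paper's actual measurements.

```python
import math

# Hypothetical per-op latency lookup (ms), standing in for a real
# hardware latency model F(o). Numbers are illustrative only.
latency_table = {"mbconv3_3x3": 4.0, "mbconv6_5x5": 7.5, "skip": 0.1}

def softmax(scores):
    """Numerically stable softmax over a dict of logits."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

def expected_latency(arch_params):
    """E[latency] for one layer: sum_o p_o * F(o), where p_o are
    softmax probabilities over the candidate ops."""
    probs = softmax(arch_params)
    return sum(probs[op] * latency_table[op] for op in probs)

# One layer's architecture parameters (logits over candidate ops).
alpha = {"mbconv3_3x3": 1.2, "mbconv6_5x5": 0.3, "skip": -0.5}
print(round(expected_latency(alpha), 3))  # 4.447
```

Because each term is a smooth function of the architecture parameters, gradients of this latency estimate flow back into the op probabilities during search.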

How to use / evaluate

  • Use

    # pytorch 
    from proxyless_nas import proxyless_cpu, proxyless_gpu, proxyless_mobile, proxyless_mobile_14, proxyless_cifar
    net = proxyless_cpu(pretrained=True) # Yes, we provide pre-trained models!
    
    # tensorflow
    from proxyless_nas_tensorflow import proxyless_cpu, proxyless_gpu, proxyless_mobile, proxyless_mobile_14
    tf_net = proxyless_cpu(pretrained=True)
    

    If the scripts above fail to download the weights, you can download them manually from Google Drive and put them under $HOME/.torch/proxyless_nas/.

  • Evaluate

    python eval.py --path 'Your path to imagenet' --arch proxyless_cpu # pytorch ImageNet

    python eval.py -d cifar10 # pytorch cifar10

    python eval_tf.py --path 'Your path to imagenet' --arch proxyless_cpu # tensorflow
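The evaluation scripts report Top-1/Top-5 accuracy, as in the table above. For reference, here is a minimal sketch of how top-k correctness is judged for a single example (plain Python; the helper name `topk_correct` is ours, not from the repo):

```python
def topk_correct(logits, label, ks=(1, 5)):
    """For each k, report whether the true label is among the
    k highest-scoring classes (the Top-1 / Top-5 criterion)."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    return {k: label in ranked[:k] for k in ks}

# Toy example with 6 classes; class 3 has the second-highest score,
# so it misses Top-1 but counts for Top-5.
logits = [0.1, 2.3, 0.7, 1.9, -0.2, 0.5]
print(topk_correct(logits, label=3))  # {1: False, 5: True}
```

Dataset-level accuracy is then the fraction of examples for which the corresponding entry is True.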

File structure

Projects with ProxylessNAS:

<div align=center> <img src="proxyless_gaze/assets/rpi4_demo.gif" style="width:100%"></img> </div>

Related work on automated model compression and acceleration:

Once for All: Train One Network and Specialize it for Efficient Deployment (ICLR'20, code)

ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR'19)

AMC: AutoML for Model Compression and Acceleration on Mobile Devices (ECCV'18)

HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)

Defensive Quantization: When Efficiency Meets Robustness (ICLR'19)
