FairNAS: Xiaomi Open-Sources the FairNAS Neural Architecture Search Algorithm

Community contribution · 2022-10-22

FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search

Introduction

One of the most critical problems in two-stage weight-sharing neural architecture search is the evaluation of candidate models. A faithful ranking certainly leads to accurate search results. However, current methods are prone to making misjudgments. In this paper, we prove that they inevitably give biased evaluations due to inherent unfairness in the supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout the training, which neither overestimates nor underestimates their capacity. We demonstrate this is crucial to improving confidence in models' ranking (see Figure 1). Incorporating our supernet trained under fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models on ImageNet. In particular, FairNAS-A attains 77.5% top-1 accuracy.
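The strict-fairness idea above can be illustrated with a small sampling sketch: at each supernet training step, the choice blocks of every layer are shuffled into a permutation, yielding one single-path model per choice so that each block receives exactly one gradient update per step. This is an illustrative sketch of the sampling constraint, not the authors' code; the layer and choice counts are placeholders:

```python
import random

def strict_fair_sample(num_layers, num_choices, rng=random):
    """Return num_choices single-path models for one training step.

    Across the returned models, each choice index appears exactly once
    per layer, so every choice block is updated exactly once per step
    (the strict fairness constraint)."""
    # one independent permutation of choice indices per layer
    perms = [rng.sample(range(num_choices), num_choices)
             for _ in range(num_layers)]
    # the k-th model takes the k-th entry of each layer's permutation
    return [tuple(perms[layer][k] for layer in range(num_layers))
            for k in range(num_choices)]

# e.g. a MobileNetV2-like search space: 19 layers, 6 choice blocks each
models = strict_fair_sample(num_layers=19, num_choices=6)
```

Training one step then means running a forward/backward pass for each of the sampled single-path models and applying the accumulated updates, so no block is favored over another in expectation or in count.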

Requirements

Python 3.6+
PyTorch 1.0.1+

Good news! We Are Hiring (Full-time & Internship)!

Hi folks! We are the AutoML Team from Xiaomi AI Lab, based in Beijing, China. There are a few open positions; we welcome applications from new graduates and from professionals skilled in AutoML/NAS! Please send your resume to zhangbo11@xiaomi.com.

AI algorithm/software engineer positions (including internships); please send your resume to zhangbo11@xiaomi.com.

Discuss with us!

QQ group name: Xiaomi AutoML Discussion Group; group number: 702473319 (when requesting to join, please enter the English abbreviation of "neural architecture search").

Updates

Jul-3-2019: Model release of FairNAS-A, FairNAS-B, FairNAS-C.
May-19-2020: Model release of FairNAS-A-SE, FairNAS-B-SE, FairNAS-C-SE and transferred models on CIFAR-10.

Performance Results

Preprocessing

We have reorganized all validation images of the ILSVRC2012 ImageNet by their classes.

Download the ILSVRC2012 ImageNet dataset, change to the ILSVRC2012 directory, and run the preprocessing script with ./preprocess_val_dataset.sh
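The preprocessing step groups the flat validation set into one subfolder per class, the layout that loaders such as torchvision's ImageFolder expect. A minimal Python sketch of that regrouping, assuming a labels file whose i-th line gives the class (e.g. the WordNet ID) of the i-th validation image; the file name and layout are hypothetical, not taken from the repo's shell script:

```python
import os
import shutil

def group_val_by_class(val_dir, labels_file):
    """Move each validation image into a subfolder named after its class.

    Assumes labels_file has one class label per line, in the same order
    as the (sorted) validation image filenames."""
    with open(labels_file) as f:
        labels = [line.strip() for line in f if line.strip()]
    images = sorted(p for p in os.listdir(val_dir) if p.endswith(".JPEG"))
    for img, wnid in zip(images, labels):
        dst = os.path.join(val_dir, wnid)
        os.makedirs(dst, exist_ok=True)  # one folder per class
        shutil.move(os.path.join(val_dir, img), os.path.join(dst, img))
```

After this step, val_dir contains 1000 class subfolders, each holding its 50 validation images.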

Evaluate

To evaluate,

python3 verify.py --model [FairNAS_A|FairNAS_B|FairNAS_C] --device [cuda|cpu] --val-dataset-root [ILSVRC2012 root path] --pretrained-path [pretrained model path]
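The script reports top-1 and top-5 accuracy over the class-grouped validation set. The metric itself is simple to state; here is a pure-Python sketch of top-k accuracy (illustrative only, not the repo's verify.py):

```python
def topk_accuracy(logits, target, ks=(1, 5)):
    """For each k, return the fraction of samples whose target class
    index is among the k highest-scoring entries of its logits row."""
    accs = []
    for k in ks:
        hits = 0
        for row, t in zip(logits, target):
            # indices of the k largest scores in this row
            topk = sorted(range(len(row)), key=lambda i: row[i],
                          reverse=True)[:k]
            hits += int(t in topk)
        accs.append(hits / len(target))
    return accs

# toy example: two samples, three classes
logits = [[0.1, 0.7, 0.2], [0.6, 0.3, 0.1]]
target = [1, 1]
top1, top2 = topk_accuracy(logits, target, ks=(1, 2))
```

In practice the logits come from the pretrained model's forward pass over batches of validation images; the per-batch accuracies are averaged to give the numbers reported below.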

Validate Transferred Model Accuracy

python transfer_verify.py --model [fairnas_a|fairnas_b|fairnas_c] --model-path pretrained/fairnas_[a|b|c]_transfer.pt.tar --gpu_id 0 --se-ratio 1.0

Results:

FairNAS-A-SE-1.0: flops: 403.36264M, params: 5.835322M, top1: 98.3, top5: 99.99
FairNAS-B-SE-1.0: flops: 370.921184M, params: 5.603242M, top1: 98.08, top5: 99.99
FairNAS-C-SE-1.0: flops: 345.228096M, params: 5.42953M, top1: 98.01, top5: 99.99
FairNAS-A-SE-0.5: flops: 414.305856M, params: 4.61373M, top1: 98.15, top5: 99.98
FairNAS-B-SE-0.5: flops: 358.330632M, params: 4.42485M, top1: 98.15, top5: 99.99
FairNAS-C-SE-0.5: flops: 333.272088M, params: 4.283586M, top1: 97.99, top5: 99.99

Validate FairNAS-SE models

python verify_se.py --val-dataset-root [ILSVRC2012 root path] --device cuda --model [fairnas_a|fairnas_b|fairnas_c] --model-path pretrained/fairnas_[a|b|c]_se.pth.tar

Results:

FairNAS-A-SE: mTop1: 77.5480, mTop5: 93.6740
FairNAS-B-SE: mTop1: 77.1900, mTop5: 93.4940
FairNAS-C-SE: mTop1: 76.6700, mTop5: 93.2580
FairNAS-A-SE-0.5: mTop1: 77.3960, mTop5: 93.6500
FairNAS-B-SE-0.5: mTop1: 77.1060, mTop5: 93.5280
FairNAS-C-SE-0.5: mTop1: 76.7600, mTop5: 93.3180

Citation

Your kind citations are welcome!

@article{chu2019fairnas,
  title={FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search},
  author={Chu, Xiangxiang and Zhang, Bo and Xu, Ruijun and Li, Jixiang},
  journal={arXiv preprint arXiv:1907.01845},
  url={https://arxiv.org/pdf/1907.01845.pdf},
  year={2019}
}
