If you find the repo useful, please kindly cite our paper:
```bibtex
@inproceedings{he2018amc,
  title={AMC: AutoML for Model Compression and Acceleration on Mobile Devices},
  author={He, Yihui and Lin, Ji and Liu, Zhijian and Wang, Hanrui and Li, Li-Jia and Han, Song},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2018}
}
```
Other papers related to automated model design:
HAQ: Hardware-Aware Automated Quantization with Mixed Precision (CVPR 2019)
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware (ICLR 2019)
Training AMC
The current code base supports automated pruning of MobileNet on ImageNet. Pruning consists of three steps: 1. strategy search; 2. export the pruned weights; 3. fine-tune from the pruned weights.
To conduct the full pruning procedure, follow the instructions below (results might vary a little from the paper due to different random seed):
Strategy Search
To search for a pruning strategy on the MobileNet ImageNet model, first download the pretrained MobileNet checkpoint by running:

```shell
bash ./checkpoints/download.sh
```
It will also download our 50% FLOPs compressed model. Then run the following script to search under a 50% FLOPs constraint:

```shell
bash ./scripts/search_mobilenet_0.5flops.sh
```
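At its core, the search proposes a preserve ratio for each layer and keeps strategies that satisfy the FLOPs constraint. The sketch below is a deliberately simplified stand-in with hypothetical layer shapes and plain random search; the actual AMC agent is a DDPG policy whose reward is validation accuracy, not the density heuristic used here:

```python
import random

# Hypothetical layer shapes (c_in, c_out, kernel, out_h, out_w);
# the real search operates on MobileNet's actual layers.
LAYERS = [(3, 32, 3, 112, 112), (32, 64, 3, 56, 56),
          (64, 128, 3, 28, 28), (128, 256, 3, 14, 14)]

def conv_flops(c_in, c_out, k, h, w):
    # multiply-accumulate count of a standard conv layer
    return c_in * c_out * k * k * h * w

def total_flops(ratios):
    # a layer's input width is set by the previous layer's preserve ratio
    flops, prev_r = 0, 1.0  # input image channels are never pruned
    for (c_in, c_out, k, h, w), r in zip(LAYERS, ratios):
        flops += conv_flops(int(c_in * prev_r), int(c_out * r), k, h, w)
        prev_r = r
    return flops

def search(target=0.5, episodes=200, seed=0):
    # Random-search stand-in: AMC instead trains a DDPG agent with
    # accuracy as the reward under the FLOPs constraint.
    rng = random.Random(seed)
    full = total_flops([1.0] * len(LAYERS))
    best = None
    for _ in range(episodes):
        ratios = [rng.uniform(0.2, 1.0) for _ in LAYERS]
        if total_flops(ratios) <= target * full:
            # without accuracy feedback, keep the densest feasible strategy
            if best is None or total_flops(ratios) > total_flops(best):
                best = ratios
    return best, full
```

The output of the real search is exactly such a list of per-layer preserve ratios, which the export step then applies.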
The strategy we found and reported in the paper is:
Export the Pruned Weights

After searching, we need to export the pruned weights by running:

```shell
bash ./scripts/export_mobilenet_0.5flops.sh
```
We also need to modify the MobileNet model definition to match the new pruned channel counts (this is already done in models/mobilenet.py).
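Exporting turns a sparsity strategy into physically smaller weight tensors by dropping whole filters. A minimal sketch of magnitude-based filter selection, using hypothetical nested-list weights rather than the repo's actual PyTorch tensors:

```python
def l1_norm(filt):
    # filt: one output-channel filter as a 2-D list of weights
    return sum(abs(x) for row in filt for x in row)

def prune_filters(weight, preserve_ratio):
    # weight: list of output-channel filters for one conv layer.
    # Keep the filters with the largest L1 norm (magnitude criterion)
    # and return their indices plus the physically smaller weight list.
    n_keep = max(1, int(len(weight) * preserve_ratio))
    ranked = sorted(range(len(weight)),
                    key=lambda i: l1_norm(weight[i]), reverse=True)
    keep = sorted(ranked[:n_keep])
    return keep, [weight[i] for i in keep]
```

The kept indices must also be applied to the input channels of the following layer, which is why the model definition has to change along with the checkpoint.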
Fine-tune from Pruned Weights
After exporting, we need to fine-tune from the pruned weights. For example, we can fine-tune with a cosine learning rate schedule for 150 epochs by running:

```shell
bash ./scripts/finetune_mobilenet_0.5flops.sh
```
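A cosine schedule decays the learning rate smoothly from its base value to zero over the run. A minimal sketch of the formula (the base LR of 0.05 in the test values is illustrative, not necessarily what the script uses):

```python
import math

def cosine_lr(base_lr, epoch, total_epochs):
    # cosine annealing: base_lr at epoch 0, 0 at the final epoch
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))
```

Halfway through the 150 epochs the rate is exactly half the base value, and the slow decay near the end helps the pruned network settle into a good minimum.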
AMC Compressed Model
We also provide the models and weights compressed by our AMC method: compressed MobileNet-V1 and MobileNet-V2 are available in both PyTorch and TensorFlow formats here.
Detailed statistics are as follows:

| Models | Top1 Acc (%) | Top5 Acc (%) |
|---|---|---|
| MobileNetV1-width*0.75 | 68.4 | 88.2 |
| MobileNetV1-50%FLOPs | 70.494 | 89.306 |
| MobileNetV1-50%Time | 70.200 | 89.430 |
| MobileNetV2-width*0.75 | 69.8 | 89.6 |
| MobileNetV2-70%FLOPs | 70.854 | 89.914 |
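The width-multiplier rows are meaningful baselines because FLOPs scale roughly with the square of the multiplier, so a width*0.75 model costs on the order of 56% of the full model, close to the 50% FLOPs budget of the AMC models. A back-of-envelope check on a single hypothetical depthwise-separable block (illustrative shapes, not MobileNet's actual layer sizes):

```python
def depthwise_separable_flops(c_in, c_out, k, h, w):
    # MobileNet building block: depthwise conv + 1x1 pointwise conv
    return c_in * k * k * h * w + c_in * c_out * h * w

def scaled_ratio(width_mult, c_in=64, c_out=128, k=3, h=28, w=28):
    # FLOPs of a uniformly thinned block relative to the full block
    full = depthwise_separable_flops(c_in, c_out, k, h, w)
    thin = depthwise_separable_flops(int(c_in * width_mult),
                                     int(c_out * width_mult), k, h, w)
    return thin / full
```

For this block, `scaled_ratio(0.75)` comes out near 0.57, which is why AMC's non-uniform 50%-FLOPs strategies compete with uniform width*0.75 models at a similar or lower cost.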
Dependencies
The current code base is tested under the following environment: