Code for paper:
Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?
Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, Mi Zhang.
NeurIPS 2020.
Top: The supervision signal for representation learning comes from the accuracies of architectures selected by the search strategies. Bottom (ours): Disentangling architecture representation learning and architecture search through unsupervised pre-training.
This repository is built upon pytorch_geometric, pybnn, nas_benchmarks, and bananas.
- NVIDIA GPU, Linux, Python 3

```bash
pip install -r requirements.txt
```

Install nasbench and download `nasbench_only108.tfrecord` into the `./data` folder.
```bash
python preprocessing/gen_json.py
```

Data will be saved in `./data/data.json`.
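The schema of `data.json` is not documented here; assuming each entry pairs an architecture's adjacency matrix and operation list with its accuracy (an assumed layout, used only for illustration), inspecting the file is plain stdlib JSON handling:

```python
import json

# Toy stand-in for ./data/data.json; the per-entry fields below
# (module_adjacency, module_operations, validation_accuracy) are an
# ASSUMED schema, not the repo's documented format.
toy = """{"0": {"module_adjacency": [[0, 1], [0, 0]],
                "module_operations": ["input", "output"],
                "validation_accuracy": 0.91}}"""
data = json.loads(toy)
print(len(data), data["0"]["validation_accuracy"])
```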
```bash
bash models/pretraining_nasbench101.sh
```

The pretrained model will be saved in `./pretrained/dim-16/`.

```bash
bash run_scripts/extract_arch2vec.sh
```

The extracted arch2vec will be saved in `./pretrained/dim-16/`.
Alternatively, you can download the pretrained arch2vec on NAS-Bench-101.
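Once extracted or downloaded, a typical use of the embeddings is similarity lookup in the latent space. A minimal sketch with toy 3-d vectors standing in for the 16-d tensors in `./pretrained/dim-16/` (the names and values here are invented):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

embeddings = {  # hypothetical stand-ins for 16-d arch2vec vectors
    "arch_a": [1.0, 0.0, 0.0],
    "arch_b": [0.9, 0.1, 0.0],
    "arch_c": [0.0, 1.0, 0.0],
}
query = [1.0, 0.02, 0.0]
best = max(embeddings, key=lambda k: cosine(embeddings[k], query))
print(best)  # → arch_a
```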
```bash
bash run_scripts/run_reinforce_supervised.sh
bash run_scripts/run_reinforce_arch2vec.sh
```

Search results will be saved in `./saved_logs/rl/dim16`.
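As intuition for the arch2vec search — reinforcement learning over a continuous embedding space rather than a discrete graph space — here is a self-contained REINFORCE toy with a Gaussian policy. The reward, dimensionality, and hyperparameters are invented for the demo; the actual scripts optimize NAS-Bench-101 validation accuracy over 16-d embeddings.

```python
import random

# Self-contained REINFORCE toy: a Gaussian policy over a continuous
# latent space with an invented quadratic reward. Everything here
# (dimension, reward, hyperparameters) is for illustration only.
random.seed(0)
DIM = 4
target = [0.5] * DIM          # pretend "accuracy" peaks at this embedding

def reward(z):
    return -sum((a - b) ** 2 for a, b in zip(z, target))

mean = [0.0] * DIM            # learned policy mean
sigma, lr = 0.5, 0.02         # fixed policy std-dev and learning rate
baseline = 0.0                # moving-average baseline (variance reduction)
for _ in range(500):
    z = [m + random.gauss(0, sigma) for m in mean]   # sample an embedding
    r = reward(z)
    baseline = 0.9 * baseline + 0.1 * r
    adv = r - baseline
    # d/d(mean) of log N(z; mean, sigma^2) is (z - mean) / sigma^2
    mean = [m + lr * adv * (zi - m) / sigma ** 2 for m, zi in zip(mean, z)]

dist2 = sum((m - t) ** 2 for m, t in zip(mean, target))
print(round(dist2, 3))  # squared distance to the optimum; starts at 1.0
```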
Generate json file:

```bash
python plot_scripts/plot_reinforce_search_arch2vec.py
```

```bash
bash run_scripts/run_dngo_supervised.sh
bash run_scripts/run_dngo_arch2vec.sh
```

Search results will be saved in `./saved_logs/bo/dim16`.
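The BO runs use DNGO (from pybnn), a neural-network surrogate with a Bayesian linear output layer, fitted on (embedding, accuracy) pairs. As a self-contained stand-in, this sketch scores unevaluated candidates with a much simpler inverse-distance surrogate plus an exploration bonus; all numbers are invented:

```python
import math

# Toy (embedding -> accuracy) pairs already "evaluated"; real runs use
# 16-d arch2vec embeddings and NAS-Bench-101 accuracies.
observed = {
    (0.0, 0.0): 0.88,
    (1.0, 0.0): 0.92,
    (0.0, 1.0): 0.85,
}
candidates = [(0.9, 0.1), (0.1, 0.9), (0.5, 0.5)]

def predict(z):
    """Inverse-distance-weighted surrogate mean, plus the distance to the
    nearest observation as a crude uncertainty proxy (not DNGO itself)."""
    weights = [(1.0 / (math.dist(z, x) + 1e-9), acc)
               for x, acc in observed.items()]
    mu = sum(w * a for w, a in weights) / sum(w for w, _ in weights)
    nearest = min(math.dist(z, x) for x in observed)
    return mu, nearest

beta = 0.01  # exploration weight, arbitrary for the toy
best, best_score = None, float("-inf")
for z in candidates:
    mu, unc = predict(z)
    score = mu + beta * unc  # exploit the surrogate, reward unexplored regions
    if score > best_score:
        best, best_score = z, score
print(best)  # picks the candidate near the best observed embedding
```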
Generate json file:

```bash
python plot_scripts/plot_dngo_search_arch2vec.py
```

Plot the NAS comparison curve on NAS-Bench-101:

```bash
python plot_scripts/plot_nasbench101_comparison.py
```

Download the search results from search_logs.

Plot the CDF:

```bash
python plot_scripts/plot_cdf.py
```

Download `NAS-Bench-201-v1_0-e61699.pth` into the `./data` folder.
```bash
python preprocessing/nasbench201_json.py
```

Data corresponding to the three datasets in NAS-Bench-201 will be saved in the `./data/` folder as `cifar10_valid_converged.json`, `cifar100.json`, and `ImageNet16_120.json`.

```bash
bash models/pretraining_nasbench201.sh
```

The pretrained model will be saved in `./pretrained/dim-16/`.
Note that the pretrained model is shared across the 3 datasets in NAS-Bench-201.
```bash
bash run_scripts/extract_arch2vec_nasbench201.sh
```

The extracted arch2vec will be saved in `./pretrained/dim-16/` as `cifar10_valid_converged-arch2vec.pt`, `cifar100-arch2vec.pt`, and `ImageNet16_120-arch2vec.pt`.
Alternatively, you can download the pretrained arch2vec on NAS-Bench-201.
RL search:

- CIFAR-10: `./run_scripts/run_reinforce_arch2vec_nasbench201_cifar10_valid.sh`
- CIFAR-100: `./run_scripts/run_reinforce_arch2vec_nasbench201_cifar100.sh`
- ImageNet-16-120: `./run_scripts/run_reinforce_arch2vec_nasbench201_ImageNet.sh`

BO search:

- CIFAR-10: `./run_scripts/run_bo_arch2vec_nasbench201_cifar10_valid.sh`
- CIFAR-100: `./run_scripts/run_bo_arch2vec_nasbench201_cifar100.sh`
- ImageNet-16-120: `./run_scripts/run_bo_arch2vec_nasbench201_ImageNet.sh`

Summarize the results:

```bash
python ./plot_scripts/summarize_nasbench201.py
```

The corresponding table will be printed to the console.
CIFAR-10 is downloaded automatically by torchvision; ImageNet must be downloaded manually (preferably to an SSD) from https://image-net.org/download.
```bash
python preprocessing/gen_isomorphism_graphs.py
```

Data will be saved in `./data/data_darts_counter600000.json`.
Alternatively, you can download the extracted data_darts_counter600000.json.
```bash
bash models/pretraining_darts.sh
```

The pretrained model will be saved in `./pretrained/dim-16/`.

```bash
bash run_scripts/extract_arch2vec_darts.sh
```

The extracted arch2vec will be saved in `./pretrained/dim-16/arch2vec-darts.pt`.
Alternatively, you can download the pretrained arch2vec on DARTS search space.
```bash
bash run_scripts/run_reinforce_arch2vec_darts.sh
```

Logs will be saved in `./darts-rl/`. The final search result will be saved in `./saved_logs/rl/dim16`.

```bash
bash run_scripts/run_bo_arch2vec_darts.sh
```

Logs will be saved in `./darts-bo/`. The final search result will be saved in `./saved_logs/bo/dim16`.
```bash
python darts/cnn/train.py --auxiliary --cutout --arch arch2vec_rl --seed 1
python darts/cnn/train.py --auxiliary --cutout --arch arch2vec_bo --seed 1
```

- Expected results (RL): 2.60% test error with 3.3M model params.
- Expected results (BO): 2.48% test error with 3.6M model params.
```bash
python darts/cnn/train_imagenet.py --arch arch2vec_rl --seed 1
python darts/cnn/train_imagenet.py --arch arch2vec_bo --seed 1
```

- Expected results (RL): 25.8% test error with 4.8M model params and 533M mult-adds.
- Expected results (BO): 25.5% test error with 5.2M model params and 580M mult-adds.
```bash
python darts/cnn/visualize.py arch2vec_rl
python darts/cnn/visualize.py arch2vec_bo
```

Download the pretrained supervised embeddings of nasbench101 and nasbench201.
```bash
bash plot_scripts/drawfig5-nas101.sh  # visualization on NAS-Bench-101
bash plot_scripts/drawfig5-nas201.sh  # visualization on NAS-Bench-201
bash plot_scripts/drawfig5-darts.sh   # visualization on DARTS
```

The plots will be saved in `./graphvisualization`.
Install nas_benchmarks and download `nasbench_full.tfrecord` into the same directory.

```bash
python plot_scripts/distance_comparison_fig3.py
```

```bash
bash plot_scripts/drawfig4.sh
```

The plots will be saved in `./density`.
Download predicted_accuracy under `saved_logs/`.

```bash
python plot_scripts/pearson_plot_fig2.py
```

If you find this useful for your work, please consider citing:
```
@InProceedings{yan2020arch,
  title     = {Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?},
  author    = {Yan, Shen and Zheng, Yu and Ao, Wei and Zeng, Xiao and Zhang, Mi},
  booktitle = {NeurIPS},
  year      = {2020}
}
```