PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search
PreNAS is a novel learning paradigm that integrates one-shot and zero-shot NAS techniques to enhance search efficiency and training effectiveness.
This search-free approach outperforms current state-of-the-art one-shot NAS methods for both Vision Transformer and convolutional architectures.
Wang H, Ge C, Chen H and Sun X. PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search. ICML 2023.
Previous one-shot NAS methods sample all architectures in the search space during one-shot training of the supernet, so that candidates can later be evaluated in an evolutionary search.
Instead, PreNAS first identifies the target architectures via a zero-cost proxy and then applies preferred one-shot training to the supernet, sampling only those targets.
Benefiting from preferred one-shot learning, PreNAS improves the Pareto frontier, and it is search-free after training: the final models are simply the architectures selected in advance by the zero-cost search.
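A rough sketch of this two-stage flow is shown below. It is not the actual PreNAS implementation: `score_fn`, `Supernet.train_step`, and the other names are hypothetical placeholders for the zero-cost proxy and the supernet training API.

```python
import random
from typing import Any, Callable, List

def zero_cost_proxy_search(
    search_space: List[Any],
    score_fn: Callable[[Any], float],
    num_targets: int,
) -> List[Any]:
    # Stage 1: rank every candidate with a training-free proxy score
    # and keep only the top-scoring, "preferred" target architectures.
    ranked = sorted(search_space, key=score_fn, reverse=True)
    return ranked[:num_targets]

def preferred_one_shot_training(supernet, targets: List[Any], loader, epochs: int):
    # Stage 2: one-shot train the supernet, sampling subnets only from
    # the pre-selected targets instead of the whole search space.
    for _ in range(epochs):
        for batch in loader:
            arch = random.choice(targets)     # preferred sampling
            supernet.train_step(arch, batch)  # hypothetical supernet API
    return supernet
```

After training, no evolutionary search is needed: the weights of each target architecture are taken directly from the trained supernet.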
Environment Setup
To set up the environment, you can run the following command:
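(The original command does not appear in this copy of the README; a typical pattern for a PyTorch research repository is shown below, assuming a `requirements.txt` file ships with the code — check the repository for the actual instructions.)

```bash
pip install -r requirements.txt
```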
If PreNAS is useful for you, please consider citing it. Thank you! :)
@InProceedings{PreNAS,
title = {PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search},
author = {Wang, Haibin and Ge, Ce and Chen, Hesen and Sun, Xiuyu},
booktitle = {International Conference on Machine Learning (ICML)},
month = {July},
year = {2023}
}