PEMN explores the representative potential of randomly initialized parameters with a limited number of unique values by learning diverse masks that deliver different feature mappings. We propose three parameter-efficient strategies: One-layer, Max-layer padding (MP), and Random vector padding (RP), to construct a random network from a given set of random parameters, which we call a prototype. This exploration shows that a network can be efficiently represented as a small set of random values together with a collection of masks. Building on this finding, we naturally propose a new network compression paradigm for efficient network storage and transfer.
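As a rough illustration of the RP idea, the PyTorch sketch below tiles a single frozen prototype vector to fill a layer's weight tensor and learns only a binary mask over it via a straight-through estimator. All class and variable names here (`RPMaskedLinear`, `Binarize`, `prototype`) are illustrative assumptions, not the repository's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Binarize(torch.autograd.Function):
    """Threshold scores to a {0,1} mask; straight-through gradient."""
    @staticmethod
    def forward(ctx, scores):
        return (scores > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # pass gradients straight through the threshold

class RPMaskedLinear(nn.Module):
    """Hypothetical linear layer built from one shared random prototype."""
    def __init__(self, in_features, out_features, prototype):
        super().__init__()
        n = in_features * out_features
        reps = -(-n // prototype.numel())  # ceil division
        # Tile the frozen prototype vector until it covers the weight tensor.
        weight = prototype.repeat(reps)[:n].view(out_features, in_features)
        self.register_buffer("weight", weight)  # random values are never trained
        # Only the mask scores are learned.
        self.scores = nn.Parameter(torch.randn(out_features, in_features) * 0.01)

    def forward(self, x):
        mask = Binarize.apply(self.scores)  # learned binary mask
        return F.linear(x, self.weight * mask)

# A small prototype is the only set of random values that must be stored;
# every layer reuses it, so the network is the prototype plus its masks.
prototype = torch.randn(256)
layer = RPMaskedLinear(128, 64, prototype)
out = layer(torch.randn(4, 128))  # -> shape (4, 64)
```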
Run
To train with the Random vector padding (RP) strategy at a 1e-3 ratio on CIFAR-10, using a ConvMixer with 6 blocks and 256 dimensions:
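The repository's actual entry point and flag names may differ; the command below is only a hypothetical invocation sketching how such a run could be configured (the script name and every flag are assumptions, not the documented CLI):

```bash
# Hypothetical command; script name and flags are illustrative only.
python main.py --dataset cifar10 --model convmixer --depth 6 --dim 256 \
    --strategy rp --ratio 1e-3
```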
Please cite our work in your publication if it helps your research. Should you have any questions, feel free to reach out to Yue Bai (bai.yue@northeastern.edu).
```
@article{bai2022parameter,
  title={Parameter-Efficient Masking Networks},
  author={Bai, Yue and Wang, Huan and Ma, Xu and Zhang, Yitian and Tao, Zhiqiang and Fu, Yun},
  journal={arXiv preprint arXiv:2210.06699},
  year={2022}
}
```