To perform hyperparameter tuning, make use of `wandb`:

1. In the `configs/` folder, choose the `.yaml` file corresponding to the dataset and setting (deterministic vs. sampling) of interest, say `<config-name>`. This file contains the hyperparameter grid.
2. Run `wandb sweep configs/<config-name>` to obtain a sweep id `<sweep-id>`.
3. Run the hyperparameter tuning with `wandb agent <sweep-id>`. You can run this command multiple times on each machine you would like to contribute to the grid search.
4. Open your project in your wandb account in the browser to see the results:
   * For the TUDatasets, the CSL dataset and the EXP/CEXP datasets, refer to `Metric/valid_mean` and `Metric/valid_std` to obtain the results.
   * For the ogbg datasets and the ZINC dataset, compute the mean and std of `Metric/train_mean`, `Metric/valid_mean` and `Metric/test_mean` over the different seeds of the same configuration (see the sketch after this list).

   Then, take the results corresponding to the configuration that obtains the best validation metric.
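
If you prefer to aggregate the sweep results programmatically rather than in the browser, the following is a minimal sketch using the wandb public API. The `<entity>/<project>/<sweep-id>` path, the assumption that each run stores a `seed` entry in its config, and the use of `Metric/valid_mean` as the selection metric are illustrative assumptions, not part of this repository.

```python
# Minimal sketch (assumed, not part of this repo): aggregate sweep results with the
# wandb public API instead of reading them in the browser.
from collections import defaultdict
from statistics import mean, stdev

import wandb

api = wandb.Api()
# Placeholder sweep path: replace <entity>, <project> and <sweep-id> with your own.
sweep = api.sweep("<entity>/<project>/<sweep-id>")

# Group runs that share the same hyperparameters but differ in the random seed
# (assumes each run stores its hyperparameters, including a "seed" key, in run.config).
groups = defaultdict(list)
for run in sweep.runs:
    key = tuple(sorted((k, str(v)) for k, v in run.config.items() if k != "seed"))
    valid = run.summary.get("Metric/valid_mean")
    if valid is not None:
        groups[key].append(valid)

# Mean/std of the validation metric over seeds for each configuration.
stats = {
    cfg: (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
    for cfg, vals in groups.items()
}

# Pick the best configuration, assuming higher validation values are better;
# use min(...) instead for metrics where lower is better (e.g. MAE on ZINC).
best_cfg, (best_mean, best_std) = max(stats.items(), key=lambda kv: kv[1][0])
print(f"best validation metric: {best_mean:.4f} +/- {best_std:.4f}")
print("best configuration:", dict(best_cfg))
```

Whether to select with `max` or `min` depends on the metric logged for the dataset at hand, so adjust the last step accordingly.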
Credits
For attribution in academic contexts, please cite:
@inproceedings{bevilacqua2022equivariant,
title={Equivariant Subgraph Aggregation Networks},
author={Beatrice Bevilacqua and Fabrizio Frasca and Derek Lim and Balasubramaniam Srinivasan and Chen Cai and Gopinath Balamurugan and Michael M. Bronstein and Haggai Maron},
booktitle={International Conference on Learning Representations},
year={2022},
}