Geometric Graph Representation Learning via Maximizing Rate Reduction
Implementation of the WWW2022 paper Geometric Graph Representation Learning via Maximizing Rate Reduction [proceedings][arxiv].
1. Introduction
We propose Geometric Graph Representation Learning ($\mathrm{G}^2\mathrm{R}$) to learn node representations in an unsupervised manner by maximizing rate reduction. In this way, $\mathrm{G}^2\mathrm{R}$ maps nodes from distinct groups (implicitly encoded in the adjacency matrix) into different subspaces, such that each subspace is compact and different subspaces are dispersed from one another. $\mathrm{G}^2\mathrm{R}$ adopts a graph neural network as the encoder and maximizes the rate reduction with the adjacency matrix. Furthermore, we theoretically and empirically demonstrate that maximizing rate reduction is equivalent to maximizing the principal angles between different subspaces.
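The rate-reduction objective itself is defined in the paper rather than in this README; below is a minimal PyTorch sketch of such a loss, assuming node representations Z of shape (n, d) and a soft membership matrix Pi of shape (k, n) derived from the adjacency matrix. The function names, the eps value, and the way Pi is constructed are illustrative assumptions, not the repository's exact implementation.

import torch

def coding_rate(Z, eps=0.05):
    # Coding rate of all node representations Z (shape: n x d).
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + d / (n * eps ** 2) * Z.T @ Z)

def rate_reduction(Z, Pi, eps=0.05):
    # Pi (shape: k x n): soft group membership, e.g. rows derived from the
    # (normalized) adjacency matrix. Rate reduction = rate of the whole set
    # minus the membership-weighted rates of the individual groups.
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    compressed = 0.0
    for j in range(Pi.shape[0]):
        w = Pi[j]                           # membership weights of group j (length n)
        tr = w.sum()
        if tr < 1e-8:
            continue
        cov = (Z * w.unsqueeze(1)).T @ Z    # d x d weighted covariance of group j
        compressed = compressed + tr / (2 * n) * torch.logdet(I + d / (tr * eps ** 2) * cov)
    return coding_rate(Z, eps) - compressed

Training would then minimize the negative of rate_reduction, so that representations of nodes in the same group are compressed into a compact subspace while the representation set as a whole stays expanded.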
2. Examples
We provide a Jupyter Notebook to reproduce the orthogonality visualization results (Figure 3, Section 5.3), which show that node representations from the two classes are (nearly) orthogonal.
We also provide the training log on the Cora dataset in the Jupyter Notebook.
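As a complement to the notebook, the following is a small sketch (not taken from the repository) of how near-orthogonality between two classes could be checked numerically: the mean absolute cosine similarity between the two groups of representations should be close to zero.

import torch
import torch.nn.functional as F

def cross_class_similarity(Z, labels, c0=0, c1=1):
    # Mean absolute cosine similarity between representations of two classes;
    # a value close to 0 indicates the two classes are nearly orthogonal.
    Z = F.normalize(Z, dim=1)
    S = Z[labels == c0] @ Z[labels == c1].T
    return S.abs().mean()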
3. Run
Requirements
torch==1.7.1
torch_geometric==1.6.3
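Assuming a plain pip setup, the pinned versions above could be installed as follows; note that torch_geometric 1.6.3 in practice also needs matching torch-scatter/torch-sparse wheels for your torch/CUDA version, which this sketch omits.

pip install torch==1.7.1
pip install torch_geometric==1.6.3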
Data
We use the datasets built into torch_geometric; they will be downloaded automatically.
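For example, the Cora dataset can be loaded through torch_geometric's Planetoid class (the root path here is an assumption):

from torch_geometric.datasets import Planetoid

# Downloads Cora into ./data on first use.
dataset = Planetoid(root='./data', name='Cora')
data = dataset[0]  # a single graph with x, edge_index, y, and train/val/test masks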
Run Experiments
Run the experiments on the Cora, CiteSeer, and PubMed datasets (see the sketch below for a typical invocation).
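The exact entry script and flags depend on the repository; a hypothetical invocation, assuming a main.py script with a --dataset flag, might look like:

python main.py --dataset Cora
python main.py --dataset CiteSeer
python main.py --dataset PubMed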
4. Citation
If you use this code in your research, please cite our paper.
@inproceedings{han2022geometric,
title={Geometric Graph Representation Learning via Maximizing Rate Reduction},
author={Han, Xiaotian and Jiang, Zhimeng and Liu, Ninghao and Song, Qingquan and Li, Jundong and Hu, Xia},
booktitle={Proceedings of the ACM Web Conference 2022},
pages={1226--1237},
year={2022}
}