We propose a generalization of the transformer neural network architecture to arbitrary graphs: the Graph Transformer. Compared to the standard transformer, the highlights of the presented architecture are:
The attention mechanism is a function of neighborhood connectivity for each node in the graph (see the first sketch after this list).
The positional encoding is represented by Laplacian eigenvectors, which naturally generalize the sinusoidal positional encodings often used in NLP (see the second sketch after this list).
The layer normalization is replaced by a batch normalization layer.
The architecture is extended to include edge representations, which can be critical for tasks with rich information on the edges or pairwise interactions (such as bond types in molecules or relationship types in knowledge graphs).
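To make the attention, normalization, and edge-feature points concrete, here is a minimal PyTorch sketch of a single attention head in this style. It is a simplified dense-mask illustration under our own naming (`GraphAttentionHead` and its arguments are hypothetical), not the repository's sparse DGL-based implementation:

```python
# Minimal single-head sketch: attention restricted to graph neighbors,
# with projected edge features modulating the scores, followed by the
# BatchNorm the paper uses in place of LayerNorm. Dense-mask version
# for illustration only (the repo uses sparse message passing via DGL).
import torch
import torch.nn as nn

class GraphAttentionHead(nn.Module):  # hypothetical name
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.e = nn.Linear(dim, dim)   # edge-feature projection
        self.bn = nn.BatchNorm1d(dim)  # replaces LayerNorm
        self.scale = dim ** -0.5

    def forward(self, h, e, adj):
        # h: (N, d) node features, e: (N, N, d) edge features,
        # adj: (N, N) boolean mask, True where an edge exists.
        q, k, v = self.q(h), self.k(h), self.v(h)
        # Scaled dot-product scores for every node pair ...
        scores = (q.unsqueeze(1) * k.unsqueeze(0)).sum(-1, keepdim=True) * self.scale
        # ... modulated elementwise by the projected edge features.
        scores = scores * self.e(e)
        # Mask non-edges so each node attends only over its neighborhood.
        scores = scores.masked_fill(~adj.unsqueeze(-1), float('-inf'))
        attn = torch.softmax(scores, dim=1)
        attn = torch.nan_to_num(attn)  # rows of isolated nodes are all -inf
        out = (attn * v.unsqueeze(0)).sum(dim=1)  # (N, d)
        return self.bn(out + h)  # residual + BatchNorm, as in the paper
```

For example, with `h = torch.randn(6, 16)`, `e = torch.randn(6, 6, 16)`, and `adj = torch.rand(6, 6) > 0.5`, the module returns updated `(6, 16)` node features; the full model stacks several such heads, concatenates their outputs, and follows them with a feed-forward block.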
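Similarly, the Laplacian positional encodings from the second point can be precomputed once per graph before training. Below is a minimal NumPy sketch, assuming a connected graph (the function name `laplacian_pe` is our own):

```python
# Laplacian positional encodings: eigenvectors of the symmetric normalized
# graph Laplacian associated with the k smallest non-trivial eigenvalues
# serve as per-node position vectors.
import numpy as np

def laplacian_pe(adj, k):
    """adj: (N, N) symmetric 0/1 adjacency matrix -> (N, k) encodings."""
    adj = adj.astype(float)
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigval, eigvec = np.linalg.eigh(lap)  # eigh returns ascending eigenvalues
    # Skip the trivial constant eigenvector (eigenvalue 0 for a connected
    # graph) and keep the next k as positional features.
    return eigvec[:, 1:k + 1]
```

The resulting k-dimensional vectors are added to the node features at the input layer; since eigenvector signs are arbitrary, the paper randomly flips their signs during training.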
Figure: Block Diagram of Graph Transformer Architecture
@article{dwivedi2021generalization,
  title={A Generalization of Transformer Networks to Graphs},
  author={Dwivedi, Vijay Prakash and Bresson, Xavier},
  journal={AAAI Workshop on Deep Learning on Graphs: Methods and Applications},
  year={2021}
}
About
Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.