gammagl.layers

Convolutional Layers
Base class for creating message passing layers of the form \(\mathbf{x}_i^{\prime} = \gamma_{\mathbf{\Theta}} \left( \mathbf{x}_i, \bigoplus_{j \in \mathcal{N}(i)} \phi_{\mathbf{\Theta}} \left( \mathbf{x}_i, \mathbf{x}_j, \mathbf{e}_{j,i} \right) \right)\), where \(\bigoplus\) denotes a differentiable, permutation-invariant aggregation function, e.g., sum, mean, or max.

The graph convolutional operator from the "Semi-supervised Classification with Graph Convolutional Networks" paper

The graph attentional operator from the "Graph Attention Networks" paper

The simple graph convolutional operator from the "Simplifying Graph Convolutional Networks" paper

The GraphSAGE operator from the "Inductive Representation Learning on Large Graphs" paper

The GATv2 operator from the "How Attentive are Graph Attention Networks?" paper, which fixes the static attention problem of the standard GAT operator

The graph convolutional operator with initial residual connections and identity mapping (GCNII) from the "Simple and Deep Graph Convolutional Networks" paper

The approximate personalized propagation of neural predictions (APPNP) operator from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper

The relational graph convolutional operator from the "Modeling Relational Data with Graph Convolutional Networks" paper

The graph attention operator from the "Attention-based Graph Neural Network for Semi-supervised Learning" paper

The Jumping Knowledge layer aggregation module from the "Representation Learning on Graphs with Jumping Knowledge Networks" paper, based on either concatenation, max pooling, or LSTM-based aggregation

The Heterogeneous Graph Attention Operator from the "Heterogeneous Graph Attention Network" paper.

The Chebyshev spectral graph convolutional operator from the "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering" paper

A generic wrapper for computing graph convolution on heterogeneous graphs.

The SimpleHGN layer from the "Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks" paper

The Frequency Adaptive Graph Convolution operator from the "Beyond Low-Frequency Information in Graph Convolutional Networks" paper

The graph propagation operator from the "Adaptive Universal Generalized PageRank Graph Neural Network" paper

The Heterogeneous Graph Transformer (HGT) operator from the "Heterogeneous Graph Transformer" paper.

The sparsified neighborhood mixing graph convolutional operator from the "MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing" paper

The graph hard attentional operator from the "Graph Representation Learning via Hard and Channel-Wise Attention Networks" paper

The Principal Neighbourhood Aggregation graph convolution operator from the "Principal Neighbourhood Aggregation for Graph Nets" paper

The FiLM graph convolutional operator from the "GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation" paper

The composition-based multi-relational graph convolutional operator from the "Composition-based Multi-Relational Graph Convolutional Networks" paper

The Edge Convolution operator from the "Dynamic Graph CNN for Learning on Point Clouds" paper

The Heterogeneous Graph Propagation Operator from the "Heterogeneous Graph Propagation Network" paper.

The graph isomorphism operator from the "How Powerful are Graph Neural Networks?" paper

The Gaussian Mixture Model Convolution or MoNet operator from the "Geometric deep learning on graphs and manifolds using mixture model CNNs" paper

The ie-HGCN operator from the "Interpretable and Efficient Heterogeneous Graph Convolutional Network" paper.

The MGNNI operator from the "Multiscale Graph Neural Networks with Implicit Layers" paper

The graph convolutional operator from the "MA-GCL: Model Augmentation Tricks for Graph Contrastive Learning" paper
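All of the convolutional layers above specialize the message passing scheme of the base class. As a rough illustration (pure NumPy, not the GammaGL API; the function name is hypothetical), one propagation step with linear transform and sum aggregation over an edge list can be sketched as:

```python
import numpy as np

def message_passing(x, edge_index, weight):
    """One GCN-style propagation step: transform node features,
    then sum-aggregate messages per target node.
    Hypothetical helper for illustration, not part of gammagl.layers."""
    src, dst = edge_index          # edges point src -> dst
    h = x @ weight                 # linear transform (the message function)
    out = np.zeros_like(h)
    np.add.at(out, dst, h[src])    # scatter-add messages onto targets
    return out

# toy graph: 3 nodes with one-hot features, directed cycle 0->1->2->0
x = np.eye(3, dtype=np.float64)
edge_index = np.array([[0, 1, 2], [1, 2, 0]])
out = message_passing(x, edge_index, np.eye(3))
# each node ends up with its single in-neighbor's feature vector
```

Real layers differ mainly in the message function \(\phi\) (attention weights in GAT, relation-specific transforms in R-GCN) and the aggregation \(\bigoplus\).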
Pooling Layers
Returns batch-wise graph-level outputs by taking the channel-wise maximum across the node dimension, so that for a single graph \(\mathcal{G}_i\) its output is computed by \(\mathbf{r}_i = \max_{n=1}^{N_i} \mathbf{x}_n\)

Returns batch-wise graph-level outputs by taking the channel-wise minimum across the node dimension, so that for a single graph \(\mathcal{G}_i\) its output is computed by \(\mathbf{r}_i = \min_{n=1}^{N_i} \mathbf{x}_n\)

Returns batch-wise graph-level outputs by averaging node features across the node dimension, so that for a single graph \(\mathcal{G}_i\) its output is computed by \(\mathbf{r}_i = \frac{1}{N_i} \sum_{n=1}^{N_i} \mathbf{x}_n\)

Returns batch-wise graph-level outputs by adding node features across the node dimension, so that for a single graph \(\mathcal{G}_i\) its output is computed by \(\mathbf{r}_i = \sum_{n=1}^{N_i} \mathbf{x}_n\)

The global pooling operator from the "An End-to-End Deep Learning Architecture for Graph Classification" paper, where node features are sorted in descending order based on their last feature channel.
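The pooling readouts above differ only in the channel-wise reduction applied per graph. A minimal NumPy sketch of batch-wise sum and mean pooling over a `batch` assignment vector (hypothetical helper names, not the GammaGL API):

```python
import numpy as np

def global_sum_pool(x, batch, num_graphs):
    """Sum node features per graph: r_i = sum_n x_n.
    Hypothetical helper for illustration."""
    out = np.zeros((num_graphs, x.shape[1]))
    np.add.at(out, batch, x)        # scatter-add each node into its graph's row
    return out

def global_mean_pool(x, batch, num_graphs):
    """Average node features per graph: r_i = (1/N_i) * sum_n x_n."""
    sums = global_sum_pool(x, batch, num_graphs)
    counts = np.bincount(batch, minlength=num_graphs).reshape(-1, 1)
    return sums / counts

# two graphs in one batch: nodes 0,1 belong to graph 0; node 2 to graph 1
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
batch = np.array([0, 0, 1])
pooled_sum = global_sum_pool(x, batch, 2)    # [[4, 6], [5, 6]]
pooled_mean = global_mean_pool(x, batch, 2)  # [[2, 3], [5, 6]]
```

Max and min pooling follow the same pattern with a segment-wise maximum or minimum in place of the scatter-add.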
Model
Graph Convolutional Network proposed in "Semi-supervised Classification with Graph Convolutional Networks" paper.

The graph attentional operator from the "Graph Attention Networks" paper.

The simplifying graph convolutional network (SGC) model from the "Simplifying Graph Convolutional Networks" paper.

The GraphSAGE operator from the "Inductive Representation Learning on Large Graphs" paper

The graph convolutional operator with initial residual connections and identity mapping (GCNII) from the "Simple and Deep Graph Convolutional Networks" paper.

The approximate personalized propagation of neural predictions (APPNP) model from the "Predict then Propagate: Graph Neural Networks meet Personalized PageRank" paper.

The FiLM graph convolutional operator from the "GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation" paper.

The relational graph convolutional network (R-GCN) model from the "Modeling Relational Data with Graph Convolutional Networks" paper.

The composition-based multi-relational graph convolutional network (CompGCN) model from the "Composition-based Multi-Relational Graph Convolutional Networks" paper.

The graph attention operator from the "Attention-based Graph Neural Network for Semi-supervised Learning" paper.

The Deep Graph Infomax (DGI) model from the "Deep Graph Infomax" paper.

Graph Convolutional Network proposed in "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering" paper.

The SimpleHGN model from the "Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks" paper.

The Frequency Adaptive Graph Convolution operator from the "Beyond Low-Frequency Information in Graph Convolutional Networks" paper.

Graph Convolutional Network proposed in "Adaptive Universal Generalized PageRank Graph Neural Network" paper.

DGCNN proposed in "An End-to-End Deep Learning Architecture for Graph Classification" paper.

MixHop proposed in "MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing" paper.

The Heterogeneous Graph Transformer (HGT) model proposed in the "Heterogeneous Graph Transformer" paper.

The graph hard attentional operator from the "Graph Representation Learning via Hard and Channel-Wise Attention Networks" paper.

The Edge Convolution operator from the "Dynamic Graph CNN for Learning on Point Clouds" paper.

The FiLM graph convolutional operator from the "GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation" paper.

Provides an adjacency matrix estimation implementation based on the Expectation-Maximization (EM) algorithm.

The DeepWalk model from the "DeepWalk: Online Learning of Social Representations" paper, which learns node embeddings from sampled fixed-length random walks.

The Node2Vec model from the "node2vec: Scalable Feature Learning for Networks" paper, which learns node embeddings from biased second-order random walks.

The Variational Graph Auto-Encoder (VGAE) model proposed in the "Variational Graph Auto-Encoders" paper.

The Graph Auto-Encoder (GAE) model proposed in the "Variational Graph Auto-Encoders" paper.

HPN proposed in "Heterogeneous Graph Propagation Network" paper.

The Gaussian Mixture Model or MoNet from the "Geometric deep learning on graphs and manifolds using mixture model CNNs" paper.

The calibration GCN proposed in the "Be Confident! Towards Trustworthy Graph Neural Networks via Confidence Calibration" paper.

The CoGSL model proposed in the `"Compact Graph Structure Learning via Mutual Information Compression" <https://arxiv.org/pdf/2201.05540.pdf>`_ paper.

The Specformer model from the "Specformer: Spectral Graph Neural Networks Meet Transformers" paper
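Several of the models above (e.g. APPNP and the generalized PageRank model) rest on personalized-PageRank-style propagation, \(Z^{(k+1)} = (1-\alpha)\hat{A}Z^{(k)} + \alpha H\), where \(H\) are the predictions of a base network and \(\hat{A}\) is the normalized adjacency matrix. A minimal dense NumPy sketch of that iteration (hypothetical code, not the GammaGL implementation):

```python
import numpy as np

def appnp_propagate(h, adj, alpha=0.1, k=10):
    """Approximate personalized propagation:
    z <- (1 - alpha) * adj @ z + alpha * h, iterated k times.
    `adj` is assumed to already be the normalized adjacency A_hat.
    Hypothetical helper for illustration."""
    z = h.copy()
    for _ in range(k):
        z = (1 - alpha) * adj @ z + alpha * h
    return z

# toy 2-node graph whose normalized adjacency averages both nodes
adj = np.array([[0.5, 0.5], [0.5, 0.5]])
h = np.array([[1.0], [0.0]])     # base predictions to propagate
z = appnp_propagate(h, adj, alpha=0.2, k=50)
```

The teleport term \(\alpha H\) keeps every node anchored to its own prediction, so propagation smooths features without fully washing them out; on this toy graph the iteration converges to a fixed point rather than collapsing both rows to their common mean.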