
- The efficient graph convolution from the "Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions" paper.
- The spectral modularity pooling operator from the "Graph Clustering with Graph Neural Networks" paper.
- The continuous kernel-based convolutional operator from the "Neural Message Passing for Quantum Chemistry" paper.
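The continuous kernel-based convolution conditions each message on its edge features: a small network maps an edge feature to a weight that transforms the neighbor's features before aggregation. A minimal pure-Python sketch with scalar node features (not any library's implementation; `kernel_net` is a hypothetical stand-in for the learned kernel network):

```python
def kernel_net(edge_feat, a=0.5, b=0.1):
    # Hypothetical learned kernel: a linear map from an edge feature to a
    # multiplicative weight (placeholder for a trained network).
    return a * edge_feat + b

def edge_conditioned_conv(x, edges, edge_feats):
    """x: list of scalar node features; edges: list of (src, dst) pairs;
    edge_feats: one edge feature per entry in `edges`."""
    out = list(x)  # root term: keep each node's own feature
    for (src, dst), e in zip(edges, edge_feats):
        # message from src to dst, weighted by the kernel of the edge feature
        out[dst] += kernel_net(e) * x[src]
    return out

x = [1.0, 2.0, 3.0]
edges = [(0, 1), (2, 1)]
edge_feats = [1.0, 2.0]
print(edge_conditioned_conv(x, edges, edge_feats))
```

In practice the kernel network outputs a full weight matrix per edge and node features are vectors; the scalar case above only illustrates the message-weighting idea.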

- The Neural Fingerprint model from the "Convolutional Networks on Graphs for Learning Molecular Fingerprints" paper, for generating fingerprints of molecules.
- Graph normalization over individual graphs, as described in the "GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training" paper.
- The equilibrium aggregation layer from the "Equilibrium Aggregation: Encoding Sets via Optimization" paper.
- The Attentive FP model for molecular representation learning from the "Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism" paper, based on graph attention mechanisms.
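GraphNorm normalizes node features per graph, but with a learnable coefficient on the mean shift so the model can control how much of the mean is subtracted. A minimal sketch for one feature dimension of a single graph, assuming the learnable mean-shift formulation from the paper (parameter values here are illustrative defaults, not trained):

```python
import math

def graph_norm(x, alpha=1.0, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature dimension of one graph's node features.

    alpha: learnable mean-shift coefficient; gamma, beta: learnable
    scale and bias (illustrative defaults, not trained values)."""
    n = len(x)
    mu = sum(x) / n                          # per-graph mean
    shifted = [xi - alpha * mu for xi in x]  # learnable mean shift
    var = sum(s * s for s in shifted) / n    # variance of shifted values
    return [gamma * s / math.sqrt(var + eps) + beta for s in shifted]

print(graph_norm([1.0, 2.0, 3.0]))
```

With `alpha=1.0` this reduces to standard per-graph standardization; learning `alpha < 1` lets the network keep part of the mean signal.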

- The heterogeneous edge-enhanced graph attentional operator from the "Heterogeneous Edge-Enhanced Graph Attention Network For Multi-Agent Trajectory Prediction" paper.
- Dropout: during training, randomly zeroes some of the elements of the input tensor with probability $p$, using samples from a Bernoulli distribution.
- Channel-wise dropout, which zeroes out entire channels, where a channel is a 1D feature map (e.g., the $j$-th channel of the $i$-th sample in the batched input is a 1D tensor $\text{input}[i, j]$).
- LSTM-style aggregation, in which the elements to aggregate are interpreted as a sequence, as described in the "Inductive Representation Learning on Large Graphs" paper.
- The Gini coefficient from the "Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity" paper.
- The DistMult model from the "Embedding Entities and Relations for Learning and Inference in Knowledge Bases" paper.
- The path integral based convolutional operator from the "Path Integral Based Convolution and Pooling for Graph Neural Networks" paper.
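DistMult scores a knowledge-graph triple (head, relation, tail) with a trilinear product of their embeddings: $\text{score}(h, r, t) = \sum_i h_i \, r_i \, t_i$. A minimal sketch (the embedding vectors below are illustrative placeholders, not trained values):

```python
def distmult_score(head, rel, tail):
    # Trilinear product: elementwise product of the three embeddings, summed.
    return sum(h * r * t for h, r, t in zip(head, rel, tail))

# Placeholder embeddings for one (head, relation, tail) triple.
head = [0.5, 1.0, -0.5]
rel  = [1.0, 0.5, 2.0]
tail = [2.0, 1.0, 1.0]
print(distmult_score(head, rel, tail))
```

Note that the score is symmetric in head and tail, which is why DistMult can only model symmetric relations; higher scores indicate more plausible triples.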

- The graph neural network from the "Dynamic Graph CNN for Learning on Point Clouds" paper, using the EdgeConv operator for message passing.
- The Softplus function, applied element-wise: $\text{Softplus}(x) = \frac{1}{\beta} \log(1 + \exp(\beta x))$.
- GRU aggregation, in which the elements to aggregate are interpreted as a sequence, as described in the "Graph Neural Networks with Adaptive Readouts" paper.
- A criterion that measures the triplet loss given input tensors $x_1$, $x_2$, $x_3$ and a margin with a value greater than $0$.
- The Deep Graph Infomax model from the "Deep Graph Infomax" paper, based on a user-defined encoder $\mathcal{E}$, a summary model $\mathcal{R}$, and a corruption function $\mathcal{C}$.
- The differentiable pooling operator from the "Hierarchical Graph Representation Learning with Differentiable Pooling" paper.
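The Softplus formula is easy to verify directly. A minimal sketch, including the threshold trick commonly used in practice to avoid overflow for large inputs:

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    """Softplus(x) = (1/beta) * log(1 + exp(beta * x))."""
    if beta * x > threshold:
        return x  # log(1 + exp(z)) ~= z for large z, so fall back to identity
    return (1.0 / beta) * math.log(1.0 + math.exp(beta * x))

print(softplus(0.0))  # log(2), about 0.6931
```

Softplus is a smooth approximation of ReLU; larger `beta` sharpens it toward the ReLU kink at zero.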

- A sampling algorithm from the "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space" paper, which iteratively samples the point most distant from the already-selected points.
- The fused graph attention operator from the "Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective" paper.
- Multi-head attention, which allows the model to jointly attend to information from different representation subspaces, as described in the "Attention Is All You Need" paper.
- The local extremum graph neural network operator from the "ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations" paper.
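The farthest point sampling idea can be sketched in a few lines: keep, for every point, its distance to the nearest already-selected point, and greedily pick the point where that distance is largest. A minimal pure-Python version (a sketch of the algorithm, not any library's implementation; the seed choice is arbitrary):

```python
def sq_dist(a, b):
    # Squared Euclidean distance between two points of equal dimension.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def farthest_point_sampling(points, k, seed_idx=0):
    """Greedily select k point indices, each maximizing the distance
    to the set of points already selected."""
    selected = [seed_idx]
    # Min squared distance from each point to the selected set so far.
    dists = [sq_dist(p, points[seed_idx]) for p in points]
    while len(selected) < k:
        idx = max(range(len(points)), key=lambda i: dists[i])
        selected.append(idx)
        for i, p in enumerate(points):
            dists[i] = min(dists[i], sq_dist(p, points[idx]))
    return selected

pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(farthest_point_sampling(pts, 3))  # [0, 3, 2]
```

Tracking only the minimum distance per point keeps each iteration linear in the number of points, so sampling `k` points costs O(k·n).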
