Learning Parametrised Graph Shift Operators

Authors: George Dasoulas, Johannes F. Lutzeyer, Michalis Vazirgiannis

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of the PGSO in a simulation study and on 8 real-world datasets. In Section 5.1, we show the ability of γ(A, S) to adapt to varying sparsity scenarios through a stochastic blockmodel study, in Section 5.2 we study the sensitivity of the GNN-PGSO's performance to different γ(A, S) initialisations and in Section 5.3 we evaluate the contribution of PGSO and mPGSO to the GNN performance in real-world datasets. (A hedged sketch of such a parametrised operator follows the table.)
Researcher Affiliation | Collaboration | George Dasoulas 1,2, Johannes F. Lutzeyer 1 & Michalis Vazirgiannis 1; 1 DaSciM, LIX, École Polytechnique, Institut Polytechnique de Paris, France; 2 Noah's Ark Lab, Huawei Technologies France; {georgios.dasoulas, johannes.lutzeyer}@polytechnique.edu, mvazirg@lix.polytechnique.fr
Pseudocode | No | The paper describes mathematical formulations and examples of application (Equation 2, etc.) but does not include a formal pseudocode block or algorithm.
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | For node classification, we have used the well-examined datasets Cora and CiteSeer (McCallum et al., 2000; Giles et al., 1998) and ogbn-arxiv, that is a citation network from a recently popular collection of graph benchmarks, the Open Graph Benchmark (Hu et al., 2020). For graph classification, we have used the extensively-studied TU datasets MUTAG, PTC-MR, IMDB-BINARY and IMDB-MULTI (Kersting et al., 2016) and the OGBG-MOLHIV dataset from OGB (Hu et al., 2020).
Dataset Splits | Yes | For the node classification datasets Cora and CiteSeer, we performed cross validation with the train/validation/test splits being the same as in Kipf & Welling (2017), while for ogbn-arxiv, we used the same splitting method used in Hu et al. (2020), according to the publication dates. For the TU datasets, we performed 10-fold cross validation with grid search hyper-parameter optimisation and for OGBG-MOLHIV, following Hu et al. (2020) we used a scaffold splitting approach and measured the validation ROC-AUC. (See the data-loading sketch after the table.)
Hardware Specification | No | The paper discusses computational efficiency and numerical stability, but it does not specify any hardware details such as CPU, GPU, or TPU models used for experiments.
Software Dependencies | No | The paper mentions the use of the Adam optimizer but does not specify any programming languages, libraries, or their version numbers.
Experiment Setup | Yes | In our experiments, we use the Adam optimizer (Kingma & Ba, 2015) with a weight decay on the parameters of 5 × 10⁻⁴ and an initial learning rate of 0.005 for the exponential parameters and an initial learning rate of 0.01 for all other model parameters. The differentiation between the two learning rates is crucial for the model training, as the fluctuation of the exponential parameters has a higher impact on the model behavior than the rest of the model parameters. We also used a learning rate scheduler that decayed both the learning rates by 0.5 every 50 epochs. ... We use grid search to tune the hyper-parameters of the models (dimensionality of hidden units, number of GNN layers, number of epochs and the batch size). ... Model Design: The number of hidden units that was grid-searched was within {16, 32, 64}. The number of GNN layers that were used were within {1, 2, 3, 4}. Experiment Design: The number of epochs for the model training was {50, 100, 200}, batch size within {16, 32, 64} and a dropout ratio of 0.5. (See the optimiser sketch after the table.)
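
The rows above repeatedly refer to the learnable graph shift operator γ(A, S) and its exponential parameters. The PyTorch module below is a minimal sketch of the idea: a shift operator built from the adjacency matrix with learnable scalar weights, degree exponents and self-loop weight. The functional form, the parameter names m, e and a, and the initialisation (chosen so that the operator starts near the GCN-style normalised adjacency) are illustrative assumptions and are not taken verbatim from the excerpts above; the exact parametrisation is given in the paper's Equation 2.

```python
import torch
import torch.nn as nn

class PGSOSketch(nn.Module):
    """Illustrative parametrised graph shift operator (not the paper's exact form).

    Assumed form: m1 * D_a^{e1} + m2 * D_a^{e2} (A + a I) D_a^{e3} + m3 * I,
    where D_a is the degree matrix of (A + a I) and all scalars are learnable.
    """

    def __init__(self):
        super().__init__()
        self.m = nn.Parameter(torch.tensor([0.0, 1.0, 0.0]))    # additive component weights
        self.e = nn.Parameter(torch.tensor([0.0, -0.5, -0.5]))  # degree exponents
        self.a = nn.Parameter(torch.tensor(1.0))                 # self-loop weight

    def forward(self, adj: torch.Tensor) -> torch.Tensor:
        n = adj.size(0)
        eye = torch.eye(n, device=adj.device, dtype=adj.dtype)
        adj_a = adj + self.a * eye                    # adjacency with weighted self-loops
        deg_a = adj_a.sum(dim=1).clamp(min=1e-6)      # regularised degree vector
        d1 = torch.diag(deg_a ** self.e[0])
        d2 = torch.diag(deg_a ** self.e[1])
        d3 = torch.diag(deg_a ** self.e[2])
        return self.m[0] * d1 + self.m[1] * (d2 @ adj_a @ d3) + self.m[2] * eye
```

With this particular initialisation the operator starts as D^{-1/2}(A + I)D^{-1/2}, i.e. the familiar GCN propagation matrix, and can move away from it as the scalars are trained.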
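The Open Datasets and Dataset Splits rows cite standard public benchmarks. Because the paper does not state its software stack (see Software Dependencies), the snippet below is only a sketch of how those benchmarks and their standard splits are commonly obtained with the PyTorch Geometric and OGB packages; the calls shown are real APIs of those packages, but their use by the authors is an assumption.

```python
from torch_geometric.datasets import Planetoid, TUDataset
from ogb.nodeproppred import PygNodePropPredDataset
from ogb.graphproppred import PygGraphPropPredDataset

# Citation networks with the public train/validation/test split of Kipf & Welling (2017).
cora = Planetoid(root="data", name="Cora", split="public")
citeseer = Planetoid(root="data", name="CiteSeer", split="public")

# ogbn-arxiv: OGB provides the split by publication date mentioned in the quote.
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data")
arxiv_split = arxiv.get_idx_split()      # dict with 'train', 'valid', 'test' node indices

# TU graph-classification datasets; 10-fold cross-validation is set up by the user.
mutag = TUDataset(root="data", name="MUTAG")
ptc_mr = TUDataset(root="data", name="PTC_MR")

# ogbg-molhiv: OGB ships the scaffold split, evaluated here with validation ROC-AUC.
molhiv = PygGraphPropPredDataset(name="ogbg-molhiv", root="data")
molhiv_split = molhiv.get_idx_split()    # scaffold-based train/valid/test graph indices
```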
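The optimisation details quoted under Experiment Setup (Adam with weight decay 5 × 10⁻⁴, a learning rate of 0.005 for the exponential parameters, 0.01 for all other parameters, and both rates halved every 50 epochs) map directly onto PyTorch parameter groups and a step scheduler. The sketch below assumes a PyTorch implementation and a hypothetical model that reuses the PGSOSketch module above; the model structure and attribute names are assumptions, not details stated by the paper.

```python
import torch
import torch.nn as nn

# Hypothetical model: the PGSO sketch above plus a linear layer standing in
# for the rest of the GNN.
model = nn.Module()
model.pgso = PGSOSketch()
model.lin = nn.Linear(64, 7)

# Separate the exponential parameters, which receive the lower learning rate.
exp_params = [model.pgso.e]
exp_ids = {id(p) for p in exp_params}
other_params = [p for p in model.parameters() if id(p) not in exp_ids]

optimizer = torch.optim.Adam(
    [
        {"params": exp_params, "lr": 0.005},    # exponential PGSO parameters
        {"params": other_params, "lr": 0.01},   # all remaining model parameters
    ],
    weight_decay=5e-4,
)

# Decay both learning rates by 0.5 every 50 epochs, as in the quoted setup.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):                        # epoch budget grid-searched in {50, 100, 200}
    # ... forward pass, loss, optimizer.zero_grad(), loss.backward(), optimizer.step() ...
    scheduler.step()
```

The two parameter groups reflect the quote's observation that the exponential parameters affect model behaviour more strongly than the remaining weights; hidden units, number of GNN layers, epochs and batch size would then be tuned with the grid search described above.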