Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Training Graph Neural Networks Subject to a Tight Lipschitz Constraint

Authors: Simona Ioana Juvina, Ana Antonia Neacșu, Jérôme Rony, Jean-Christophe Pesquet, Corneliu Burileanu, Ismail Ben Ayed

TMLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We report experiments on various datasets in the context of node classification tasks, showing the effectiveness of our constrained GNN model. ... We report mean accuracy with standard errors calculated from 10 seeded splits, with a training-validation-testing split ratio of 60%, 20%, and 20%, respectively. For both architectures, we trained models subject to various Lipschitz bound constraints (ϑ ∈ [1, 30]), to find the best trade-off between robustness and performance.
Researcher Affiliation Academia 1 National University of Science and Technology Politehnica Bucharest, SpeeD; 2 Université Paris-Saclay, Inria, CentraleSupélec, CVN; 3 ÉTS Montréal
Pseudocode Yes Algorithm 1 Accelerated version of the Dual Forward-Backward (DFB) algorithm
Open Source Code Yes A full PyTorch implementation is available at https://github.com/simona-juvina/lipschitz-gnn.
Open Datasets Yes The considered datasets are Facebook (Rozemberczki et al., 2021), GitHub (Rozemberczki et al., 2021), LastFM Asia (Rozemberczki & Sarkar, 2020), and Deezer Europe (Rozemberczki & Sarkar, 2020). These graphs are attributed, allow for binary and multi-class node classification, and are variable in size and density. More details about the datasets are provided in Appendix C.1.
Dataset Splits Yes For all datasets, we conducted experiments using our two proposed networks: robust GCN and robust GraphSAGE. We report mean accuracy with standard errors calculated from 10 seeded splits, with a training-validation-testing split ratio of 60%, 20%, and 20%, respectively.
Hardware Specification Yes All experiments conducted in this paper were performed on an NVIDIA A100 80GB GPU.
Software Dependencies No The paper mentions a 'PyTorch implementation', the 'Adam optimizer', and the 'Auto-PGD attack' but does not specify version numbers for any of them. For example, 'Adam optimizer (Kingma & Ba, 2015)' is cited, but no software version is provided.
Experiment Setup Yes Unless otherwise stated, we consider networks with m = 3 layers and hidden feature dimension Ni = 16 for i ∈ {1, 2}. ... Table 6: Training hyperparameters. Training: hidden dimension 16, max. num. epochs 2000, early stopping patience 200, initial learning rate 0.01, learning rate patience 100. DFB algorithm: max. num. iterations 100, α = 2.1, max. Lipschitz difference 0.01.
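The 10 seeded 60/20/20 splits quoted above can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name `seeded_split` and the node count are assumptions for the example.

```python
import numpy as np

def seeded_split(num_nodes, seed, ratios=(0.6, 0.2, 0.2)):
    """Shuffle node indices with a fixed seed and cut them into
    train/validation/test index sets (60/20/20 by default)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_nodes)
    n_train = int(ratios[0] * num_nodes)
    n_val = int(ratios[1] * num_nodes)
    # np.split cuts the permuted indices at the two boundaries.
    train, val, test = np.split(idx, [n_train, n_train + n_val])
    return train, val, test

# One split per seed, mirroring the reported "10 seeded splits".
splits = [seeded_split(1000, seed) for seed in range(10)]
```

Reporting mean accuracy with standard errors over such splits reduces the variance that a single arbitrary partition would introduce.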