Generalizing Downsampling from Regular Data to Graphs

Authors: Davide Bacciu, Alessio Conte, Francesco Landolfi

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We leverage these concepts to define a graph pooling mechanism that we empirically assess in graph classification tasks, providing a greedy algorithm that allows efficient parallel implementation on GPUs, and showing that it compares favorably against pooling methods in literature." (Abstract) "Table 1 summarizes the average classification accuracy obtained on selected classification benchmarks." (Section 6, Experimental Analysis)
Researcher Affiliation | Academia | Davide Bacciu, Alessio Conte, Francesco Landolfi; Department of Computer Science, Università di Pisa
Pseudocode | Yes | Algorithm 1: Parallel Greedy k-MIS algorithm, adapted from Blelloch, Fineman, and Shun (2012); Algorithm 2: Parallel k-MIS partitioning algorithm.
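To make the pseudocode assessment concrete, the sketch below gives a minimal sequential version of a greedy k-MIS, i.e. a maximal set of nodes that are pairwise more than k hops apart. It only illustrates the underlying idea and is not the paper's parallel GPU Algorithm 1; the function name `greedy_k_mis`, the networkx dependency, and the node-priority order are assumptions made for this example.

```python
# Illustrative, sequential greedy k-MIS sketch (NOT the paper's parallel
# Algorithm 1): pick a node, then discard its whole k-hop neighborhood.
import networkx as nx

def greedy_k_mis(graph: nx.Graph, k: int, order=None):
    """Greedily select nodes so that any two selected nodes are > k hops apart."""
    order = list(graph.nodes) if order is None else order  # priority order assumed
    removed, selected = set(), []
    for v in order:
        if v in removed:
            continue
        selected.append(v)
        # Drop every node within distance k of the pick (including v itself).
        reachable = nx.single_source_shortest_path_length(graph, v, cutoff=k)
        removed.update(reachable)
    return selected

# On a 10-node path graph with k = 2, every third node survives,
# mirroring stride-(k+1) downsampling on a regular 1-D grid.
print(greedy_k_mis(nx.path_graph(10), k=2))  # [0, 3, 6, 9]
```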
Open Source Code | No | No explicit statement or link to the paper's own source code repository was found.
Open Datasets | Yes | DD (Dobson and Doig 2003), GITHUB STARGAZERS (Rozemberczki, Kiss, and Sarkar 2020), REDDIT-BINARY and REDDIT-MULTI-5K/12K (Yanardag and Vishwanathan 2015)
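As a hedged sketch of how these benchmarks could be retrieved in a reproduction attempt, the snippet below uses PyTorch Geometric's TUDataset loader; the identifier used for GITHUB STARGAZERS (`github_stargazers`) is an assumption, since the paper does not say how the data were obtained.

```python
# Sketch: fetch the listed TU benchmarks with PyTorch Geometric's TUDataset.
from torch_geometric.datasets import TUDataset

names = ["DD", "REDDIT-BINARY", "REDDIT-MULTI-5K", "REDDIT-MULTI-12K",
         "github_stargazers"]  # last identifier assumed for GITHUB STARGAZERS

for name in names:
    dataset = TUDataset(root="data/TU", name=name)
    print(name, len(dataset), "graphs,", dataset.num_classes, "classes")
```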
Dataset Splits | Yes | "All datasets were divided in training (70%), validation (10%), and test (20%) sets using a randomized stratified split with fixed seed."
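The split protocol can be approximated as follows; this is a minimal sketch assuming scikit-learn's train_test_split, placeholder per-graph labels, and a placeholder seed of 42, since the actual seed and tooling are not reported.

```python
# Sketch of a 70/10/20 randomized stratified split with a fixed seed.
from sklearn.model_selection import train_test_split

labels = [i % 2 for i in range(20)]          # placeholder per-graph class labels
idx = list(range(len(labels)))

# 70% train, then split the remaining 30% into 10% validation and 20% test.
train_idx, rest_idx = train_test_split(
    idx, train_size=0.7, stratify=labels, random_state=42)        # seed assumed
val_idx, test_idx = train_test_split(
    rest_idx, train_size=1 / 3,              # one third of 30% = 10% of the total
    stratify=[labels[i] for i in rest_idx], random_state=42)
```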
Hardware Specification | No | The paper states 'on a single GPU' but does not provide specific model numbers or detailed hardware specifications.
Software Dependencies | No | "All models have been implemented and trained using PyTorch (Paszke et al. 2019) and PyTorch Geometric (Fey and Lenssen 2019)." Specific version numbers for these libraries are not provided.
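Because no versions are pinned, a reproduction run can at least record the installed library versions alongside its results; a minimal sketch:

```python
# Log the environment actually used, since the paper does not pin versions.
import torch
import torch_geometric

print("PyTorch:", torch.__version__)
print("PyTorch Geometric:", torch_geometric.__version__)
print("CUDA (torch build):", torch.version.cuda)
```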
Experiment Setup | No | The paper describes the general model architecture, the optimizer (Adam), and notes that hyperparameters such as the reduction factor (k or r) were chosen during model selection, but it does not report numerical values for the learning rate, batch size, or number of training epochs.
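A hedged sketch of the reported setup follows: an Adam-optimized training loop for a PyTorch Geometric graph classifier. The learning rate, batch size, and epoch count are placeholders (the paper does not report them), and `model` stands for any graph classification network that returns per-graph logits.

```python
# Sketch of an Adam training loop; all hyperparameter values are placeholders.
import torch
from torch_geometric.loader import DataLoader

def train(model, dataset, epochs=100, batch_size=32, lr=1e-3):  # values assumed
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            logits = model(batch)                       # per-graph class logits
            loss = torch.nn.functional.cross_entropy(logits, batch.y)
            loss.backward()
            optimizer.step()
```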