Strategic Classification with Graph Neural Networks

Authors: Itay Eilat, Ben Finkelshtein, Chaim Baskin, Nir Rosenfeld

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on several real networked datasets demonstrate the utility of our approach.
Researcher Affiliation | Academia | Itay Eilat, Ben Finkelshtein, Chaim Baskin, Nir Rosenfeld; Technion Israel Institute of Technology; {itayeilat,benfin}@campus.technion.ac.il, {chaimbaskin,nirr}@cs.technion.ac.il
Pseudocode | No | The paper describes computational steps for single and multiple rounds but does not present them in a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Our code is publicly available at: http://github.com/StrategicGNNs/Code.
Open Datasets | Yes | We use three benchmark datasets used extensively in the GNN literature: Cora, CiteSeer, and PubMed (Sen et al., 2008; Kipf & Welling, 2017), and adapt them to our setting.
Dataset Splits | Yes | All three datasets include a standard train-validation-test split, which we adopt for our use. For our purposes, we make no distinction between train and validation, and use both sets for training purposes. ... In Table 2, the number of train samples is denoted n_train, and the number of inductive test samples is denoted n_test (all original transductive test sets include 1,000 samples). [See the data-loading sketch after this table.]
Hardware Specification | No | The paper discusses the experimental setup and hyperparameters but does not mention specific hardware models such as CPU or GPU types.
Software Dependencies | No | The paper mentions using 'Adam' for optimization but does not provide specific version numbers for any software, libraries, or dependencies.
Experiment Setup | Yes | We train using Adam and set hyperparameters according to Wu et al. (2019) (learning rate = 0.2, weight decay = 1.3 × 10⁻⁵). Training is stopped after 20 epochs (this usually suffices for convergence). Hyperparameters were determined based only on the train set: τ = 0.05, chosen to be the smallest value which retained stable training, and T = 3, as training typically saturates then (we also explore varying depths). We use β-scaled 2-norm costs, c_β(x, x′) = β‖x − x′‖₂, β ∈ ℝ₊, which induce a maximal moving distance of d_β = 2/β. We observed that values around d = 0.5 permit almost arbitrary movement; we therefore experiment in the range d ∈ [0, 0.5], but focus primarily on the mid-point d = 0.25 (note d = 0 implies no movement). Mean and standard errors are reported over five random initializations. [See the configuration sketch after this table.]
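
The "Dataset Splits" row states that the standard train-validation-test splits of Cora, CiteSeer, and PubMed are adopted, with train and validation merged for training. A minimal sketch of that handling is below, assuming PyTorch Geometric's Planetoid loader; the paper's own data pipeline at http://github.com/StrategicGNNs/Code may differ.

```python
# Sketch: load a Planetoid benchmark and merge train + validation masks,
# as described in the "Dataset Splits" row. Assumes PyTorch Geometric.
from torch_geometric.datasets import Planetoid

def load_split(name: str, root: str = "data"):
    """Return the graph and boolean masks for one benchmark dataset."""
    dataset = Planetoid(root=root, name=name)  # "Cora", "CiteSeer", or "PubMed"
    data = dataset[0]
    # The paper makes no distinction between train and validation:
    # both sets are used for training.
    train_mask = data.train_mask | data.val_mask
    test_mask = data.test_mask                 # standard 1,000-node test set
    return data, train_mask, test_mask

data, train_mask, test_mask = load_split("Cora")
print(train_mask.sum().item(), "training nodes,", test_mask.sum().item(), "test nodes")
```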
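
The "Experiment Setup" row quotes concrete hyperparameters and the β-scaled 2-norm cost. The sketch below restates those values in code; the optimizer settings and cost formula follow the quoted text, while the model is only a placeholder (the paper's strategic GNN is not reproduced here), and the variable names are illustrative.

```python
# Sketch of the quoted experiment configuration. Model is a stand-in only.
import torch

def cost(x_new: torch.Tensor, x: torch.Tensor, beta: float) -> torch.Tensor:
    """beta-scaled 2-norm cost: c_beta(x, x') = beta * ||x - x'||_2."""
    return beta * torch.norm(x_new - x, dim=-1)

def beta_for_max_distance(d: float) -> float:
    """Invert d_beta = 2 / beta: the beta that caps movement at distance d."""
    return 2.0 / d

beta = beta_for_max_distance(0.25)   # focal setting d = 0.25 implies beta = 8

model = torch.nn.Linear(1433, 7)     # placeholder with Cora's dimensions; not the paper's GNN
optimizer = torch.optim.Adam(model.parameters(), lr=0.2, weight_decay=1.3e-5)

num_epochs = 20                      # training stopped after 20 epochs
tau = 0.05                           # smallest value that retained stable training (per the paper)
T = 3                                # depth / number of rounds at which training saturates
```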