Conformal Inductive Graph Neural Networks

Authors: Soroush H. Zargarbashi, Aleksandar Bojchevski

ICLR 2024

Reproducibility assessment. Each variable below is listed with its result and the supporting excerpt (LLM response) from the paper:

Research Type: Experimental
Evidence: "6 EXPERIMENTAL EVALUATION"

Researcher Affiliation: Academia
Evidence: "Soroush H. Zargarbashi, CISPA Helmholtz Center for Information Security, zargarbashi@cs.uni-koeln.de; Aleksandar Bojchevski, University of Cologne, bojchevski@cs.uni-koeln.de"

Pseudocode: Yes
Evidence: "Algorithm 1: NodeEx and EdgeEx CP for inductive node classification"

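For context, Algorithm 1 adapts conformal prediction (CP) to the inductive setting, where calibration scores must be handled as new nodes or edges arrive. The sketch below shows only the standard split-CP baseline that such algorithms build on, using a threshold-style (TPS) score; the function name and the choice of score are illustrative assumptions, not the paper's exact NodeEx/EdgeEx procedure.

```python
import numpy as np

def split_cp_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Vanilla split conformal prediction with a TPS-style score
    (1 - softmax probability of the true class). Illustrative only."""
    n = len(cal_labels)
    # Nonconformity score on calibration nodes: 1 - p_true
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected (1 - alpha) quantile of calibration scores
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # Prediction set: every class whose score falls below the threshold
    return test_probs >= 1.0 - q_hat  # boolean mask, shape (n_test, n_classes)
```

Under exchangeability of calibration and test scores, these sets cover the true label with probability at least 1 - alpha; the paper's NodeEx and EdgeEx variants aim to preserve that guarantee when inductive updates to the graph break exchangeability.
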
Open Source Code: Yes
Evidence: "The code is accessible at github.com/soroushzargar/conformal-node-classification."

Open Datasets: Yes
Evidence: "We consider 9 different datasets and 4 models: GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), and APPNP (Klicpera et al., 2019) as structure-aware and MLP as a structure-independent model. We evaluate our NodeEx and EdgeEx CP on the common citation graphs CoraML (McCallum et al., 2004), CiteSeer (Sen et al., 2008), PubMed (Namata et al., 2012), Coauthor Physics and Coauthor CS (Shchur et al., 2018), and co-purchase graphs Amazon Photos and Computers (McAuley et al., 2015; Shchur et al., 2018) (details in E)."

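All of the graphs quoted above ship with standard graph-learning libraries, so availability is straightforward to verify. A minimal loading sketch, assuming PyTorch Geometric (the paper does not state which loader it uses; the `root` path is a hypothetical cache directory):

```python
from torch_geometric.datasets import Amazon, CitationFull, Coauthor

root = "./data"  # hypothetical local cache directory
cora_ml   = CitationFull(root, name="Cora_ML")
citeseer  = CitationFull(root, name="CiteSeer")
pubmed    = CitationFull(root, name="PubMed")
co_cs     = Coauthor(root, name="CS")
co_phys   = Coauthor(root, name="Physics")
photos    = Amazon(root, name="Photo")
computers = Amazon(root, name="Computers")
```
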
Dataset Splits: Yes
Evidence: "For any of the mentioned datasets, we sample 20 nodes per class for training and 20 nodes for validation with stratified sampling."

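The excerpt leaves the validation count slightly ambiguous (20 total vs. 20 per class). A minimal sketch of a per-class stratified split, assuming 20 validation nodes per class; the function name and seed handling are hypothetical:

```python
import torch

def stratified_split(labels, per_class=20, seed=0):
    """Sample `per_class` train and `per_class` validation nodes per class;
    the remaining nodes stay available for calibration and test."""
    g = torch.Generator().manual_seed(seed)
    train_idx, val_idx = [], []
    for c in labels.unique():
        # Shuffle the indices of class c, then take 20 + 20 of them
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx), generator=g)]
        train_idx.append(idx[:per_class])
        val_idx.append(idx[per_class:2 * per_class])
    return torch.cat(train_idx), torch.cat(val_idx)
```
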
Hardware Specification: No
Evidence: The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running the experiments.

Software Dependencies: No
Evidence: The paper mentions software components like "Adam optimizer" and "categorical cross-entropy loss" but does not provide specific version numbers for these or other libraries/frameworks (e.g., PyTorch, TensorFlow, Python version).

Experiment Setup: Yes
Evidence: "For all architectures, we built one hidden layer of 64 units and one output layer. We applied dropout on the hidden layer with probability 0.6 for GCN and GAT, 0.5 for APPNPNet, and 0.8 for MLP. For GAT we used 8 heads. We trained all models with categorical cross-entropy loss and the Adam optimizer with L2 regularization."
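
A sketch of the described GCN configuration in PyTorch Geometric. The hidden width, dropout rate, loss, optimizer, and L2 regularization follow the excerpt; the learning rate, weight-decay value, and dataset dimensions are assumptions, since the paper does not report them here.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, num_classes, hidden=64, p_drop=0.6):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)       # one hidden layer, 64 units
        self.conv2 = GCNConv(hidden, num_classes)  # one output layer
        self.p_drop = p_drop                       # 0.6 for GCN per the excerpt

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=self.p_drop, training=self.training)
        return self.conv2(x, edge_index)

model = GCN(in_dim=2879, num_classes=7)  # e.g. CoraML-sized inputs (assumed)
# Adam with L2 regularization; lr and weight_decay are assumed values.
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
loss_fn = torch.nn.CrossEntropyLoss()    # categorical cross-entropy on logits
```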