Graph Information Bottleneck

Authors: Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare GIB-Cat and GIB-Bern with baselines including GCN [3] and GAT [5], the most relevant baseline, as GIB-Cat and GIB-Bern impose the GIB principle over GAT. In addition, we consider two state-of-the-art graph defense models specifically designed against adversarial attacks: GCNJaccard [34], which pre-processes the graph by deleting the edges between nodes with low feature similarity, and Robust GCN (RGCN) [33], which uses Gaussian reparameterization for node features and variance-based attention. Note that RGCN essentially includes the term XIB (Eq. (3)) to control the information of node features while it does not have the term AIB (Eq. (3)) to control the structural information. For GCNJaccard and RGCN, we perform extensive hyperparameter search as detailed in Appendix G.3. For GIB-Cat and GIB-Bern, we keep the same architectural component as GAT, and for the additional hyperparameters k and T (Algorithms 1, 2 and 3), we search k ∈ {2, 3} and T ∈ {1, 2} for each experimental setting and report the better performance. Please see Appendix G for more details. We use three citation benchmark datasets: Cora, Pubmed and Citeseer [43], in our evaluation. (A sketch of the GCNJaccard edge filtering appears after this table.)
Researcher Affiliation | Academia | Department of Computer Science, Stanford University
Pseudocode | Yes | Algorithm 1: Framework of GIB-Cat and GIB-Bern; Algorithm 2: Neighbor Sample (categorical); Algorithm 3: Neighbor Sample (Bernoulli). (An illustrative Bernoulli-sampling sketch appears after this table.)
Open Source Code | Yes | Project website and code can be found at http://snap.stanford.edu/gib/.
Open Datasets | Yes | We use three citation benchmark datasets: Cora, Pubmed and Citeseer [43], in our evaluation. (A data-loading sketch appears after this table.)
Dataset Splits | Yes | We follow the standard transductive node classification setting and standard train-validation-test split as GAT [5]. ... We always use F1-micro as the validating metric to train our model. (An F1-micro snippet appears after this table.)
Hardware Specification | No | The paper does not explicitly mention specific hardware, such as GPU or CPU models, used for the experiments.
Software Dependencies | No | The paper mentions using PyTorch in the acknowledgments and references, but it does not specify version numbers for PyTorch or any other software dependencies crucial for reproducibility in the main text.
Experiment Setup | Yes | For GIB-Cat and GIB-Bern, we keep the same architectural component as GAT, and for the additional hyperparameters k and T (Algorithms 1, 2 and 3), we search k ∈ {2, 3} and T ∈ {1, 2} for each experimental setting and report the better performance. Please see Appendix G for more details. (A grid-search sketch appears after this table.)
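
The GCNJaccard defense quoted above pre-processes the graph by deleting edges between nodes with low feature similarity. Below is a minimal NumPy sketch of that idea; the binary-feature assumption, the `jaccard_similarity` and `filter_edges` names, and the 0.01 threshold are illustrative choices, not taken from the paper or its released code.

```python
import numpy as np

def jaccard_similarity(x_u: np.ndarray, x_v: np.ndarray) -> float:
    """Jaccard similarity between two binary feature vectors."""
    intersection = np.logical_and(x_u, x_v).sum()
    union = np.logical_or(x_u, x_v).sum()
    return float(intersection) / union if union > 0 else 0.0

def filter_edges(edge_index: np.ndarray, features: np.ndarray,
                 threshold: float = 0.01) -> np.ndarray:
    """Keep only edges whose endpoint features meet the similarity threshold.

    edge_index: (2, E) integer array of (source, target) node pairs.
    features:   (N, F) binary node-feature matrix (e.g. bag-of-words).
    """
    keep = np.array([
        jaccard_similarity(features[u], features[v]) >= threshold
        for u, v in edge_index.T
    ])
    return edge_index[:, keep]
```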
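Algorithms 2 and 3 of the paper sample a node's neighborhood from categorical and Bernoulli distributions, respectively. As a rough illustration of the Bernoulli variant only, the sketch below draws a differentiable per-edge mask with PyTorch's RelaxedBernoulli (concrete) distribution; the temperature value and the idea of feeding attention logits in are assumptions on my part, and the paper's Algorithm 3 should be consulted for the actual procedure.

```python
import torch
from torch.distributions import RelaxedBernoulli

def sample_edge_mask(edge_logits: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Sample a soft 0/1 mask, one relaxed Bernoulli per edge.

    edge_logits: (E,) unnormalized scores, e.g. attention scores.
    Returns values in (0, 1) that can gate messages while keeping
    gradients flowing (reparameterized sample via rsample()).
    """
    dist = RelaxedBernoulli(temperature=torch.tensor(temperature),
                            logits=edge_logits)
    return dist.rsample()
```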
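One common way to obtain Cora, Citeseer, and Pubmed with their standard Planetoid splits is PyTorch Geometric's loader, sketched below; the paper's released code at http://snap.stanford.edu/gib/ may load the data differently, and the `root` path here is arbitrary.

```python
from torch_geometric.datasets import Planetoid

for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root=f"data/{name}", name=name)
    data = dataset[0]  # each dataset holds a single graph
    print(name, data.num_nodes, data.num_edges,
          int(data.train_mask.sum()), int(data.val_mask.sum()),
          int(data.test_mask.sum()))
```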
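The validation metric quoted above is F1-micro. For single-label multi-class node classification, micro-averaged F1 reduces to plain accuracy; the snippet below computes it with scikit-learn on toy labels standing in for validation-node predictions.

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 1, 2, 1, 0])  # ground-truth classes (toy data)
y_pred = np.array([0, 1, 1, 1, 0])  # model predictions (toy data)

val_f1 = f1_score(y_true, y_pred, average="micro")
print(val_f1)  # 0.8, identical to accuracy in this single-label setting
```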
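The reported hyperparameter search over k and T is a four-point grid. A minimal sketch of that loop follows; `train_and_validate` is a hypothetical stand-in for one full training run that returns validation F1-micro, not a function from the paper's code.

```python
from itertools import product

def search_k_T(train_and_validate):
    """Return (best score, k, T) over the grid k in {2, 3}, T in {1, 2}."""
    best = None
    for k, T in product([2, 3], [1, 2]):
        score = train_and_validate(k=k, T=T)  # hypothetical training run
        if best is None or score > best[0]:
            best = (score, k, T)
    return best
```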