Information Obfuscation of Graph Neural Networks

Authors: Peiyuan Liao, Han Zhao, Keyulu Xu, Tommi Jaakkola, Geoffrey J. Gordon, Stefanie Jegelka, Ruslan Salakhutdinov

Venue: ICML 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments across multiple datasets from recommender systems, knowledge graphs, and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks, while producing competitive GNN encoders for downstream tasks. |
| Researcher Affiliation | Academia | (1) Carnegie Mellon University, (2) University of Illinois at Urbana-Champaign, (3) Massachusetts Institute of Technology (MIT). |
| Pseudocode | Yes | In both cases, we present the pseudocode in Fig. 2 and Alg. 1, which also includes extensions to neighborhood and n-hop attacks (cf. Section 4.3). A hedged sketch of the adversarial training loop this pseudocode describes appears after the table. |
| Open Source Code | Yes | Code at: https://github.com/liaopeiyuan/GAL |
| Open Datasets | Yes | We experiment on 5 link-prediction benchmarks (Movielens-1M, FB15k-237, WN18RR, CiteSeer, Pubmed) and 1 graph regression benchmark (QM9), which covers both obfuscation of single and multiple attributes. A loading sketch for two of the public benchmarks appears after the table. |
| Dataset Splits | No | No explicit details about validation dataset splits (percentages, counts, or methodology) are provided. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running experiments are explicitly mentioned. |
| Software Dependencies | No | The paper implicitly references software such as PyTorch and PyTorch Geometric, but does not specify version numbers or other dependencies required for replication. |
| Experiment Setup | Yes | Detailed experimental setups may be found in Appendix B, which specifies: for Movielens-1M, a 2-layer GCN/ChebNet/GraphSAGE/CompGCN with hidden dimension 128; for FB15k-237 and WN18RR, a 2-layer CompGCN with hidden dimension 200; for QM9, a 3-layer MPNN with hidden dimension 200; for CiteSeer and Pubmed, a 2-layer GCN/GAT/ChebNet with hidden dimension 128. All models are trained with the Adam optimizer (learning rate 0.001), Dropout (Srivastava et al., 2014) of 0.5 on all hidden layers, and 200 epochs. These hyperparameters are collected in a config sketch after the table. |
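
The paper's Alg. 1 is not reproduced in this report. As a rough illustration of the min-max objective it describes (a GNN encoder trained against an adversary that tries to recover a protected attribute from the embeddings), here is a minimal PyTorch Geometric sketch. `GCNEncoder`, `task_head`, `adversary`, `data.protected`, and `lam` are all hypothetical names introduced for this sketch, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCNEncoder(torch.nn.Module):
    """2-layer GCN encoder with hidden dimension 128, matching Appendix B."""

    def __init__(self, in_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)  # dropout 0.5 on hidden layers
        return self.conv2(h, edge_index)


def train_step(encoder, task_head, adversary, data, opt_main, opt_adv, lam=1.0):
    """One round of the min-max game: adversary ascends, encoder + task descend.

    opt_main must cover only encoder and task_head parameters, and opt_adv only
    the adversary's, so each side updates exclusively its own weights.
    """
    # (1) Adversary step: recover the protected attribute from embeddings that
    #     are detached, so the encoder is not updated here.
    z = encoder(data.x, data.edge_index).detach()
    adv_loss = F.cross_entropy(adversary(z), data.protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # (2) Encoder/task step: fit the downstream task while *maximizing* the
    #     adversary's loss on the protected attribute (the obfuscation term).
    z = encoder(data.x, data.edge_index)
    task_loss = F.cross_entropy(task_head(z), data.y)
    obf_loss = -F.cross_entropy(adversary(z), data.protected)
    opt_main.zero_grad()
    (task_loss + lam * obf_loss).backward()
    opt_main.step()
    return task_loss.item(), adv_loss.item()
```

The alternating two-optimizer structure is one standard way to realize such an adversarial objective; the paper's exact update schedule and loss weighting should be taken from its Alg. 1 and the released code.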
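Two of the public benchmarks can be fetched directly through PyTorch Geometric's built-in loaders. This is a convenience sketch, not a loader prescribed by the paper, and the `root` paths are placeholders.

```python
from torch_geometric.datasets import Planetoid, QM9

# Citation networks used for link prediction in the paper.
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")
pubmed = Planetoid(root="data/Planetoid", name="PubMed")

# Quantum-chemistry benchmark used for graph regression.
qm9 = QM9(root="data/QM9")

print(citeseer[0])  # e.g. Data(x=[3327, 3703], edge_index=[2, 9104], y=[3327])
```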
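The Appendix B hyperparameters quoted in the Experiment Setup row collapse into a small configuration table. Only the numeric values come from the paper; the `CONFIG` layout and the `make_optimizer` helper are assumptions of this sketch.

```python
import torch

# Values quoted from Appendix B; the dict layout itself is assumed.
CONFIG = {
    "encoders": {
        "movielens-1m": {"layers": 2, "hidden": 128},  # GCN/ChebNet/GraphSAGE/CompGCN
        "fb15k-237":    {"layers": 2, "hidden": 200},  # CompGCN
        "wn18rr":       {"layers": 2, "hidden": 200},  # CompGCN
        "qm9":          {"layers": 3, "hidden": 200},  # MPNN
        "citeseer":     {"layers": 2, "hidden": 128},  # GCN/GAT/ChebNet
        "pubmed":       {"layers": 2, "hidden": 128},  # GCN/GAT/ChebNet
    },
    "lr": 1e-3,      # Adam, all models
    "dropout": 0.5,  # applied to all hidden layers
    "epochs": 200,   # all models
}


def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """Adam with the learning rate Appendix B reports for all models."""
    return torch.optim.Adam(model.parameters(), lr=CONFIG["lr"])
```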