Understanding Structural Vulnerability in Graph Convolutional Networks

Authors: Liang Chen, Jintang Li, Qibiao Peng, Yang Liu, Zibin Zheng, Carl Yang

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on four real-world datasets demonstrate that such a simple but effective method achieves the best robustness performance compared to state-of-the-art models.
Researcher Affiliation | Academia | Sun Yat-sen University; Emory University
Pseudocode | No | The paper provides mathematical equations for the models but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | The codes are available at https://github.com/EdisonLeeeee/Median-GCN.
Open Datasets | Yes | The experiments are conducted on four benchmark datasets: Cora-ML [McCallum et al., 2000], Cora, Citeseer, and Pubmed [Sen et al., 2008].
Dataset Splits | Yes | The datasets are randomly split into training (10%), validation (10%), and testing (80%) sets.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using the Adam algorithm but does not specify software dependencies such as libraries or frameworks with version numbers (e.g., "PyTorch 1.9", "Python 3.8").
Experiment Setup | Yes | The datasets are randomly split into training (10%), validation (10%), and testing (80%) sets. For all models, the number of layers is set to 2, as suggested by previous works [Kipf and Welling, 2017; Velickovic et al., 2018], and the number of hidden units is 64. We employ the Adam algorithm [Kingma and Ba, 2015] with an initial learning rate of 0.01 to optimize all models. The number of training iterations is 200 with early stopping on the validation set. Following the setting of the work [Wu et al., 2019], the threshold of similarity for removing dissimilar edges is set to 0. The trimmed percentage α of our TMean is set to 0.45 to balance accuracy and robustness.
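The trimmed percentage α = 0.45 refers to the paper's TMean aggregation, which, per feature dimension, discards the largest and smallest α fractions of a node's neighbor values and averages the rest. Below is a minimal sketch of such an aggregation, assuming neighbor features have already been gathered into a dense tensor; it is not the authors' implementation (the official code is at the GitHub link above).

```python
# Hedged sketch of trimmed-mean neighbor aggregation (not the official Median-GCN code).
import torch

def trimmed_mean_aggregate(neighbor_feats: torch.Tensor, alpha: float = 0.45) -> torch.Tensor:
    """neighbor_feats: (num_neighbors, feature_dim); alpha: fraction trimmed from each end."""
    n = neighbor_feats.size(0)
    k = int(n * alpha)  # number of values dropped at each extreme, per dimension
    sorted_feats, _ = torch.sort(neighbor_feats, dim=0)
    kept = sorted_feats[k:n - k] if n - 2 * k > 0 else sorted_feats
    return kept.mean(dim=0)
```

With α = 0.45, only roughly the central 10% of neighbor values per dimension are averaged, which brings the aggregation close to the element-wise median while retaining some averaging.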
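For orientation, the reported protocol (random 10%/10%/80% split, 2 layers, 64 hidden units, Adam with learning rate 0.01, 200 training iterations with early stopping on the validation set) could be sketched as below. The model call signature, patience value, and random seed are assumptions not stated in the paper.

```python
# Hedged sketch of the reported training setup; placeholders are marked in comments.
import torch
import torch.nn.functional as F

def random_split(num_nodes: int, train_frac: float = 0.1, val_frac: float = 0.1, seed: int = 0):
    # 10% train / 10% validation / 80% test, as reported; the seed is an assumption.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train, n_val = int(train_frac * num_nodes), int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

def train(model, features, adj, labels, train_idx, val_idx, epochs=200, patience=20):
    # model(features, adj) is an assumed interface; patience is not specified in the paper.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    best_val, best_state, wait = 0.0, None, 0
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        logits = model(features, adj)
        loss = F.cross_entropy(logits[train_idx], labels[train_idx])
        loss.backward()
        optimizer.step()

        model.eval()
        with torch.no_grad():
            preds = model(features, adj)[val_idx].argmax(dim=1)
            val_acc = (preds == labels[val_idx]).float().mean().item()
        if val_acc > best_val:
            best_val = val_acc
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
            wait = 0
        else:
            wait += 1
            if wait >= patience:  # early stopping on validation accuracy
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```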