Robust Node Classification on Graph Data with Graph and Label Noise

Authors: Yonghua Zhu, Lei Feng, Zhenyun Deng, Yang Chen, Robert Amor, Michael Witbrock

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Furthermore, we numerically validate the superiority of our method in terms of robust node classification compared with all comparison methods.
Researcher Affiliation | Academia | NAOInstitute, University of Auckland, NZ; School of Computer Science, University of Auckland, NZ; School of Computer Science and Engineering, Nanyang Technological University, Singapore; Department of Computer Science, University of Cambridge, UK
Pseudocode | Yes | Algorithm 1: The pseudo-code of our RNCGLN method.
Open Source Code | Yes | Our code and comprehensive theoretical version are available at: https://github.com/yhzhu66/RNCGLN
Open Datasets | Yes | We evaluate the robustness of our proposed method on four popular datasets, including three citation datasets (i.e., Cora, Citeseer, Pubmed) (Sen et al. 2008) and one Amazon sales dataset (i.e., Photo) (Shchur et al. 2018).
Dataset Splits | No | The paper uses well-known datasets but does not explicitly state the training, validation, and test splits (e.g., percentages or sample counts); a hedged loading and splitting sketch follows this table.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run the experiments (e.g., GPU or CPU models, memory).
Software Dependencies | No | The paper mentions software components such as an MLP and activation functions but does not provide version numbers for any software dependencies (e.g., Python, PyTorch, or TensorFlow).
Experiment Setup | No | The paper mentions several hyperparameters, such as α, τ, τg1, τg2, τp1, and τp2, and discusses the warm-up period in terms of epochs, but it does not provide concrete values for common settings such as learning rate, batch size, number of epochs, or optimizer configuration.
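
Because the paper states neither its data-loading pipeline nor its splits, the following is a minimal, hypothetical sketch of how the four datasets listed under Open Datasets could be loaded and randomly split. It assumes PyTorch Geometric (Planetoid and Amazon loaders) and placeholder 10%/10%/80% ratios; neither choice comes from the paper.

# Hypothetical reproduction aid: the paper does not specify a loader or split ratios.
# Assumes PyTorch Geometric; the 10/10/80 split below is a placeholder, not from the paper.
import torch
from torch_geometric.datasets import Planetoid, Amazon

def load_dataset(name, root="data"):
    """Load one of the four benchmark graphs by name."""
    if name in {"Cora", "Citeseer", "Pubmed"}:
        return Planetoid(root=root, name=name)[0]
    if name == "Photo":
        return Amazon(root=root, name="Photo")[0]
    raise ValueError(f"Unknown dataset: {name}")

def random_split(data, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Attach boolean train/val/test node masks from a seeded random permutation."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(data.num_nodes, generator=g)
    n_train = int(train_ratio * data.num_nodes)
    n_val = int(val_ratio * data.num_nodes)
    for split in ("train", "val", "test"):
        setattr(data, f"{split}_mask", torch.zeros(data.num_nodes, dtype=torch.bool))
    data.train_mask[perm[:n_train]] = True
    data.val_mask[perm[n_train:n_train + n_val]] = True
    data.test_mask[perm[n_train + n_val:]] = True
    return data

data = random_split(load_dataset("Cora"))
print(data.train_mask.sum().item(), data.val_mask.sum().item(), data.test_mask.sum().item())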