Learning MLPs on Graphs: A Unified View of Effectiveness, Robustness, and Efficiency

Authors: Yijun Tian, Chuxu Zhang, Zhichun Guo, Xiangliang Zhang, Nitesh V. Chawla

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and theoretical analyses demonstrate the superiority of NOSMOG by comparing it to GNNs and the state-of-the-art method in both transductive and inductive settings across seven datasets. |
| Researcher Affiliation | Academia | Yijun Tian¹, Chuxu Zhang², Zhichun Guo¹, Xiangliang Zhang¹, Nitesh V. Chawla¹ (¹University of Notre Dame, ²Brandeis University) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at https://github.com/meettyj/NOSMOG. |
| Open Datasets | Yes | We use five widely used public benchmark datasets (i.e., Cora, Citeseer, Pubmed, A-computer, and A-photo) (Zhang et al., 2022b; Yang et al., 2021), and two large OGB datasets (i.e., Arxiv and Products) (Hu et al., 2020) to evaluate the proposed model. (A loading sketch follows the table.) |
| Dataset Splits | Yes | We adopt accuracy to measure the model performance, use validation data to select the optimal model, and report the results on test data. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for experiments. |
| Software Dependencies | No | The paper does not explicitly state specific software dependencies with version numbers. |
| Experiment Setup | No | The paper does not explicitly provide details about the experimental setup, such as hyperparameters or training settings, in the main text. |
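The paper does not state which loading tooling it uses, but all seven benchmarks named in the Open Datasets row are standard public datasets. Below is a minimal sketch of fetching them, assuming PyTorch Geometric and the `ogb` package; the dataset name strings and the `root="data"` directory are illustrative assumptions, and the official NOSMOG repository may rely on different tooling.

```python
# Hypothetical sketch for obtaining the seven datasets cited by the paper.
# Assumes torch_geometric and ogb are installed; not the authors' own loader.
from torch_geometric.datasets import Planetoid, Amazon
from ogb.nodeproppred import PygNodePropPredDataset

# Citation networks: Cora, Citeseer, Pubmed.
planetoid = {name: Planetoid(root="data", name=name)
             for name in ("Cora", "CiteSeer", "PubMed")}

# Amazon co-purchase graphs, i.e., A-computer and A-photo in the paper.
amazon = {name: Amazon(root="data", name=name)
          for name in ("Computers", "Photo")}

# Large OGB node-classification datasets: Arxiv and Products.
ogb_arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data")
ogb_products = PygNodePropPredDataset(name="ogbn-products", root="data")

# Each entry exposes a graph object with node features, edges, and labels.
print(planetoid["Cora"][0])  # e.g., Data(x=[2708, 1433], edge_index=[2, ...], y=[2708])
```

For the OGB datasets, `get_idx_split()` on the dataset object returns the standardized train/validation/test partition, which matches the report's note that validation data selects the model and test data is used for the reported results.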