Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Dirichlet Energy Constrained Learning for Deep Graph Neural Networks

Authors: Kaixiong Zhou, Xiao Huang, Daochen Zha, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu

NeurIPS 2021 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we empirically evaluate the effectiveness of EGNN on real-world datasets. We aim to answer the following questions. Q1: How does our EGNN compare with the state-of-the-art deep GNN models? Q2: Whether or not the Dirichlet energy at each layer of EGNN satisfies the constrained learning? Q3: How does each component of EGNN affect the model performance? Q4: How do the model hyperparameters impact the performance of EGNN?" |
| Researcher Affiliation | Collaboration | Kaixiong Zhou (Rice University), Xiao Huang (The Hong Kong Polytechnic University), Daochen Zha (Rice University), Rui Chen (Samsung Research America), Li Li (Samsung Research America), Soo-Hyun Choi (Samsung Electronics), Xia Hu (Rice University) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'We implement all the baselines using Pytorch Geometric [55] based on their official implementations,' but does not explicitly state that the source code for the authors' own method (EGNN) is publicly available, nor does it provide a link. |
| Open Datasets | Yes | "Following the practice of previous work, we evaluate EGNN by performing node classification on four benchmark datasets: Cora, Pubmed [52], Coauthor-Physics [53] and Ogbn-arxiv [54]." (See the loading sketch below the table.) |
| Dataset Splits | Yes | "We choose hyperparameters c_max, c_min, γ and b based on the validation set." |
| Hardware Specification | No | The paper does not provide any specific hardware details (such as exact GPU or CPU models) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Pytorch Geometric [55]' but does not specify a version number for this or any other software dependency. |
| Experiment Setup | Yes | "We choose hyperparameters c_max, c_min, γ and b based on the validation set. For the weight initialization, we set c_max to be 1 for all the datasets; that is, the trainable weights are initialized as identity matrices at all the graph convolutional layers. The loss hyperparameter γ is 20 in Cora, Pubmed and Coauthor-Physics to strictly regularize towards the orthogonal matrix; and it is 10^-4 in Ogbn-arxiv to improve the model's learning ability. For the lower-bounded residual connection, we choose residual strength c_min from range [0.1, 0.75] and list the details in Appendix. The trainable shift b is initialized with 10 in Cora and Pubmed; it is initialized to 5 and 1 in Coauthor-Physics and Ogbn-arxiv, respectively." (See the configuration sketch below the table.) |
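
The four benchmarks cited in the Open Datasets row (Cora, Pubmed, Coauthor-Physics, Ogbn-arxiv) are all obtainable through standard loaders. The sketch below is one plausible way to fetch them, assuming PyTorch Geometric's `Planetoid` and `Coauthor` dataset classes and the `ogb` package; the root paths are placeholders, and the paper does not state which loaders or preprocessing the authors used.

```python
# Hedged sketch: one plausible way to obtain the four benchmarks named in the paper,
# using PyTorch Geometric and the OGB package. Root paths are placeholders; the paper
# does not state which loaders or preprocessing the authors actually used.
from torch_geometric.datasets import Planetoid, Coauthor
from ogb.nodeproppred import PygNodePropPredDataset

cora = Planetoid(root="data/Planetoid", name="Cora")        # citation network
pubmed = Planetoid(root="data/Planetoid", name="PubMed")    # citation network
physics = Coauthor(root="data/Coauthor", name="Physics")    # co-authorship graph
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/OGB")

# Ogbn-arxiv ships with an official train/valid/test split.
split_idx = arxiv.get_idx_split()
print(cora[0], pubmed[0], physics[0], arxiv[0])
print({k: v.shape for k, v in split_idx.items()})
```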
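
To make the Experiment Setup row concrete, here is a minimal sketch of the three ingredients it quotes: identity weight initialization (c_max = 1), an orthogonality regularizer weighted by γ, and a lower-bounded residual connection with strength c_min. The penalty form (a squared Frobenius-norm distance of WᵀW from the identity) and the residual combination with the initial features are illustrative assumptions, not the authors' exact EGNN implementation; the trainable shift b is omitted because the excerpt does not say where it enters the layer.

```python
# Hedged sketch of the setup described above: identity-initialized layer weights
# (c_max = 1), an orthogonality penalty scaled by gamma, and a residual connection
# whose strength is lower-bounded by c_min. The penalty form and the residual
# combination are illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn

hidden_dim, gamma, c_min = 64, 20.0, 0.1  # example values; gamma = 20 matches the Cora-style setting

weight = nn.Parameter(torch.eye(hidden_dim))  # c_max = 1 -> weights start as an identity matrix

def orthogonality_penalty(w: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius-norm distance of W^T W from the identity (assumed regularizer form)."""
    eye = torch.eye(w.size(1), device=w.device)
    return torch.norm(w.t() @ w - eye) ** 2

def residual_combine(h_prev: torch.Tensor, h_init: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Lower-bounded residual connection: keep at least a c_min fraction of the initial features."""
    propagated = adj @ h_prev @ weight
    return c_min * h_init + (1.0 - c_min) * propagated

# Training loss = task loss + gamma * orthogonality penalty on each layer's weight.
task_loss = torch.tensor(0.0)  # placeholder for, e.g., cross-entropy on labeled nodes
total_loss = task_loss + gamma * orthogonality_penalty(weight)
```

With c_max = 1 the weights begin as identity matrices, so the first forward pass reduces to pure feature propagation, consistent with the initialization the row describes.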