Graph U-Nets

Authors: Hongyang Gao, Shuiwang Ji

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results on node classification and graph classification tasks demonstrate that our methods achieve consistently better performance than previous models.
Researcher Affiliation | Academia | Department of Computer Science & Engineering, Texas A&M University, TX, USA. Correspondence to: Hongyang Gao <hongyang.gao@tamu.edu>, Shuiwang Ji <sji@tamu.edu>.
Pseudocode | No | The paper describes the proposed operations and architecture through text and mathematical equations, but it does not include a separate pseudocode block or an explicitly labeled algorithm figure. (A hedged sketch of the gPool and gUnpool operations, reconstructed from those equations, follows this table.)
Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We employ three benchmark datasets for this setting; those are Cora, Citeseer, and Pubmed (Kipf & Welling, 2017), which are summarized in Table 1. We use protein datasets including D&D (Dobson & Doig, 2003) and PROTEINS (Borgwardt et al., 2005), and the scientific collaboration dataset COLLAB (Yanardag & Vishwanathan, 2015). These data are summarized in Table 2.
Dataset Splits | Yes | For each class, there are 20 nodes for training, 500 nodes for validation, and 1000 nodes for testing. (The loading sketch after this table reproduces these standard splits.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions like PyTorch or TensorFlow, or CUDA version) used for the experiments.
Experiment Setup | Yes | For transductive learning tasks, we employ the g-U-Nets proposed in Section 3.3. ... We sample 2000, 1000, 500, 200 nodes in the four gPool layers, respectively. ... we apply L2 regularization on weights with λ = 0.001. Dropout (Srivastava et al., 2014) is applied to both adjacency matrices and feature matrices with keep rates of 0.8 and 0.08, respectively. ... We sample proportions of nodes in four gPool layers; those are 90%, 70%, 60%, and 50%, respectively. The dropout keep rate imposed on feature matrices is 0.3. (A configuration sketch collecting these hyperparameters follows this table.)
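
Although the paper gives no pseudocode, the gPool and gUnpool layers are fully specified by their equations: a learned projection scores every node, the top-k nodes are kept, their features are gated by the sigmoid of the scores, and unpooling scatters features back to the original node positions. Below is a minimal PyTorch sketch of that reading; the dense adjacency and feature matrices, the class names, and the constructor arguments are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class GraphPool(nn.Module):
    """Hedged sketch of the gPool layer from Graph U-Nets (Gao & Ji, 2019)."""

    def __init__(self, in_channels: int, k: int):
        super().__init__()
        self.k = k
        # Trainable projection vector p used to score nodes: y = X p / ||p||.
        self.p = nn.Parameter(torch.randn(in_channels))

    def forward(self, A: torch.Tensor, X: torch.Tensor):
        # Scalar projection score per node.
        y = X @ self.p / self.p.norm()
        # Keep the k highest-scoring nodes.
        scores, idx = torch.topk(y, self.k)
        # Gate the selected features with sigmoid(y) so gradients
        # flow back into the projection vector p.
        X_new = X[idx] * torch.sigmoid(scores).unsqueeze(-1)
        # Induced subgraph over the selected nodes.
        A_new = A[idx][:, idx]
        return A_new, X_new, idx


class GraphUnpool(nn.Module):
    """Hedged sketch of the gUnpool layer: restores the original node count."""

    def forward(self, X: torch.Tensor, idx: torch.Tensor, n_nodes: int):
        # Zero rows for unselected nodes; pooled features return to the
        # positions recorded by the matching gPool layer.
        X_full = X.new_zeros(n_nodes, X.size(-1))
        X_full[idx] = X
        return X_full
```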
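
The datasets and the transductive split quoted above match the standard Planetoid and TU benchmark distributions. A loading sketch, assuming PyTorch Geometric (the paper names no software stack, so the library choice is an assumption):

```python
# Assumes PyTorch Geometric; the paper does not name its software stack.
from torch_geometric.datasets import Planetoid, TUDataset

# Cora, Citeseer, and Pubmed with the standard "public" split:
# 20 training nodes per class, 500 validation nodes, 1000 test nodes.
cora = Planetoid(root="data/Planetoid", name="Cora", split="public")
data = cora[0]
print(int(data.train_mask.sum()), int(data.val_mask.sum()), int(data.test_mask.sum()))

# Graph-classification benchmarks for the inductive setting.
dd = TUDataset(root="data/TU", name="DD")
proteins = TUDataset(root="data/TU", name="PROTEINS")
collab = TUDataset(root="data/TU", name="COLLAB")
```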
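
Finally, the hyperparameters scattered through the Experiment Setup quote can be gathered into one configuration sketch. The dictionary layout and key names are assumptions; note that the paper reports dropout as keep rates, whereas most frameworks (e.g., torch.nn.Dropout) expect a drop probability, so p = 1 - keep.

```python
# Transductive setting (Cora, Citeseer, Pubmed); values quoted from the paper.
transductive_cfg = {
    "pool_sizes": [2000, 1000, 500, 200],  # nodes kept by the four gPool layers
    "weight_decay": 1e-3,                  # L2 regularization, lambda = 0.001
    "adj_dropout": 1 - 0.8,                # keep rate 0.8 on adjacency matrices
    "feat_dropout": 1 - 0.08,              # keep rate 0.08 on feature matrices
}

# Inductive setting (D&D, PROTEINS, COLLAB).
inductive_cfg = {
    "pool_ratios": [0.90, 0.70, 0.60, 0.50],  # fraction of nodes kept per gPool layer
    "feat_dropout": 1 - 0.3,                  # keep rate 0.3 on feature matrices
}
```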