Robust Graph Representation Learning via Neural Sparsification
Authors: Cheng Zheng, Bo Zong, Wei Cheng, Dongjin Song, Jingchao Ni, Wenchao Yu, Haifeng Chen, Wei Wang
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both benchmark and private datasets show that NeuralSparse can yield up to 7.2% improvement in testing accuracy when working with existing graph neural networks on node classification tasks. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, University of California, Los Angeles, CA, USA 2NEC Laboratories America, Princeton, NJ, USA. |
| Pseudocode | Yes | Algorithm 1 Training algorithm for Neural Sparse |
| Open Source Code | No | The paper mentions 'The supplementary material contains more experimental details.' but does not explicitly state that source code for the methodology is provided or offer a link to a code repository. |
| Open Datasets | Yes | We employ five datasets from various domains and conduct the node classification task following the settings as described in Hamilton et al. (2017) and Kipf & Welling (2017). The dataset statistics are summarized in Table 1. |
| Dataset Splits | Yes | The dataset statistics are summarized in Table 1. ... The numbers of Training Nodes, Validation Nodes, and Testing Nodes are listed for each dataset. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'tensorflow' as a deep learning framework but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) used in their experimental setup. |
| Experiment Setup | Yes | Temperature tuning. We anneal the temperature with the schedule τ = max(0.05, exp(−rp)), where p is the training epoch and r ∈ {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹}. τ is updated every N steps, with N ∈ {50, 100, ..., 500}. ... For Reddit, PPI, Transaction, Cora, and Citeseer, the hyperparameter k is set as 30, 15, 10, 5, and 3, respectively. The hyperparameter l is set as 1. |
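The annealing schedule quoted above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' released code: the function name `annealed_temperature` and the step-to-epoch bookkeeping via `n_update` are assumptions, while the formula τ = max(0.05, exp(−rp)) and the ranges of r and N come from the paper.

```python
import math

def annealed_temperature(step, r=1e-3, n_update=100, tau_min=0.05):
    """Gumbel-softmax temperature schedule from the paper:
    tau = max(tau_min, exp(-r * p)), updated every n_update steps.

    r is searched over {1e-5, 1e-4, 1e-3, 1e-2, 1e-1} and
    n_update over {50, 100, ..., 500}; defaults here are
    illustrative picks from those grids, not reported values.
    """
    p = step // n_update  # tau only changes every n_update steps
    return max(tau_min, math.exp(-r * p))

# Early in training the temperature is near 1 (soft, exploratory
# sampling); it decays toward the floor of 0.05 (near-discrete).
print(annealed_temperature(0))          # 1.0
print(annealed_temperature(10_000_000)) # 0.05 (floored)
```

The floor of 0.05 keeps gradients from vanishing as the relaxed samples approach one-hot vectors.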