Uncertainty Aware Graph Gaussian Process for Semi-Supervised Learning
Authors: Zhao-Yang Liu, Shao-Yuan Li, Songcan Chen, Yao Hu, Sheng-Jun Huang
AAAI 2020, pp. 4957-4964
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmarks demonstrate the effectiveness of the proposed approach. |
| Researcher Affiliation | Collaboration | Zhao-Yang Liu,1 Shao-Yuan Li,1 Songcan Chen,1 Yao Hu,2 Sheng-Jun Huang1 — 1: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 211106, China; 2: Youku Cognitive and Intelligent Lab, Alibaba Group, Hangzhou, China |
| Pseudocode | No | No explicit pseudocode or algorithm block was found in the paper. |
| Open Source Code | No | The paper does not provide any explicit statements about open-sourcing the code or links to a code repository. |
| Open Datasets | Yes | Described in Table 1, the three benchmark datasets are citation networks. ... Cora, Citeseer, Pubmed |
| Dataset Splits | No | The paper uses standard benchmark datasets (Cora, Citeseer, Pubmed) and mentions "Test Nodes" (1000) and "Label Rate" (percentage of labeled data) in Table 1. It states "We follow the exact experimental setup in (Ng, Colombo, and Silva 2018; Kipf and Welling 2017; Vashishth et al. 2019)", implying predefined splits from the cited works, but does not explicitly detail the training/validation/test splits for its own method. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "optimized using ADAM (Kingma and Ba 2015)" but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We also use the 3rd degree polynomial kernel function for the covariance, and the variational parameters and model parameters are jointly optimized via gradient descent using ADAM (Kingma and Ba 2015). The value of parameter λ is set to 0.001 for Cora and Citeseer; for Pubmed, λ=0.0001. All experiments are repeated 10 times and the mean values are recorded. |
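The covariance choice quoted above (a 3rd degree polynomial kernel) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the offset term `c` is an assumption, since the paper only specifies the degree.

```python
import numpy as np

def poly3_kernel(X, Y, c=1.0):
    """Degree-3 polynomial kernel: k(x, y) = (x . y + c)^3.

    X: (n, d) array, Y: (m, d) array; returns the (n, m) kernel matrix.
    The offset c is a hypothetical default; the paper states only the degree.
    """
    return (X @ Y.T + c) ** 3

# Toy check: kernel matrix of two 2-D points with themselves.
X = np.array([[1.0, 0.0],
              [0.0, 2.0]])
K = poly3_kernel(X, X)
print(K)  # symmetric 2x2 positive semi-definite matrix
```

In a GP classifier this matrix would serve as the prior covariance over latent function values at the graph nodes; the variational and model parameters would then be fit with a gradient-based optimizer such as Adam, as the quoted setup describes.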