Stochastic Deep Gaussian Processes over Graphs

Authors: Naiqi Li, Wenjie Li, Jifeng Sun, Yinghua Gao, Yong Jiang, Shu-Tao Xia

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we conduct thorough experimental study of our method."
Researcher Affiliation | Academia | (1) Tsinghua-Berkeley Shenzhen Institute, Tsinghua University; (2) Shenzhen International Graduate School, Tsinghua University; (3) PCL Research Center of Networks and Communications, Peng Cheng Laboratory
Pseudocode | No | The paper describes the methodology through mathematical equations and derivations (Eqs. (1)-(6)) but does not include explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "The source code and evaluations of our work are publicly available." (https://github.com/naiqili/DGPG)
Open Datasets | Yes | "We use the identical datasets and training/test splitting. Description of the datasets is presented in Table 2. ... We use the same data and splitting according to their released code" (https://github.com/liyaguang/DCRNN). Datasets: Weather [44], fMRI [45], and ETEX [46].
Dataset Splits | Yes | "We totally generate 500 input-output signals for training, and 200 for testing. ... Description of the datasets is presented in Table 2. ... One advantage of DGPG is that we do not have strong necessity to use a validation set, so similar to [18] we utilize the validation dataset for training while fixing its layer to be 3."
Hardware Specification | Yes | "All experiments were performed on an Ubuntu server equipped with a GeForce GTX 1080 Ti GPU, which supports our parallelized implementation."
Software Dependencies | No | "Our implementation is based on the GPflow framework [43] and the implementation of DGP." The paper names these frameworks but does not specify their version numbers.
Experiment Setup | Yes | "We trained a DGPG model with 1 layer and 50 inducing points, repeated over 5 runs. ... We use M = 5 inducing points. ... We tested our model with 1 to 4 layers, and tried the RBF and Matérn32 kernels. ... We use M = 20 inducing points per node." Training iterations per dataset (from the paper's settings table): 2000 / 5000 / 5000.
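To make the quoted setup concrete, below is a minimal GPflow sketch, not the authors' DGPG implementation (that lives at https://github.com/naiqili/DGPG). It wires up a single-layer sparse variational GP with the reported hyperparameters (1 layer, 50 inducing points, an RBF or Matérn32 kernel, 2000 training iterations). The synthetic data, input dimensionality, and learning rate are illustrative assumptions, and the graph-structured coupling that defines DGPG is not reproduced here.

```python
# Minimal sketch (NOT the authors' DGPG code): a single-layer sparse
# variational GP in GPflow with the hyperparameters quoted in the report.
# Data shapes, the synthetic signals, and the learning rate are assumptions.
import numpy as np
import tensorflow as tf
import gpflow

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # 500 training signals (assumed input dim)
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(500, 1))

M = 50                                             # inducing points, as reported
Z = X[rng.choice(len(X), size=M, replace=False)].copy()

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.Matern32(),              # the paper also tries RBF
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
    num_data=X.shape[0],
)

optimizer = tf.optimizers.Adam(learning_rate=0.01)
training_loss = model.training_loss_closure((X, Y))
for _ in range(2000):                              # "#Iter. 2000" for one dataset
    optimizer.minimize(training_loss, model.trainable_variables)
```

SVGP is used here only because it is the standard GPflow model closest to the quoted configuration; reproducing DGPG's per-node inducing points (M = 5 or M = 20 per node) and multi-layer graph construction would require the authors' released repository.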