Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Characterizing Deep Gaussian Processes via Nonlinear Recurrence Systems
Authors: Anh Tong, Jaesik Choi
AAAI 2021, pp. 9915–9922 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate our finding with a number of experimental results."; "We justify our findings with numerical experiments."; "6 Experimental Results: This section verifies our theoretical claims empirically." |
| Researcher Affiliation | Collaboration | Anh Tong¹, Jaesik Choi²,³ — ¹Ulsan National Institute of Science and Technology; ²Korea Advanced Institute of Science and Technology; ³INEEJI. EMAIL, EMAIL |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement or link indicating that the authors are releasing the source code for the methodology described in this paper. |
| Open Datasets | Yes | "We trained our models on Boston housing data set (Dheeru and Karra Taniskidou 2017) and diabetes data set (Efron et al. 2004)."; "We test on MNIST data set (Le Cun and Cortes 2010)." All of these are standard, cited public benchmarks. |
| Dataset Splits | No | "For each data set, we train our models with 90% of the data set and hold out the remaining for testing." This describes only a train/test split, not a separate validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | "All kernels and models are developed based on the GPyTorch library (Gardner et al. 2018)." This names a software library but provides no version number for it or for any other dependency. |
| Experiment Setup | Yes | "We learned the models where the number of layers, N, ranges from 2 to 6 and the number of units per layer, m, is from 2 to 9."; "The inference algorithm is based on (Salimbeni and Deisenroth 2017)."; "The kernel hyperparameter σ² is set to 1 while 1/ℓ² runs from 0.1 to 5."; "Here, we only consider the case m = 1."; "The number of units per layer, m, is chosen as m = 30. We consider the number of layers, N = 2, 3, 4."; "constrain coefficient 0 < c₀ < 1." |
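The quoted setup can be made concrete with a small sketch: a 90/10 train/test split (with no separate validation set, as the paper describes) and an RBF kernel using the reported hyperparameters σ² = 1 and inverse squared lengthscale 1/ℓ² swept from 0.1 to 5. The data shape and function names here are hypothetical stand-ins, not the authors' code, which is not released.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data (Boston-housing-like shape: 506 rows, 13 features).
X = rng.normal(size=(506, 13))

# 90/10 train/test split as quoted; no validation split is described.
n_train = int(0.9 * len(X))
perm = rng.permutation(len(X))
X_train, X_test = X[perm[:n_train]], X[perm[n_train:]]

def rbf_kernel(x1, x2, sigma2=1.0, inv_len2=0.1):
    """RBF kernel k(x, x') = sigma^2 * exp(-0.5 * (1/l^2) * ||x - x'||^2)."""
    sq_dists = np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return sigma2 * np.exp(-0.5 * inv_len2 * sq_dists)

# Sweep 1/l^2 over the reported range [0.1, 5] with sigma^2 fixed at 1.
for inv_len2 in np.linspace(0.1, 5.0, 5):
    K = rbf_kernel(X_train[:5], X_train[:5], sigma2=1.0, inv_len2=inv_len2)
    # The kernel diagonal equals sigma^2 = 1 regardless of lengthscale.
    assert np.allclose(np.diag(K), 1.0)
```

This sketch only illustrates the hyperparameter conventions quoted above; the paper's actual models are deep GPs built on GPyTorch with inference following Salimbeni and Deisenroth (2017).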