Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Variational Inference for Gaussian Process Models with Linear Complexity
Authors: Ching-An Cheng, Byron Boots
NeurIPS 2017 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run several experiments on regression tasks and show that this decoupled approach greatly outperforms previous sparse variational Gaussian process inference procedures. |
| Researcher Affiliation | Academia | Ching-An Cheng and Byron Boots, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332 |
| Pseudocode | Yes | Algorithm 1: Online Learning with DGPs. Parameters: Mα, Mβ, Nm, ΔN. Input: M(a, B, α, β, θ). 1: θ0 ← initializeHyperparameters(sampleMinibatch(D, Nm)); 2: for t = 1 . . . T do; 3: Dt ← sampleMinibatch(D, Nm); 4: M.addBasis(Dt, ΔN, Mα, Mβ); 5: M.updateModel(Dt, t); 6: end for |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code, nor does it include a link to a code repository for the methodology described. |
| Open Datasets | Yes | Inverse Dynamics of KUKA Robotic Arm: This dataset records the inverse dynamics of a KUKA arm performing rhythmic motions at various speeds [17]. Walking MuJoCo: MuJoCo (Multi-Joint dynamics with Contact) is a physics engine for research in robotics, graphics, and animation, created by [25]. |
| Dataset Splits | No | The paper specifies 90% training data and 10% testing data for the KUKA1 and MUJOCO datasets, but it does not explicitly mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions "our current Matlab implementation" but does not specify the Matlab version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The step size for each stochastic algorithm is scheduled according to γt = γ0(1 + 0.1t)^(−1), where γ0 ∈ {10^−1, 10^−2, 10^−3} is selected manually for each algorithm to maximize the improvement in the objective function after the first 100 iterations. We test each stochastic algorithm for T = 2000 iterations with mini-batches of size Nm = 1024 and increment size ΔN = 128. Finally, the model sizes used in the experiments are listed as follows: Mα = 128², Mβ = 128 for SVDGP; M = 1024 for SVI; M = 256 for iVSGPR; M = 1024, N = 4096 for VSGPR; N = 1024 for GP. |
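The experiment setup above pairs a decayed step-size schedule with the online loop of Algorithm 1. The following is a minimal sketch of that combination; the method and attribute names (`sample_minibatch`, `add_basis`, `update_model`) are hypothetical stand-ins mirroring the pseudocode, not an API from the paper's (unreleased) implementation.

```python
def step_size(t, gamma_0=1e-2):
    # Schedule from the experiment setup: gamma_t = gamma_0 * (1 + 0.1 t)^(-1),
    # with gamma_0 chosen per algorithm from {1e-1, 1e-2, 1e-3}.
    return gamma_0 / (1.0 + 0.1 * t)

def run_online_learning(model, data, T=2000, batch_size=1024, increment=128):
    # Skeleton of Algorithm 1: each iteration samples a mini-batch,
    # grows the decoupled basis, then takes a stochastic update with
    # the decayed step size. `model` and `data` are hypothetical objects.
    for t in range(1, T + 1):
        batch = data.sample_minibatch(batch_size)   # step 3
        model.add_basis(batch, increment)           # step 4
        model.update_model(batch, step_size(t))     # step 5
```

The schedule keeps early updates large (roughly γ0 for small t) and decays like 1/t thereafter, which matches the paper's choice of tuning γ0 on the first 100 iterations only.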