Revisiting Active Sets for Gaussian Process Decoders
Authors: Pablo Moreno-Muñoz, Cilie Feldager, Søren Hauberg
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of the SAS approach for stochastic learning of GP decoders, both the deterministic GP-LVM (Lawrence, 2004) and its Bayesian counterpart (Titsias and Lawrence, 2010). For this purpose, we consider three different datasets: MNIST (LeCun et al., 1998), FASHION MNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009). |
| Researcher Affiliation | Academia | Pablo Moreno-Muñoz, Cilie W. Feldager, Søren Hauberg; Section for Cognitive Systems, Technical University of Denmark (DTU); {pabmo,cife,sohau}@dtu.dk |
| Pseudocode | Yes | Algorithm 1: SAS for GP decoders; Algorithm 2: SAS for Bayesian GP decoders. (A hedged sketch of one such training step appears after the table.) |
| Open Source Code | Yes | We also provide PYTORCH code that allows for reproducing all experiments and models. The code is publicly available in the repository: https://github.com/pmorenoz/SASGP/ including baselines. |
| Open Datasets | Yes | For this purpose, we consider three different datasets: MNIST (LeCun et al., 1998), FASHION MNIST (Xiao et al., 2017) and CIFAR-10 (Krizhevsky, 2009). (A minimal loading sketch appears after the table.) |
| Dataset Splits | No | The paper mentions using well-known datasets but does not explicitly state the validation split percentages, absolute counts, or specific methodology used for validation. |
| Hardware Specification | No | The paper states 'We observe convergence in less than 2h runtime on CPU for most cases.' and 'when reporting running times we consistently used 64 bits', but does not provide specific details on the CPU model, GPU models, memory, or other hardware components used for experiments. |
| Software Dependencies | No | The paper mentions 'PYTORCH code' and 'ADAM optimizer (Kingma and Ba, 2015)' but does not specify exact version numbers for these software dependencies, which is required for reproducibility. |
| Experiment Setup | Yes | In all experiments, the learning rates are set in the range [10^-4, 10^-2], the maximum number of epochs considered is 300, and we use the ADAM optimizer (Kingma and Ba, 2015). For SAS experiments, we only consider batch sizes greater than the active set size, as this is a requirement for SAS. (A configuration sketch follows the table.) |
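
The pseudocode row refers to Algorithm 1 (SAS for GP decoders). As a rough illustration of the idea, the sketch below evaluates a GP marginal log-likelihood on a random active set drawn from a minibatch. This is our minimal reconstruction, not the authors' implementation (see https://github.com/pmorenoz/SASGP/ for that); the names `rbf_kernel`, `sas_step`, and `jitter`, and the plain subset-of-data estimator, are assumptions on our part.

```python
# Hedged sketch of a stochastic-active-set training step for a GP decoder.
# NOT the authors' Algorithm 1; a subset-of-data stand-in for illustration.
import math
import torch

def rbf_kernel(X1, X2, lengthscale, variance):
    """Standard RBF kernel matrix between two sets of latent points."""
    d2 = torch.cdist(X1, X2).pow(2)
    return variance * torch.exp(-0.5 * d2 / lengthscale**2)

def sas_step(Y_batch, X_batch, idx_active, lengthscale, variance, noise, jitter=1e-5):
    """Negative GP marginal log-likelihood evaluated on the active set only."""
    Ya, Xa = Y_batch[idx_active], X_batch[idx_active]   # active set of m points
    m, D = Ya.shape
    K = rbf_kernel(Xa, Xa, lengthscale, variance) + (noise + jitter) * torch.eye(m)
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(Ya, L)                 # K^{-1} Y, shape (m, D)
    log_det = 2.0 * torch.log(torch.diag(L)).sum()
    # Multi-output Gaussian log-likelihood, summed over the D output channels;
    # rescaling (e.g. by N/m) to target a full-data objective is left out here.
    ll = (-0.5 * (Ya * alpha).sum()
          - 0.5 * D * log_det
          - 0.5 * m * D * math.log(2.0 * math.pi))
    return -ll                                          # negative, to minimize

# Usage sketch: draw a minibatch, then an active set inside it.
Y, X = torch.randn(1000, 784), torch.randn(1000, 2, requires_grad=True)
batch = torch.randperm(1000)[:256]
active = torch.randperm(256)[:128]
loss = sas_step(Y[batch], X[batch], active,
                torch.tensor(1.0), torch.tensor(1.0), torch.tensor(0.1))
loss.backward()   # gradients flow to the latent codes X
```

The Cholesky factorization keeps the per-step cost at O(m^3) in the active-set size m, which is what makes the constraint quoted in the experiment-setup row meaningful: the active set must fit inside every minibatch.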
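All three datasets quoted in the open-datasets row ship with torchvision, so obtaining them is routine. The snippet below is a convenience sketch; the flattening transform and the `./data` root are our choices, not taken from the paper.

```python
# Minimal sketch: fetching the three benchmark datasets via torchvision.
from torchvision import datasets, transforms

to_vector = transforms.Compose([
    transforms.ToTensor(),                    # HxW(xC) uint8 -> CxHxW float in [0, 1]
    transforms.Lambda(lambda x: x.view(-1)),  # flatten to one observation vector
])

mnist   = datasets.MNIST("./data", train=True, download=True, transform=to_vector)
fashion = datasets.FashionMNIST("./data", train=True, download=True, transform=to_vector)
cifar10 = datasets.CIFAR10("./data", train=True, download=True, transform=to_vector)

print(len(mnist), mnist[0][0].shape)   # 60000, torch.Size([784])
```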
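Finally, the quoted experiment setup maps directly onto an optimizer configuration. In the sketch below, only the learning-rate range, the 300-epoch cap, the ADAM choice, and the batch-size > active-set-size constraint come from the paper; the concrete batch size, active set size, and latent dimensionality are placeholder assumptions.

```python
# Configuration sketch for the quoted setup; values marked "assumed" are ours.
import torch

learning_rate = 1e-3                      # the paper sweeps [10^-4, 10^-2]
max_epochs = 300                          # stated maximum number of epochs
batch_size, active_set_size = 256, 128    # assumed; SAS needs batch > active set
assert batch_size > active_set_size, "SAS requires batch size > active set size"

N, D, q = 60_000, 784, 2                  # e.g. MNIST with a 2-d latent space
X = torch.randn(N, q, requires_grad=True)        # learnable latent codes (GP-LVM)
log_hypers = torch.zeros(3, requires_grad=True)  # log lengthscale/variance/noise

# ADAM over latents and hyperparameters, as in the quoted setup; the training
# loop would then minimize a stochastic estimator like the sas_step sketch above.
optimizer = torch.optim.Adam([X, log_hypers], lr=learning_rate)
```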