Revisiting the Sample Complexity of Sparse Spectrum Approximation of Gaussian Processes

Authors: Minh Hoang, Nghia Hoang, Hai Pham, David Woodruff

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our proposed method on several benchmarks with promising results supporting our theoretical analysis.
Researcher Affiliation | Collaboration | Quang Minh Hoang, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 (qhoang@andrew.cmu.edu); Trong Nghia Hoang, MIT-IBM Watson AI Lab, IBM Research, Cambridge, MA 02142 (nghiaht@ibm.com); Hai Pham, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213 (htpham@cs.cmu.edu); David P. Woodruff, Department of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 (dwoodruf@cs.cmu.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks with clear labels such as 'Algorithm' or 'Pseudocode'.
Open Source Code | Yes | Our experimental code is released at https://github.com/hqminh/gp_sketch_nips.
Open Datasets | Yes | Datasets. This section presents our empirical studies on two real datasets: (a) the ABALONE dataset [42]... and (b) the GAS SENSOR dataset [5, 6]...
Dataset Splits | No | The paper mentions training data and performance metrics but does not provide specific details on how the data was split into training, validation, and test sets (e.g., percentages, counts, or cross-validation methodology).
Hardware Specification | Yes | All reported performances were averaged over 5 independent runs on a computing server with a Tesla K40 GPU with 12GB RAM.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | No | The paper states 'The detailed parameterization of our entire algorithm is provided in Appendix C.', so the specific experimental setup details are not present in the main text.
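For readers unfamiliar with the technique named in the paper's title, the sketch below illustrates the general idea of a sparse spectrum (random Fourier feature) approximation of a Gaussian process regressor. It is a minimal illustration only, not the authors' released implementation at https://github.com/hqminh/gp_sketch_nips; the RBF kernel, feature count, noise level, and the synthetic data standing in for a dataset such as ABALONE are all assumptions chosen for brevity.

# Minimal, illustrative sketch of sparse spectrum (random Fourier feature)
# GP regression. NOT the authors' released code; all settings are assumptions.
import numpy as np

def rff_features(X, num_features=100, lengthscale=1.0, seed=0):
    """Map inputs to random Fourier features approximating an RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral frequencies sampled from the RBF kernel's spectral density.
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

def ssgp_predict(X_train, y_train, X_test, num_features=100, noise=0.1):
    """Predictive mean via Bayesian linear regression in the random feature space."""
    Phi = rff_features(X_train, num_features)
    Phi_test = rff_features(X_test, num_features)  # same seed -> same features
    A = Phi.T @ Phi + noise**2 * np.eye(num_features)
    w_mean = np.linalg.solve(A, Phi.T @ y_train)
    return Phi_test @ w_mean

# Toy usage on synthetic data (stand-in for a real dataset such as ABALONE).
X = np.random.rand(200, 3)
y = np.sin(X.sum(axis=1)) + 0.05 * np.random.randn(200)
preds = ssgp_predict(X[:150], y[:150], X[150:])
print(preds.shape)  # (50,)

Working in an M-dimensional random feature space replaces the O(n^3) cost of exact GP inference with an O(nM^2 + M^3) linear solve, which is the computational motivation behind sparse spectrum approximations; the paper's contribution concerns how large M must be (the sample complexity) for this approximation to be accurate.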