Adversarially Robust Optimization with Gaussian Processes

Authors: Ilija Bogunovic, Jonathan Scarlett, Stefanie Jegelka, Volkan Cevher

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally demonstrate several potential applications of interest using real-world data sets, and we show that STABLEOPT consistently succeeds in finding a stable maximizer where several baseline methods fail."
Researcher Affiliation | Academia | Ilija Bogunovic (LIONS, EPFL, ilija.bogunovic@epfl.ch); Jonathan Scarlett (National University of Singapore, scarlett@comp.nus.edu.sg); Stefanie Jegelka (MIT CSAIL, stefje@mit.edu); Volkan Cevher (LIONS, EPFL, volkan.cevher@epfl.ch)
Pseudocode | Yes | Algorithm 1: The STABLEOPT algorithm (a minimal sketch of its selection rule appears after this table)
Open Source Code | No | The paper provides no explicit code release or repository link for STABLEOPT or its variants. It mentions 'publicly available code' in the context of the robot pushing experiment, but that refers to a third-party objective function from [35], not the authors' own implementation.
Open Datasets | Yes | "We use the MovieLens-100K dataset, which consists of 1682 movies and 943 users."
Dataset Splits | No | The paper states 'We use 10% of the user data for training' for the MovieLens-100K dataset, but it does not give a complete split specification (exact percentages, sample counts, or splitting methodology) for training, validation, and testing across its experiments.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed machine specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x, or a specific GP library version), that would be needed to replicate the experiments.
Experiment Setup | Yes | "As observed in existing works (e.g., [7,31]), the theoretical choice of β_t is overly conservative. We therefore adopt a constant value of β_t^{1/2} = 2.0 in each of the above methods... The observation noise standard deviation is set to 0.1, and the number of sampling rounds is T = 100. We perform a fully Bayesian treatment of the hyperparameters, sampling every 10 rounds as in [17,35]."
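To connect the Pseudocode and Experiment Setup rows, the following is a minimal sketch of STABLEOPT's max-min selection rule on a 1-D grid, using the constants reported above (β_t^{1/2} = 2.0, noise standard deviation 0.1, T = 100). The toy objective, grid, kernel length-scale, and perturbation radius eps are illustrative assumptions, and scikit-learn's default marginal-likelihood refit stands in for the paper's fully Bayesian hyperparameter treatment; this is a sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D objective (illustrative only; the paper uses synthetic and
# real-world benchmarks instead).
def f(x):
    return np.sin(3.0 * x) + 0.5 * np.cos(5.0 * x)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 2.0, 201)          # finite candidate set (assumed)
eps = 0.1                                  # perturbation radius (assumed)
beta_sqrt, noise_std, T = 2.0, 0.1, 100    # constants reported in the paper

X_obs, y_obs = [], []
# Each fit re-optimizes the kernel by marginal likelihood (sklearn's
# default), a simplification of the paper's fully Bayesian treatment.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2),
                              alpha=noise_std ** 2)

for t in range(T):
    if X_obs:
        gp.fit(np.asarray(X_obs).reshape(-1, 1), np.asarray(y_obs))
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    ucb = mu + beta_sqrt * sd
    lcb = mu - beta_sqrt * sd

    # Outer step: x_t = argmax_x min_{|delta| <= eps} ucb(x + delta).
    worst_case_ucb = np.array(
        [ucb[np.abs(grid - x) <= eps].min() for x in grid])
    x_t = grid[worst_case_ucb.argmax()]

    # Inner step: query the adversarially perturbed point
    # x_t + argmin_{|delta| <= eps} lcb(x_t + delta).
    nbr = np.abs(grid - x_t) <= eps
    x_query = grid[nbr][lcb[nbr].argmin()]

    X_obs.append(x_query)
    y_obs.append(f(x_query) + rng.normal(0.0, noise_std))

# Recommend the grid point with the best worst-case posterior mean
# (a simple final rule for illustration; the paper's reporting rule differs).
gp.fit(np.asarray(X_obs).reshape(-1, 1), np.asarray(y_obs))
mu, _ = gp.predict(grid.reshape(-1, 1), return_std=True)
robust = np.array([mu[np.abs(grid - x) <= eps].min() for x in grid])
print("estimated eps-stable maximizer:", grid[robust.argmax()])
```

The nested argmax/argmin pair mirrors the max-min structure that distinguishes STABLEOPT from standard GP-UCB; on a finite grid, both reductions are simple masked array scans.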