Model Inversion Networks for Model-Based Optimization

Authors: Aviral Kumar, Sergey Levine

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate MINs on high-dimensional model-based optimization problems over images, protein designs, and neural network controller parameters, and bandit optimization from logged data. We experimentally demonstrate MINs in a range of settings, showing that they outperform prior methods on high-dimensional input spaces, such as images, neural network parameters, and protein designs, and substantially outperform prior methods on contextual bandit optimization from logged data.
Researcher Affiliation | Academia | Aviral Kumar, Sergey Levine; Electrical Engineering and Computer Sciences, UC Berkeley; aviralk@berkeley.edu
Pseudocode | Yes | Algorithm 1: Generic Algorithm for MINs; Algorithm 2: Active MINs with Randomized Labeling. (A hedged code sketch of Algorithm 1 follows this table.)
Open Source Code | No | The paper does not provide an explicit statement of, or link to, open-source code for the MINs method itself. It references external codebases such as PyTorch GAN and BanditNet.
Open Datasets | Yes | We evaluate on two datasets, which are formed by: (1) selecting random labels xi for each context ci; (2) selecting the correct label 49% of the time, which matches the protocol in [15]; MNIST [17] dataset; IMDB-Wiki faces [26] dataset; We use the trained scoring oracles released by [4]. (A sketch of the logged-bandit construction appears after this table.)
Dataset Splits | No | The paper mentions 'training examples' and a 'test dataset' but does not explicitly describe the training/validation/test splits (e.g., percentages, sample counts, or a citation for the split used).
Hardware Specification | No | The paper acknowledges 'compute support from Google, Amazon, and NVIDIA' but does not give specific hardware details (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper states the method is 'implemented with PyTorch' in Appendix D.1, but does not give the PyTorch version or the versions of other software dependencies.
Experiment Setup | Yes | For training, we use the Adam optimizer with lr = 0.0001. (This optimizer configuration appears in the Algorithm 1 sketch below.)
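To make the Pseudocode and Experiment Setup rows above concrete, here is a minimal sketch of what Algorithm 1 (the generic MIN procedure) could look like in PyTorch: train an inverse map f^{-1}(y, z) -> x with a conditional-GAN objective on score-reweighted data, then query it at a high target score at test time. The class names (InverseMap, Discriminator), the softmax reweighting, and the choice of the best observed score as the test-time target are illustrative assumptions, not the authors' released implementation; only the Adam optimizer with lr = 0.0001 is taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical conditional inverse map f^{-1}(y, z) -> x and discriminator D(x, y).
# The architectures are small placeholders, not the ones used in the paper.
class InverseMap(nn.Module):
    def __init__(self, x_dim, z_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim + 1, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, y, z):
        return self.net(torch.cat([y, z], dim=-1))

class Discriminator(nn.Module):
    def __init__(self, x_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + 1, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def train_min(xs, ys, steps=1000, z_dim=32, temperature=1.0):
    """Sketch of generic MIN training on xs: (N, x_dim), ys: (N, 1) float tensors.
    Fits x = f^{-1}(y, z) with a conditional-GAN loss on data reweighted toward
    high objective values (the reweighting scheme shown is an assumption)."""
    x_dim = xs.shape[1]
    gen, disc = InverseMap(x_dim, z_dim), Discriminator(x_dim)
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)  # Adam, lr = 0.0001 (paper)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss(reduction="none")
    ones, zeros = torch.ones(len(ys)), torch.zeros(len(ys))

    # Illustrative reweighting: softmax over scores, so high-y points count more.
    w = torch.softmax(ys.squeeze(-1) / temperature, dim=0) * len(ys)

    for _ in range(steps):
        z = torch.randn(len(ys), z_dim)
        x_fake = gen(ys, z)

        # Discriminator step: real (x, y) pairs vs. generated (f^{-1}(y, z), y) pairs.
        d_loss = (w * (bce(disc(xs, ys).squeeze(-1), ones)
                       + bce(disc(x_fake.detach(), ys).squeeze(-1), zeros))).mean()
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: make the inverse map's outputs look like real inputs with score y.
        g_loss = (w * bce(disc(gen(ys, z), ys).squeeze(-1), ones)).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return gen

def propose_designs(gen, ys, n=16, z_dim=32):
    """Test-time sketch: query the inverse map at a high target score (here simply
    the best score seen in the data) to propose candidate inputs x*."""
    y_star = torch.full((n, 1), ys.max().item())
    return gen(y_star, torch.randn(n, z_dim))
```

Under these assumptions, usage would be: gen = train_min(xs, ys); candidates = propose_designs(gen, ys).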
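The Open Datasets row quotes the protocol for constructing the logged contextual bandit datasets (contexts from MNIST, with the logging policy either choosing labels at random or choosing the correct label 49% of the time, following [15]). The snippet below is a minimal sketch of that construction, assuming a 0/1 reward for picking the correct label; the function name and the recorded propensity field are illustrative additions, not details stated in the paper.

```python
import numpy as np

def make_logged_bandit_data(contexts, true_labels, num_classes=10,
                            p_correct=None, seed=0):
    """Sketch of the logged-data construction quoted above.

    p_correct=None  -> dataset (1): a uniformly random label per context.
    p_correct=0.49  -> dataset (2): the correct label 49% of the time,
                       otherwise a uniformly random incorrect label.
    Returns (context, chosen_label, reward, propensity) tuples; the propensity
    field is an extra assumption (useful for off-policy learners such as BanditNet).
    """
    rng = np.random.default_rng(seed)
    logged = []
    for c, y in zip(contexts, true_labels):
        if p_correct is None:
            a = rng.integers(num_classes)
            prop = 1.0 / num_classes
        else:
            if rng.random() < p_correct:
                a = y
                prop = p_correct
            else:
                a = rng.choice([k for k in range(num_classes) if k != y])
                prop = (1.0 - p_correct) / (num_classes - 1)
        reward = 1.0 if a == y else 0.0  # 0/1 reward for the correct label (assumed)
        logged.append((c, int(a), reward, prop))
    return logged
```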