Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation
Authors: Justin Fu, Sergey Levine
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we evaluate our method on a selection of tasks from the Design Benchmark (Anonymous, 2021), where we show that our method performs competitively with state-of-the-art baselines. |
| Researcher Affiliation | Academia | Justin Fu & Sergey Levine Department of Electrical Engineering and Computer Science University of California, Berkeley {justinjfu,svlevine}@eecs.berkeley.edu |
| Pseudocode | Yes | We outline the high-level pseudocode in Algorithm 1, and present a more detailed implementation in Appendix A.2.1. |
| Open Source Code | Yes | Our code is available at https://sites.google.com/view/nemo-anonymous |
| Open Datasets | Yes | We evaluated on 6 tasks from the Design-bench (Anonymous, 2021), modeled after real-world design problems in materials engineering (Hamidieh, 2018), biology (Sarkisyan et al., 2016), and chemistry (Gaulton et al., 2012). |
| Dataset Splits | No | The paper lists the Design-bench tasks it evaluates on, but does not describe how the data was split into training, validation, and test sets, nor whether standard splits were used. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU or CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list the software dependencies (frameworks, libraries, or version numbers) required to reproduce the experiments. |
| Experiment Setup | Yes | The details for the tasks, baselines, and experimental setup are as follows, and hyperparameter choices with additional implementation details can be found in Appendix A.2.2. The reported hyperparameter table includes: Learning rate αθ, Learning rate αx, Network Size, Discretization K, Batch size M, Target update rate τ. |
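The paper's method discretizes the target value into K bins (the Discretization K hyperparameter above) and trains one model per candidate label, then normalizes the resulting likelihoods to form the conditional NML distribution. The normalization step itself can be sketched as follows — a minimal illustrative sketch, not the authors' code; the function name and log-space formulation are assumptions for numerical stability:

```python
import numpy as np

def conditional_nml(log_likelihoods):
    """Approximate conditional NML over K discretized labels.

    log_likelihoods[k] is log p_k(y_k | x), where model k was trained with
    the query point x assigned candidate label y_k (hypothetical interface).
    Returns the normalized distribution p_NML(y_k | x) ∝ p_k(y_k | x).
    """
    z = np.asarray(log_likelihoods, dtype=float)
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()       # normalize so the K probabilities sum to 1

# Example with K = 3 candidate labels and made-up log-likelihoods:
p_nml = conditional_nml([-0.5, -1.0, -2.0])
```

Normalizing in log space avoids overflow/underflow when the per-bin likelihoods span many orders of magnitude, which is common when K is large.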