Autofocused oracles for model-based design
Authors: Clara Fannjiang, Jennifer Listgarten
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "demonstrate the promise of autofocusing empirically" and "demonstrate empirically that autofocusing holds promise for improving oracle-based design" |
| Researcher Affiliation | Academia | Clara Fannjiang and Jennifer Listgarten, Department of Electrical Engineering & Computer Sciences, University of California, Berkeley, Berkeley, CA 94720, {clarafy,jennl}@berkeley.edu |
| Pseudocode | Yes | "Pseudo-code for autofocusing can be found in the Supplementary Material (Algorithms 1 and 2)" and "See Algorithm 3 in the Supplementary Material for pseudocode of this procedure." A hedged sketch of the autofocusing loop is given after the table. |
| Open Source Code | Yes | Code for our experiments is available at https://github.com/clarafy/autofocused_oracles. |
| Open Datasets | Yes | "we used a dataset comprising 21,263 superconducting materials paired with their critical temperatures [44]" |
| Dataset Splits | No | The paper mentions selecting 'training points' and evaluates 'best samples', but does not explicitly describe a separate validation split or how validation was performed for model tuning or early stopping. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'gradient-boosted regression trees' and 'neural networks' but does not specify the software frameworks, libraries, or their version numbers (e.g., TensorFlow, PyTorch, scikit-learn versions) required to reproduce the experiments. |
| Experiment Setup | Yes | "We outline our experiments here, with details deferred to the Supplementary Material S4." and "In all cases, we used a full-rank multivariate normal for the search model, and flattened the importance weights used for autofocusing to $w_i^\alpha$ [24] with $\alpha = 0.2$ to help control variance." and "for our oracle, we used $\{(x_i, y_i)\}_{i=1}^n$ to train an ensemble of three neural networks that output both $\mu_\beta(x)$ and $\sigma^2_\beta(x)$, to provide predictions of the form $p_\beta(y \mid x) = \mathcal{N}(\mu_\beta(x), \sigma^2_\beta(x))$ [46]." Illustrative sketches of the flattened importance weights and the ensemble oracle follow the table. |
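To make the reported autofocusing procedure concrete, below is a minimal Python sketch of the retraining loop the paper points to (Algorithms 1 and 2 in its Supplementary Material): alternate between updating the search model with the current oracle and retraining the oracle under importance weights that focus it on the region the search model currently occupies. The callables `fit_oracle`, `update_search_model`, `p0_logpdf`, and `search_logpdf` are hypothetical placeholders, not the authors' API; the released code at https://github.com/clarafy/autofocused_oracles is the authoritative implementation.

```python
import numpy as np

def autofocus(train_x, train_y, fit_oracle, update_search_model,
              init_search_params, p0_logpdf, search_logpdf,
              n_iters=10, alpha=0.2):
    """Sketch of autofocusing: each iteration updates the search model,
    then retrains the oracle on the original training data reweighted
    toward the search model's current distribution.

    All function arguments are assumed interfaces:
      fit_oracle(x, y, weights)         -> trained oracle
      update_search_model(oracle, p)    -> updated search-model params
      p0_logpdf(x)                      -> log density under training dist.
      search_logpdf(x, p)               -> log density under search model
    """
    params = init_search_params
    # Initial oracle fit with uniform weights.
    oracle = fit_oracle(train_x, train_y, weights=np.ones(len(train_x)))
    for _ in range(n_iters):
        # Step 1: one search-model update using the current oracle
        # (e.g., an iteration of an estimation-of-distribution method).
        params = update_search_model(oracle, params)
        # Step 2: importance weights w_i = p_theta(x_i) / p_0(x_i),
        # flattened to w_i^alpha (alpha = 0.2 in the paper) to control variance.
        log_w = search_logpdf(train_x, params) - p0_logpdf(train_x)
        weights = np.exp(alpha * log_w)
        # Step 3: retrain the oracle on the reweighted training data.
        oracle = fit_oracle(train_x, train_y, weights=weights)
    return oracle, params
```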
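The oracle itself is described as an ensemble of three neural networks, each outputting a predictive mean $\mu_\beta(x)$ and variance $\sigma^2_\beta(x)$ [46]. Below is a minimal PyTorch sketch consistent with that description; the architecture, optimizer, and training schedule are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn as nn

class GaussianMLP(nn.Module):
    """One ensemble member: predicts a mean and variance so that
    p_beta(y | x) = N(mu_beta(x), sigma^2_beta(x)).
    Layer sizes here are assumptions for illustration."""
    def __init__(self, d_in, d_hidden=100):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mu_head = nn.Linear(d_hidden, 1)
        self.logvar_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.body(x)
        mu = self.mu_head(h)
        var = torch.exp(self.logvar_head(h))  # exponentiate to keep variance positive
        return mu, var

def weighted_gaussian_nll(mu, var, y, w):
    """Per-example Gaussian negative log-likelihood, scaled by the
    (flattened) importance weights used for autofocusing."""
    nll = 0.5 * (torch.log(var) + (y - mu) ** 2 / var)
    return (w * nll.squeeze(-1)).mean()

def train_ensemble(x, y, w, n_members=3, epochs=200, lr=1e-3):
    """Train an ensemble of three mean/variance networks, as in [46]."""
    members = []
    for _ in range(n_members):
        net = GaussianMLP(x.shape[1])
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            mu, var = net(x)
            loss = weighted_gaussian_nll(mu, var, y, w)
            loss.backward()
            opt.step()
        members.append(net)
    return members
```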