Importance-aware Co-teaching for Offline Model-based Optimization

Authors: Ye Yuan, Can (Sam) Chen, Zixuan Liu, Willie Neiswanger, Xue (Steve) Liu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | ICT achieves state-of-the-art results across multiple design-bench tasks, achieving the best mean rank of 3.1 and median rank of 2 among 15 methods. Our source code can be found here. (Section 4: Experimental Results)
Researcher Affiliation | Academia | 1 McGill University, 2 Mila Quebec AI Institute, 3 University of Washington, 4 Stanford University
Pseudocode | Yes | A detailed depiction of the entire algorithm can be found in Algorithm 1.
Open Source Code | Yes | Our source code can be found here.
Open Datasets | Yes | In this study, we conduct experiments on four continuous tasks and three discrete tasks. The continuous tasks include: (a) Superconductor (SuperC) [5]... (b) Ant Morphology (Ant) [1, 14]... (c) D'Kitty Morphology (D'Kitty) [1, 15]... (d) Hopper Controller (Hopper) [1]... Additionally, our discrete tasks include: (e) TF Bind 8 (TF8) [6]... (f) TF Bind 10 (TF10) [6]... (g) NAS [16]...
Dataset Splits | No | The paper describes using an 'offline dataset' and selecting designs from it, but it does not specify explicit train/validation/test splits with percentages or sample counts for the original dataset.
Hardware Specification | Yes | All experiments are run on a single NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using the 'Adam optimizer [46]' and implicitly deep learning frameworks, but does not specify software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9').
Experiment Setup | Yes | The number of iterations, T, is set to 200 for continuous tasks and 100 for discrete tasks. ... The learning rates are set at 1e-3 and 1e-1 for continuous and discrete tasks, respectively. ... with a learning rate of 2e-1 for continuous tasks and 3e-1 for discrete tasks, respectively.
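
To make the quoted setup concrete, here is a minimal, hypothetical sketch (not the authors' released code) of how the reported iteration counts and design learning rates could drive gradient ascent of a candidate design against a trained proxy. The names proxy, x_init, and task_is_continuous are illustrative assumptions, and the 2e-1 / 3e-1 learning rates quoted for the remaining component are not modeled here.

```python
# Minimal sketch, assuming a PyTorch proxy model; not the authors' implementation.
import torch

def optimize_design(proxy, x_init, task_is_continuous=True):
    # Hyperparameters quoted in the Experiment Setup row above.
    T = 200 if task_is_continuous else 100      # design-update iterations
    lr = 1e-3 if task_is_continuous else 1e-1   # design learning rate

    # Treat the design itself as the optimization variable.
    x = x_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)    # the paper reports using Adam [46]

    for _ in range(T):
        optimizer.zero_grad()
        # Ascend the proxy's predicted score by descending its negation.
        loss = -proxy(x).mean()
        loss.backward()
        optimizer.step()
    return x.detach()
```

The full method in Algorithm 1 additionally involves importance-aware co-teaching among proxies, which this sketch omits.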