Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization
Authors: Taeyoung Yun, Sujin Yun, Jaewoo Lee, Jinkyoo Park
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiment results demonstrate that our method outperforms competitive baselines on Design-Bench and its practical variants. |
| Researcher Affiliation | Collaboration | Taeyoung Yun¹, Sujin Yun¹, Jaewoo Lee¹, Jinkyoo Park¹,²; ¹Korea Advanced Institute of Science and Technology (KAIST); ²Omelet.ai |
| Pseudocode | Yes | Algorithm 1: Trajectory construction procedure of GTG (an illustrative sketch follows the table) |
| Open Source Code | Yes | The code is publicly available at https://github.com/dbsxodud-11/GTG. |
| Open Datasets | Yes | We empirically demonstrate that our method achieves superior performance on Design-Bench, a well-known benchmark for MBO with a variety of real-world tasks. ... Design-Bench [5] is the most widely used benchmark for evaluating MBO algorithms. (A loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions training models on the offline dataset and using sparse/noisy variants for evaluation, but does not explicitly provide training, validation, or test dataset splits with percentages or sample counts. |
| Hardware Specification | Yes | All training is done with a single NVIDIA RTX 3090 GPU and takes approximately 30 minutes for discrete tasks and 2 hours for continuous tasks. |
| Software Dependencies | No | The paper mentions using "Adam optimizer [40]" and "temporal U-Net architecture from Diffuser [18]" but does not specify version numbers for these or other software libraries/frameworks like Python or PyTorch. |
| Experiment Setup | Yes | The hyperparameters we used for modeling and training are listed in Table 10 (Hyperparameters for Training Diffusion Models) and Table 11 (Hyperparameters for Training Proxy). |
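
The Pseudocode row above only names Algorithm 1; the procedure itself is not reproduced on this page. Purely as an illustration of what a score-ascending trajectory builder over an offline dataset could look like, here is a minimal sketch. The function name, horizon, and sampling scheme are assumptions for illustration, not the paper's Algorithm 1.

```python
# Hypothetical sketch only: NOT the paper's Algorithm 1. It assumes
# trajectories are formed by sampling offline designs and ordering each
# sample by score, so every trajectory reads as an improvement path.
import numpy as np

def build_score_ascending_trajectories(x, y, horizon=8, n_traj=1000, seed=0):
    """x: (N, D) offline designs; y: (N,) or (N, 1) scores.
    Returns trajectories of shape (n_traj, horizon, D) and (n_traj, horizon)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y).reshape(-1)
    idx = rng.integers(0, len(y), size=(n_traj, horizon))  # random draws per trajectory
    order = np.argsort(y[idx], axis=1)                     # sort each draw by score
    idx = np.take_along_axis(idx, order, axis=1)           # ascending-score ordering
    return x[idx], y[idx]
```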
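
The Open Datasets row cites Design-Bench [5], which ships as the public design-bench package. For readers verifying data availability, a minimal loading sketch is shown below; the specific task name is an arbitrary example, and any other registered Design-Bench task loads the same way.

```python
# Minimal sketch using the public design-bench package; the task name is
# an arbitrary example from the standard Design-Bench task registry.
import design_bench

task = design_bench.make("TFBind8-Exact-v0")  # downloads the dataset on first use

# Offline dataset: candidate designs and their scores.
x, y = task.x, task.y
print(x.shape, y.shape)

# Proposed designs can be scored with the oracle bundled with the task.
scores = task.predict(x[:128])
```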