Diffusion Models for Black-Box Optimization
Authors: Siddarth Krishnamoorthy, Satvik Mehul Mashkaria, Aditya Grover
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we conduct experiments on the Design-Bench benchmark (Trabucco et al., 2022) and show that DDOM achieves results competitive with state-of-the-art baselines. |
| Researcher Affiliation | Academia | 1Department of Computer Science, UCLA. Correspondence to: Siddarth Krishnamoorthy <siddarthk@cs.ucla.edu>. |
| Pseudocode | Yes | Algorithm 1 Denoising Diffusion Optimization Models |
| Open Source Code | Yes | Our implementation of DDOM can be found at https://github.com/siddarthk97/ddom. |
| Open Datasets | Yes | Empirically, we test DDOM on the Design-Bench suite (Trabucco et al., 2022) of tasks for offline BBO. (see the data-loading sketch after this table) |
| Dataset Splits | No | The paper mentions using an 'offline dataset D' and evaluating on a 'budget of Q = 256 points' but does not specify explicit training, validation, and test splits with percentages or sample counts. |
| Hardware Specification | Yes | We train our model on one RTX A5000 GPU and report results averaged over 5 seeds. |
| Software Dependencies | No | The paper states: 'We build on top of Huang et al. (2021) implementation of score based diffusion models (linked here).' However, it does not specify version numbers for Python, PyTorch, or any other critical software libraries used for implementation. |
| Experiment Setup | Yes | We instantiate DDOM using a simple feedforward neural network with 2 hidden layers, width of 1024 and ReLU activation. We train using a fixed learning rate of 0.001 and batch size of 128. We set the minimum and maximum noise variance to be 0.01 and 2.0 respectively. We use the same value of γ = 2.0 across all experiments. ... We use a dropout probability of 0.15, i.e. 15% of the time the conditioning value is set to zero (see the setup sketch after this table) |
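The "Open Datasets" row points to the Design-Bench suite, which is distributed as a Python package. Below is a minimal loading sketch, assuming the design-bench `make`/`predict` API from Trabucco et al. (2022); the task name is one of the benchmark's registered tasks and is purely illustrative.

```python
# Hedged sketch: load a Design-Bench task and score candidates with its
# oracle, assuming the design-bench package API (Trabucco et al., 2022).
import design_bench

task = design_bench.make("TFBind8-Exact-v0")  # illustrative task name
x, y = task.x, task.y  # offline dataset of designs and their scores
print(x.shape, y.shape)

# Score a candidate batch with the task oracle; the paper evaluates
# with a budget of Q = 256 sampled designs.
scores = task.predict(x[:256])
```

This is consistent with the "Dataset Splits" row: each task exposes a single offline dataset, and evaluation goes through the task oracle rather than a predefined held-out split.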
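The "Experiment Setup" row fixes enough hyperparameters to sketch the conditional score network and its conditioning dropout. This is a hedged sketch, not the authors' implementation: `ScoreNet`, `train_step`, `guided_score`, the squared-error denoising target, the choice of Adam, and `x_dim` are assumptions; only the architecture (2 hidden layers of width 1024, ReLU), the learning rate of 0.001, batch size 128, γ = 2.0, and the 0.15 conditioning dropout come from the paper's quoted setup.

```python
# Hedged sketch of the setup quoted in the table above; not the authors' code.
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Feedforward score network: 2 hidden layers, width 1024, ReLU."""
    def __init__(self, x_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 2, 1024),  # input: design x, time t, condition y
            nn.ReLU(),
            nn.Linear(1024, 1024),
            nn.ReLU(),
            nn.Linear(1024, x_dim),
        )

    def forward(self, x, t, y):
        # x: (B, x_dim); t and y: (B, 1)
        return self.net(torch.cat([x, t, y], dim=-1))

def train_step(model, opt, x, t, y, target, p_drop=0.15):
    # Conditioning dropout for classifier-free guidance: 15% of the
    # time the conditioning value y is set to zero, as quoted above.
    mask = (torch.rand(y.shape[0], 1) > p_drop).float()
    pred = model(x, t, y * mask)
    loss = ((pred - target) ** 2).mean()  # generic denoising objective (assumption)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def guided_score(model, x, t, y, gamma=2.0):
    # Standard classifier-free guidance combination with the paper's
    # gamma = 2.0; the exact rule DDOM uses is not quoted in this table.
    s_cond = model(x, t, y)
    s_uncond = model(x, t, torch.zeros_like(y))
    return (1 + gamma) * s_cond - gamma * s_uncond

model = ScoreNet(x_dim=32)  # x_dim is a placeholder, not from the paper
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr from paper; Adam assumed
# Training would iterate over minibatches of size 128, per the paper.
```

The `guided_score` helper reflects the likely role of γ = 2.0 and the zeroed conditioning value at sampling time, following the standard classifier-free guidance recipe.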