MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design
Authors: Xiang Fu, Tian Xie, Andrew Scott Rosen, Tommi S. Jaakkola, Jake Allen Smith
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We comprehensively evaluate our model's capability to generate valid and novel MOF structures and its effectiveness in designing outstanding MOF materials for carbon capture applications with molecular simulations. |
| Researcher Affiliation | Collaboration | MIT CSAIL; Microsoft Research AI4Science; Department of Materials Science and Engineering, UC Berkeley; Materials Science Division, Lawrence Berkeley National Laboratory |
| Pseudocode | Yes | Algorithm 1 Optimize building block orientations for MOF assembly (see the orientation-optimization sketch after the table) |
| Open Source Code | Yes | Code available at https://github.com/microsoft/MOFDiff. |
| Open Datasets | Yes | We train and evaluate our method on the BW-DB dataset, which contains 304k MOFs with less than 20 building blocks (as defined by the metal-oxo decomposition algorithm) from the 324k MOFs in Boyd et al. 2019. |
| Dataset Splits | Yes | We use 289k MOFs (95%) for training and the rest for validation. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not specify any particular GPU or CPU models, or other specific hardware configurations used for running experiments. |
| Software Dependencies | Yes | MOFid-v1.1.0, MOFChecker-v0.9.5, egulp-v1.0.0, RASPA2-v2.0.47, LAMMPS-2021-9-29, and Zeo++-v0.3 are used in our experiments. Neural network modules are implemented with PyTorch-v1.11.0 (Paszke et al., 2019), PyG-v2.0.4 (Fey & Lenssen, 2019), and Lightning-v1.3.8 (Falcon & The PyTorch Lightning team, 2019) with CUDA 11.3. (See the requirements sketch after the table.) |
| Experiment Setup | Yes | In our experiments, we use 3 rounds: U = 3, with σ = [3, 1.65, 0.3] and k = [30, 16, 1]. We use the Adam optimizer (Kingma & Ba, 2015) to maximize the model-predicted CO2 working capacity for 5,000 steps with a learning rate of 0.0003. All hyperparameters are reported in Table 3. (See the latent-optimization sketch after the table.) |
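The Pseudocode row refers to Algorithm 1, which optimizes building-block orientations over U = 3 rounds with annealed parameters σ = [3, 1.65, 0.3] and k = [30, 16, 1] (Experiment Setup row). The sketch below illustrates only that annealed multi-round structure; the objective `orientation_loss`, the inner step count, and the learning rate are hypothetical stand-ins, not the paper's algorithm.

```python
import torch

def optimize_orientations(R_init, orientation_loss,
                          sigmas=(3.0, 1.65, 0.3), ks=(30, 16, 1),
                          steps_per_round=100, lr=1e-2):
    """Multi-round orientation optimization with annealed (sigma, k).

    The round count and schedules match the reported hyperparameters
    (U = 3, sigma = [3, 1.65, 0.3], k = [30, 16, 1]); the loss function,
    inner step count, and learning rate are illustrative assumptions.
    """
    R = R_init.detach().clone().requires_grad_(True)
    for sigma, k in zip(sigmas, ks):
        opt = torch.optim.Adam([R], lr=lr)
        for _ in range(steps_per_round):
            opt.zero_grad()
            loss = orientation_loss(R, sigma, k)  # hypothetical objective
            loss.backward()
            opt.step()
    return R.detach()
```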
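The Dataset Splits row reports a 95/5 train/validation partition of the 304k BW-DB structures. A minimal sketch, assuming the MOF identifiers are available as a list; the random seed and uniform shuffle are illustrative assumptions, not taken from the paper:

```python
import random

def train_val_split(mof_ids, train_frac=0.95, seed=0):
    """Randomly partition MOF identifiers into train/validation sets.

    The 95% train fraction matches the paper; the seed and the use of
    a uniform random shuffle are assumptions for illustration.
    """
    rng = random.Random(seed)
    ids = list(mof_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_frac)
    return ids[:n_train], ids[n_train:]

# Example: 304k MOFs -> ~289k train / ~15k validation
train_ids, val_ids = train_val_split(range(304_000))
print(len(train_ids), len(val_ids))  # 288800 15200
```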
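The Software Dependencies row pins exact versions for the neural-network stack. A minimal requirements sketch, assuming the standard PyPI package names apply; the simulation and analysis tools (MOFid, MOFChecker, egulp, RASPA2, LAMMPS, Zeo++) are external binaries installed separately:

```text
# Neural-network stack reported in the paper (CUDA 11.3 build).
# PyPI package names are assumptions; simulation tools are not pip-installable.
torch==1.11.0
torch-geometric==2.0.4
pytorch-lightning==1.3.8
```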
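The Experiment Setup row describes gradient-based latent optimization: Adam maximizes the model-predicted CO2 working capacity for 5,000 steps at a learning rate of 0.0003. A minimal PyTorch sketch under stated assumptions; `property_head` (a predictor mapping a latent code to working capacity) and the latent parameterization are hypothetical stand-ins for the paper's components:

```python
import torch

def optimize_latent(z_init, property_head, steps=5_000, lr=3e-4):
    """Gradient ascent on a latent code to maximize a predicted property.

    Mirrors the reported setup (Adam, 5,000 steps, lr = 0.0003); the
    property predictor and latent parameterization are assumptions.
    """
    z = z_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Negate: Adam minimizes, but we want to maximize working capacity.
        loss = -property_head(z).mean()
        loss.backward()
        opt.step()
    return z.detach()
```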