FlowMM: Generating Materials with Riemannian Flow Matching
Authors: Benjamin Kurt Miller, Ricky T. Q. Chen, Anuroop Sriram, Brandon M Wood
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In addition to standard benchmarks, we validate FlowMM's generated structures with quantum chemistry calculations, demonstrating that it is 3x more efficient, in terms of integration steps, at finding stable materials compared to previous open methods. |
| Researcher Affiliation | Collaboration | 1University of Amsterdam 2FAIR, Meta AI. |
| Pseudocode | No | The paper describes the neural network architecture and mathematical formulations but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | We made our code publicly available on GitHub: https://github.com/facebookresearch/flowmm |
| Open Datasets | Yes | We consider two realistic datasets: MP-20, containing all materials with a maximum of 20 atoms per unit cell and within 0.08 eV/atom of the convex hull in the Materials Project database from around July 2021 (Jain et al., 2013), and MPTS-52, a challenging dataset containing structures with up to 52 atoms per unit cell and separated into time slices where the training, validation, and test sets are organized chronologically by earliest published year in literature (Baird et al., 2024). |
| Dataset Splits | Yes | All datasets are divided into 60% training data, 20% validation data, 20% test data. We use the same splits as Xie et al. (2021) and Jiao et al. (2023). |
| Hardware Specification | No | The paper mentions 'Meta provided the compute' in the acknowledgements, but it does not specify any particular GPU models, CPU models, or other detailed hardware specifications used for the experiments. |
| Software Dependencies | No | The paper mentions software like 'Vienna ab initio simulation package (VASP)', 'CHGNet', and 'AdamW optimizer' but does not provide specific version numbers for these or other software components. |
| Experiment Setup | Yes | We provide general and network hyperparameters in Table 5 and Table 6. We swept learning rate ∈ {0.001, 0.0003}, weight decay ∈ {0.003, 0.001, 0.0}, gradient clipping = 0.5, λl = 1, λf ∈ {100, 200, 300, 400, 500}. |
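The reported sweep is a simple grid over three hyperparameters with the rest held fixed. A minimal sketch of enumerating that grid, using the values quoted above (the dictionary keys here are illustrative, not FlowMM's actual config names):

```python
from itertools import product

# Swept values quoted from the paper's experiment setup.
learning_rates = [0.001, 0.0003]
weight_decays = [0.003, 0.001, 0.0]
lambda_f_values = [100, 200, 300, 400, 500]

# Fixed settings from the same sweep description.
grid = [
    {"lr": lr, "weight_decay": wd, "grad_clip": 0.5, "lambda_l": 1, "lambda_f": lf}
    for lr, wd, lf in product(learning_rates, weight_decays, lambda_f_values)
]

print(len(grid))  # 2 * 3 * 5 = 30 configurations
```

Each dictionary in `grid` is one training configuration; the sweep therefore covers 30 runs before model-architecture choices (Tables 5 and 6) are considered.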