Equivariant Energy-Guided SDE for Inverse Molecular Design

Authors: Fan Bao, Min Zhao, Zhongkai Hao, Peiyao Li, Chongxuan Li, Jun Zhu

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, under the guidance of designed energy functions, EEGSDE significantly improves the baseline on QM9 for inverse molecular design targeted at quantum properties and molecular structures. Furthermore, EEGSDE can generate molecules with multiple target properties by linearly combining the corresponding energy functions.
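The linear combination of energy functions described above can be sketched as follows; the function names and the `(x, t)` signature are illustrative assumptions, not the paper's actual interface.

```python
def combined_energy(energies, weights):
    """Build a single guidance energy as a weighted sum of per-property energies.

    energies: list of callables e(x, t) -> scalar energy (hypothetical signature)
    weights:  list of floats, one per property
    """
    def energy(x, t):
        # Guidance toward multiple targets: sum of weighted per-property energies.
        return sum(w * e(x, t) for w, e in zip(weights, energies))
    return energy
```

For example, two single-property energies can be traded off against each other by adjusting their weights before sampling.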
Researcher Affiliation | Academia | 1 Dept. of Comp. Sci. & Tech., Institute for AI, Tsinghua-Huawei Joint Center for AI, BNRist Center, State Key Lab for Intell. Tech. & Sys., Tsinghua University, Beijing, China; 2 Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; 3 Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China
Pseudocode | Yes | We present the Euler-Maruyama method as an example in Algorithm 1 in Appendix B.
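The Euler-Maruyama method referenced here is a standard SDE integrator. A minimal generic sketch is below; it is not the paper's Algorithm 1 (which samples a reverse-time, energy-guided SDE), and the function names are assumptions for illustration.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng=None):
    """Integrate dx = drift(x, t) dt + diffusion(t) dW from t0 to t1."""
    rng = np.random.default_rng() if rng is None else rng
    dt = (t1 - t0) / n_steps
    x = np.asarray(x0, dtype=float)
    t = t0
    for _ in range(n_steps):
        # One step: deterministic drift plus Brownian increment scaled by sqrt(dt).
        dw = rng.normal(scale=np.sqrt(abs(dt)), size=x.shape)
        x = x + drift(x, t) * dt + diffusion(t) * dw
        t += dt
    return x
```

With zero diffusion this reduces to the explicit Euler method for an ODE, which is a convenient sanity check.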
Open Source Code | Yes | Our code is included in the supplementary material.
Open Datasets | Yes | We evaluate on QM9 (Ramakrishnan et al., 2014), which contains quantum properties and coordinates of 130k molecules with up to nine heavy atoms from (C, N, O, F).
Dataset Splits | Yes | Following EDM, we split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples respectively.
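A fixed-size split like the 100K/18K/13K one above can be sketched as follows; the function name and seed are assumptions, and the actual split is defined by EDM's released code rather than this sketch.

```python
import numpy as np

def split_indices(n_total, n_train, n_val, seed=0):
    """Shuffle indices 0..n_total-1 and cut them into train/val/test arrays.

    The test set receives whatever remains after train and val are taken.
    """
    idx = np.random.default_rng(seed).permutation(n_total)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test
```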
Hardware Specification | No | The paper mentions "the High Performance Computing Center, Tsinghua University" in the acknowledgments but does not specify any particular hardware (e.g., CPU/GPU models, memory) used for the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | For the noise prediction network, we use the same setting as EDM (Hoogeboom et al., 2022) for a fair comparison: the model is trained for 2000 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999. The EGNN used in the energy function has 192 hidden features and 7 layers; we train it for 2000 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an EMA with a rate of 0.9999.
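The EMA of model weights mentioned in the setup keeps a shadow copy of the parameters that is nudged toward the current weights after each optimizer step. A minimal sketch over a flat list of scalars (real implementations operate on network tensors) is:

```python
def ema_update(shadow, params, rate=0.9999):
    """One EMA step: shadow <- rate * shadow + (1 - rate) * params.

    With rate = 0.9999, the shadow weights change very slowly, smoothing
    out noise from individual training steps.
    """
    return [rate * s + (1.0 - rate) * p for s, p in zip(shadow, params)]
```

At evaluation time the shadow (EMA) weights, not the raw training weights, are typically used.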