MDM: Molecular Diffusion Model for 3D Molecule Generation

Authors: Lei Huang, Hengtong Zhang, Tingyang Xu, Ka-Chun Wong

Venue: AAAI 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multiple benchmarks demonstrate that the proposed model significantly outperforms existing methods for both unconditional and conditional generation tasks. |
| Researcher Affiliation | Collaboration | Lei Huang (1, 2), Hengtong Zhang (2), Tingyang Xu (2), Ka-Chun Wong (1); affiliations: 1 = City University of Hong Kong, 2 = Tencent AI Lab |
| Pseudocode | Yes | Algorithm 1: Training Process. Input: the molecular geometry G(A, R), the VAE encoder ϕv, the global equivariant neural network ϕg, and the local neural network ϕl (a hedged sketch follows this table). |
| Open Source Code | Yes | The codes are available at https://github.com/tencent-ailab/MDM |
| Open Datasets | Yes | We adopt QM9 (Ramakrishnan et al. 2014) and GEOM-Drugs (Axelrod and Gomez-Bombarelli 2022) to evaluate the performance of MDM. |
| Dataset Splits | No | The paper mentions splitting the QM9 training set for classifier training and generative-model training, but does not provide general train/validation/test splits for the main experiments. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., libraries, frameworks). |
| Experiment Setup | No | The paper does not report hyperparameters (e.g., learning rate, batch size, number of epochs) or other detailed training configurations for MDM. |
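The Algorithm 1 excerpt quoted above only lists the training inputs (molecular geometry G(A, R), VAE encoder ϕv, global equivariant network ϕg, local network ϕl). For readers unfamiliar with how such a denoising-diffusion training step is typically structured, the following is a minimal, generic PyTorch sketch. It is not the authors' implementation: `ToyScoreNet`, the epsilon-prediction loss, the noise schedule, and all sizes are placeholder assumptions, and MDM's real networks are equivariant and split into global/local components; see the repository linked in the table for the actual code.

```python
# Minimal sketch of a DDPM-style training step on a molecule G(A, R).
# All names (ToyScoreNet, training_step, hidden_dim, ...) are illustrative
# assumptions, not MDM's code.
import torch
import torch.nn as nn


class ToyScoreNet(nn.Module):
    """Placeholder standing in for MDM's global (phi_g) / local (phi_l) networks."""

    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + 1, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the diffusion timestep by simple concatenation.
        t_feat = t.float().unsqueeze(-1)
        return self.net(torch.cat([x, t_feat], dim=-1))


def training_step(model: nn.Module,
                  coords: torch.Tensor,      # R: atom coordinates, shape (N, 3)
                  atom_feats: torch.Tensor,  # A: atom features,    shape (N, F)
                  betas: torch.Tensor,       # noise schedule,      shape (T,)
                  optimizer: torch.optim.Optimizer) -> float:
    """One denoising-diffusion training step on a single molecule."""
    x0 = torch.cat([atom_feats, coords], dim=-1)         # joint atom representation
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)

    t = torch.randint(0, betas.size(0), (1,))            # sample a timestep
    eps = torch.randn_like(x0)                           # Gaussian noise
    a_bar = alphas_bar[t]
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps  # forward (noising) process

    eps_pred = model(xt, t.expand(x0.size(0)))           # predict the added noise
    loss = ((eps_pred - eps) ** 2).mean()                # simple epsilon-matching loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    N, F, T = 9, 5, 1000                                 # toy molecule / schedule sizes
    model = ToyScoreNet(in_dim=F + 3)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    betas = torch.linspace(1e-4, 0.02, T)
    loss = training_step(model, torch.randn(N, 3), torch.randn(N, F), betas, opt)
    print(f"toy loss: {loss:.4f}")
```

The sketch deliberately collapses MDM's separate global and local score networks into a single module and omits the VAE encoder ϕv; it is only meant to make the shape of the training loop in Algorithm 1 concrete.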