Conditional Synthesis of 3D Molecules with Time Correction Sampler

Authors: Hojung Jung, Youngrok Park, Laura Schmid, Jaehyeong Jo, Dongkyu Lee, Bongsang Kim, Se-Young Yun, Jinwoo Shin

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we present comprehensive experiments to evaluate the performance of TACS and demonstrate its effectiveness in generating 3D molecular structures with specific properties while maintaining stability and validity. In Section 5.1, we present a synthetic experiment with H₃⁺ molecules, where the ground-state energies are computed using the variational quantum eigensolver (VQE). In Section 5.2, we assess our method on QM9, a standard dataset in quantum chemistry that includes molecular properties and atom coordinates. We compare our approach against several state-of-the-art baselines and provide a detailed analysis of the results."
Researcher Affiliation | Collaboration | KAIST AI, LG Electronics
Pseudocode | Yes | Algorithm 1: Time-Aware Conditional Synthesis (TACS)
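The paper's Algorithm 1 is not reproduced in this report. Purely as an illustrative sketch, the loop below shows one plausible shape of a time-corrected sampler, assuming a conditional denoiser and a learned time predictor that re-estimates the sample's effective diffusion time at each step; `denoiser`, `time_predictor`, and the clamping logic are hypothetical stand-ins, not the authors' method.

```python
import torch

def tacs_like_sample(denoiser, time_predictor, x, num_steps=100):
    """Hypothetical time-corrected sampling loop (illustration only)."""
    # Nominal reverse-time schedule from t=1 (pure noise) to t=0 (data).
    ts = torch.linspace(1.0, 0.0, num_steps + 1)
    t = ts[0]
    for i in range(num_steps):
        x = denoiser(x, t)           # one conditional reverse-diffusion step
        t_hat = time_predictor(x)    # predicted effective diffusion time
        # Follow the predicted time instead of the nominal schedule, clamped
        # so the sampler still makes progress toward t=0.
        t = torch.clamp(t_hat, min=float(ts[i + 1]), max=float(t))
    return x
```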
Open Source Code | No | Justification: "We provide experimental details and will provide code after it is polished."
Open Datasets | Yes | "Dataset: We evaluate our method on the QM9 dataset [45], which contains about 134k molecules with up to 9 heavy atoms (C, N, O, F), each labeled with 12 quantum chemical properties. Following previous works [1, 23], we test on 6 types of quantum chemical properties and split the dataset into 100k/18k/13k molecules for training, validation, and test."
Dataset Splits | Yes | "Following previous works [1, 23], we test on 6 types of quantum chemical properties and split the dataset into 100k/18k/13k molecules for training, validation, and test."
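A minimal sketch of how a 100k/18k/13k split could be materialized over QM9 indices. The paper follows the fixed split of prior work [1, 23]; the random permutation and the filtered dataset size used here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_molecules = 130_831              # assumed QM9 size after standard filtering
perm = rng.permutation(n_molecules)
train_idx = perm[:100_000]                 # 100k training molecules
valid_idx = perm[100_000:118_000]          # 18k validation molecules
test_idx  = perm[118_000:]                 # remaining ~13k test molecules
```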
Hardware Specification | Yes | "Finally, we train the time predictor within 24 hours on 4 NVIDIA A6000 GPUs."
Software Dependencies | No | The paper mentions RDKit [32] for evaluation metrics but does not provide version numbers for it or for any other software dependencies, such as programming languages, machine learning frameworks, or numerical libraries.
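Since the report flags RDKit without a version pin, the snippet below illustrates the kind of parse-based validity check RDKit is typically used for in this literature. It is a sketch, not the paper's metric code: the paper evaluates generated 3D structures, which must first be converted to bonds/SMILES before a check like this applies, and the version pin is an assumption.

```python
# Assumed dependency, e.g. rdkit>=2023.9 (version not stated in the paper).
from rdkit import Chem

def is_valid(smiles: str) -> bool:
    """True if RDKit can parse and sanitize the molecule."""
    return Chem.MolFromSmiles(smiles) is not None

generated = ["CCO", "C1=CC=CC=C1", "not_a_smiles"]
validity = sum(is_valid(s) for s in generated) / len(generated)
print(f"validity = {validity:.2f}")  # -> 0.67
```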
Experiment Setup | Yes | "The diffusion model is trained for 2000 epochs with a batch size of 64, a learning rate of 0.0001, the Adam optimizer, and an exponential moving average (EMA) with a decay rate of 0.9999."
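A minimal PyTorch sketch of the reported optimizer and EMA configuration. The network is a placeholder; only the hyperparameters (Adam, lr 1e-4, EMA decay 0.9999, batch size 64, 2000 epochs) come from the paper.

```python
import torch

model = torch.nn.Linear(16, 16)   # placeholder for the diffusion network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Exponential moving average of the weights, decay 0.9999 as reported.
ema_state = {k: v.detach().clone() for k, v in model.state_dict().items()}

def update_ema(model, ema_state, decay=0.9999):
    # ema <- decay * ema + (1 - decay) * current weights, applied after each
    # optimizer step (2000 epochs, batch size 64 in the reported setup).
    with torch.no_grad():
        for k, v in model.state_dict().items():
            ema_state[k].mul_(decay).add_(v, alpha=1.0 - decay)
```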