LagNet: Deep Lagrangian Mechanics for Plug-and-Play Molecular Representation Learning

Authors: Chunyan Li, Junfeng Yao, Jinsong Su, Zhaoyang Liu, Xiangxiang Zeng, Chenxi Huang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that LagNet can learn 3D molecular structure features well, and outperforms previous state-of-the-art baselines related to molecular representation by a significant margin.
Researcher Affiliation | Academia | (1) School of Informatics, Xiamen University, Xiamen, China; (2) School of Informatics, Yunnan Normal University, Kunming, China; (3) School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China; (4) College of Computer Science and Electronic Engineering, Hunan University, Changsha, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | To evaluate the performance of LagNet with the existing molecular representation learning baselines, we have conducted extensive experiments using different datasets recommended by MoleculeNet (Wu et al. 2018), including a quantum mechanics dataset (QM9), two physiology datasets (Tox21, BBBP) for classification tasks, and two physical chemistry datasets (Lipophilicity and FreeSolv) for regression tasks. QM9 (Ramakrishnan et al. 2014)... Tox21 (Rossoshek 2014)... BBBP (Martins et al. 2012)... Lipophilicity (Wenlock and Tomkinson 2015)... FreeSolv (Mobley and Guthrie 2014)... ESOL (Delaney 2004). (See the dataset-loading sketch after this table.)
Dataset Splits | Yes | All datasets are split into training, validation and testing sets with a ratio of 0.8, 0.1, 0.1, respectively. (See the splitting sketch after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions the PyTorch framework and Adam as the optimizer, but does not specify their version numbers or any other software dependencies with version information.
Experiment Setup | Yes | The learning rate is set to 0.0001 with a decay rate of 0.00004 during training of LagNet. The dropout and batch size are set to 0.2 and 16, respectively. It should be noted that the time step size t in our experiment is set to 0.025. (See the training-configuration sketch after this table.)
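
The paper names the MoleculeNet datasets it evaluates on but not how they were obtained. Below is a minimal data-loading sketch, assuming the data are fetched through DeepChem's MoleculeNet loaders; the `deepchem` dependency and its default featurizer/splitter settings are assumptions, not details taken from the paper.

```python
# Sketch only: the paper does not state which toolkit was used to obtain
# the MoleculeNet datasets; DeepChem is assumed here for illustration.
import deepchem as dc

loaders = {
    "QM9": dc.molnet.load_qm9,             # quantum mechanics (regression)
    "Tox21": dc.molnet.load_tox21,         # physiology (classification)
    "BBBP": dc.molnet.load_bbbp,           # physiology (classification)
    "Lipophilicity": dc.molnet.load_lipo,  # physical chemistry (regression)
    "FreeSolv": dc.molnet.load_freesolv,   # physical chemistry (regression)
    "ESOL": dc.molnet.load_delaney,        # physical chemistry (regression)
}

for name, load in loaders.items():
    # Each loader returns (task names, (train, valid, test) datasets, transformers),
    # using DeepChem's default featurizer and splitter for that dataset.
    tasks, (train, valid, test), transformers = load()
    print(f"{name}: {len(tasks)} task(s)")
```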
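The splits are reported only as an 0.8/0.1/0.1 ratio; the splitting strategy (random vs. scaffold) and the seed are not stated. The following is a minimal sketch of a seeded random 80/10/10 split, where the random strategy and the seed value are assumptions.

```python
import random

def split_80_10_10(items, seed=0):
    """Split a list of molecule records into train/valid/test at 0.8/0.1/0.1.

    The random (rather than scaffold-based) strategy and the seed are
    assumptions; the paper only reports the ratio.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_valid = int(0.8 * n), int(0.1 * n)
    return (items[:n_train],
            items[n_train:n_train + n_valid],
            items[n_train + n_valid:])

# Usage with placeholder records (e.g. SMILES strings or graph objects):
train, valid, test = split_80_10_10(range(1000))
print(len(train), len(valid), len(test))  # 800 100 100
```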
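The paper names PyTorch and the Adam optimizer together with the hyperparameters quoted in the table. Below is a minimal training-configuration sketch under those settings; the `LagNet` module is a placeholder (no official code is released), and reading the "decay rate" as Adam's weight decay is an assumption.

```python
import torch

# Hyperparameters quoted from the paper.
LEARNING_RATE = 1e-4   # "learning rate is set to 0.0001"
DECAY_RATE = 4e-5      # "decay rate 0.00004" -- interpreted as Adam weight decay (assumption)
DROPOUT = 0.2
BATCH_SIZE = 16        # would be passed to the DataLoader
TIME_STEP = 0.025      # step size t of the Lagrangian dynamics integration

class LagNet(torch.nn.Module):
    """Placeholder standing in for the paper's model, which is not open-sourced."""

    def __init__(self, in_dim=128, hidden_dim=256, out_dim=1, dropout=DROPOUT):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(in_dim, hidden_dim),
            torch.nn.ReLU(),
            torch.nn.Dropout(dropout),
            torch.nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = LagNet()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=DECAY_RATE)
```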