Cross-View Contrastive Fusion for Enhanced Molecular Property Prediction

Authors: Yan Zheng, Song Wu, Junyu Lin, Yazhou Ren, Jing He, Xiaorong Pu, Lifang He

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on multiple benchmark datasets demonstrate the superiority of our MolFuse.
Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, University of Electronic Science and Technology of China; 2 Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China; 3 Nuffield Department of Clinical Neurosciences, University of Oxford; 4 Department of Computer Science and Engineering, Lehigh University
Pseudocode | No | The paper describes its proposed method using mathematical formulations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | All simulations are implemented using PyTorch 1.7.1, and the original code of this method will be provided later.
Open Datasets | Yes | MoleculeNet [Wu et al., 2017] is a large-scale benchmark for molecular machine learning. It curates multiple public datasets and establishes metrics for evaluation. We use seven of these datasets (BBBP, BACE, Tox21, ClinTox, SIDER, ToxCast, HIV) for this experiment. The statistics of these datasets are summarized in Table 2. (See the dataset-loading sketch after this table.)
Dataset Splits | Yes | The dataset is usually split into training set, validation set and test set for benchmarking using random splitting... We used a splitting method called scaffold splitting... In this experiment, we used the dataset splitting method recommended in MoleculeNet. (See the scaffold-splitting sketch after this table.)
Hardware Specification | No | The paper states 'All simulations are implemented using PyTorch 1.7.1,' but provides no specific details about the hardware (e.g., CPU or GPU models, memory) used for these simulations.
Software Dependencies | Yes | All simulations are implemented using PyTorch 1.7.1, and the original code of this method will be provided later.
Experiment Setup | Yes | Each model was trained for up to 100 epochs, with the training procedure halting if there was no increase in the validation ROC-AUC over 15 consecutive epochs. A 1-layer BiGRU is employed as the backbone to extract sequence features, and two 5-layer graph isomorphism networks with edge features serve as the foundation for the graph-view representation encoder. All modules undergo training using the Adam optimizer. [...] Cross-entropy loss was implemented as the classification loss. [...] We investigate two hyperparameters that balance the loss components, specifically: L = L_sup + α·L_dua + β·L_con. Additionally, we examine the sensitivity of the temperature coefficient τ in Eqs. (15) and (16). (See the training-loop sketch after this table.)
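
The seven MoleculeNet datasets named above are all publicly available, which is what supports the "Yes" for Open Datasets. The paper does not say which tooling was used to fetch them; the sketch below is an assumption that uses DeepChem's MolNet loaders, which also apply the scaffold splitter that MoleculeNet recommends.

```python
# Sketch: fetching the seven MoleculeNet benchmarks with scaffold splits.
# DeepChem is not named in the paper; it is used here purely as one
# plausible way to reproduce the reported data setup.
import deepchem as dc

loaders = {
    "BBBP": dc.molnet.load_bbbp,
    "BACE": dc.molnet.load_bace_classification,
    "Tox21": dc.molnet.load_tox21,
    "ClinTox": dc.molnet.load_clintox,
    "SIDER": dc.molnet.load_sider,
    "ToxCast": dc.molnet.load_toxcast,
    "HIV": dc.molnet.load_hiv,
}

for name, load in loaders.items():
    # splitter="scaffold" gives the MoleculeNet-recommended split
    tasks, (train, valid, test), transformers = load(
        featurizer="ECFP", splitter="scaffold"
    )
    print(f"{name}: {len(tasks)} task(s), "
          f"{len(train)}/{len(valid)}/{len(test)} train/valid/test molecules")
```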
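
Scaffold splitting groups molecules by their Bemis-Murcko scaffold so that structurally similar molecules never straddle the train/validation/test boundary, giving a harder, more realistic generalization test than random splitting. The paper only names the protocol; the sketch below is a minimal reconstruction using RDKit, where the 80/10/10 fractions and the helper name are illustrative assumptions.

```python
# Minimal scaffold-split sketch: bucket molecules by Murcko scaffold,
# then fill train/valid/test so no scaffold is shared across splits.
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, frac_train=0.8, frac_valid=0.1):
    # Bucket molecule indices by their Murcko scaffold SMILES.
    buckets = defaultdict(list)
    for idx, smi in enumerate(smiles_list):
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(
            smiles=smi, includeChirality=False
        )
        buckets[scaffold].append(idx)

    # Largest scaffold groups first, so the biggest groups land in train.
    groups = sorted(buckets.values(), key=len, reverse=True)

    n = len(smiles_list)
    train, valid, test = [], [], []
    for group in groups:
        if len(train) + len(group) <= frac_train * n:
            train.extend(group)
        elif len(valid) + len(group) <= frac_valid * n:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test
```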
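
The quoted experiment setup pins down the optimizer, the classification loss, the loss combination, the epoch budget, and the early-stopping rule, but the model code itself is unreleased. The sketch below wires those reported choices into a generic PyTorch training loop; `model`, `train_loader`, `valid_loader`, `batch.y`, `evaluate_roc_auc`, the learning rate, and the α/β values are hypothetical placeholders, not the authors' implementation.

```python
# Sketch of the reported protocol: Adam, cross-entropy as the
# classification loss, L = L_sup + alpha * L_dua + beta * L_con,
# up to 100 epochs, early stopping after 15 epochs without a
# validation ROC-AUC improvement.
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption
classification_loss = torch.nn.CrossEntropyLoss()
alpha, beta = 0.1, 0.1  # balancing hyperparameters studied in the paper

best_auc, patience, epochs_since_best = 0.0, 15, 0
for epoch in range(100):
    model.train()
    for batch in train_loader:
        # Hypothetical interface: the model returns class logits plus the
        # dual-view and contrastive loss terms for this batch.
        logits, dua_term, con_term = model(batch)
        loss = (classification_loss(logits, batch.y)
                + alpha * dua_term + beta * con_term)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    val_auc = evaluate_roc_auc(model, valid_loader)  # placeholder helper
    if val_auc > best_auc:
        best_auc, epochs_since_best = val_auc, 0
    else:
        epochs_since_best += 1
        if epochs_since_best >= patience:
            break  # no ROC-AUC gain for 15 consecutive epochs
```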