A Semi-supervised Molecular Learning Framework for Activity Cliff Estimation

Authors: Fang Wu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on 30 activity cliff datasets demonstrate that SemiMol significantly enhances graph-based ML architectures and surpasses state-of-the-art pretraining and SSL baselines.
Researcher Affiliation | Academia | Fang Wu, Stanford University & Westlake University, fangwu97@stanford.edu
Pseudocode | Yes | Algorithm 1: SemiMol Algorithm
Open Source Code | No | The paper mentions that the MoleculeACE benchmarking platform is 'available on GitHub https://github.com/molML/MoleculeACE', but it neither states that the code for the proposed method, SemiMol, is open source nor provides a link to it.
Open Datasets | Yes | All evaluations in this section are performed on datasets from MoleculeACE (Activity Cliff Estimation), an open-access benchmarking platform available on GitHub at https://github.com/molML/MoleculeACE.
Dataset Splits | No | Typically, D is divided into training and validation sets, denoted as D_train = {(x_i^train, y_i^train)}_{i=1}^{N1} and D_val = {(x_i^val, y_i^val)}_{i=1}^{N2}, respectively. More experimental details and dataset statistics are elaborated in the Appendix.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper refers to various models and architectures (e.g., GIN, GMT, GNNs) but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper states that 'More experimental details and dataset statistics are elaborated in the Appendix' and does not provide specific hyperparameter values or detailed training configurations in the main text.
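The Dataset Splits row quotes the paper's notation for partitioning a dataset D into D_train (size N1) and D_val (size N2) without specifying how the split is performed. A minimal sketch of a generic random split matching that notation is shown below; the function name, split ratio, and seed are illustrative assumptions, not the paper's actual protocol (which is deferred to its Appendix).

```python
import random

def train_val_split(samples, labels, val_fraction=0.2, seed=0):
    """Randomly partition D = {(x_i, y_i)} into D_train and D_val.

    Mirrors the paper's notation only; the actual split ratio and
    procedure used by SemiMol are not given in the main text.
    """
    indices = list(range(len(samples)))
    random.Random(seed).shuffle(indices)          # reproducible shuffle
    n_val = int(len(indices) * val_fraction)      # N2 = |D_val|
    val_idx, train_idx = indices[:n_val], indices[n_val:]
    d_train = [(samples[i], labels[i]) for i in train_idx]
    d_val = [(samples[i], labels[i]) for i in val_idx]
    return d_train, d_val

# Toy usage: 10 molecules with binary activity labels.
xs = list(range(10))
ys = [x % 2 for x in xs]
d_train, d_val = train_val_split(xs, ys)
print(len(d_train), len(d_val))  # 8 2
```

Because neither the split ratio nor a fixed seed is reported, this kind of unstated choice is precisely why the assessment marks Dataset Splits as not reproducible.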