Geometrically Aligned Transfer Encoder for Inductive Transfer in Regression Tasks

Authors: Sung Moon Ko, Sumin Lee, Dae-Woong Jeong, Woohyung Lim, Sehui Han

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To test our algorithms, we used a total of 14 datasets from three different open databases named PubChem (Kim et al., 2022), Ochem (Sushko et al., 2011), and CCCB (III, 2022)... For the evaluation, we compare the performance of GATE against that of single-task learning (STL), multi-task learning (MTL), knowledge distillation (KD), global structure preserving loss based KD (GSPKD) (Joshi et al., 2022), and transfer learning (retrain all or head network only).
Researcher Affiliation | Industry | Sung Moon Ko, Sumin Lee, Dae-Woong Jeong, Woohyung Lim, Sehui Han (LG AI Research), {sungmoon.ko, sumin.lee, dw.jeong, w.lim, hansse.han}@lgresearch.ai
Pseudocode | Yes | Algorithm 1 GATE
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the source code for the described methodology, nor does it provide a direct link to a code repository.
Open Datasets | Yes | To test our algorithms, we used a total of 14 datasets from three different open databases named PubChem (Kim et al., 2022), Ochem (Sushko et al., 2011), and CCCB (III, 2022), as described in Appendix Table 3.
Dataset Splits | Yes | The training and test split is an 80:20 ratio, and two sets of data were prepared: random split and scaffold-based split (Bemis & Murcko, 1996). Every experiment is tested in a fourfold cross-validation setting with uniform sampling for accurate evaluation. (A split/cross-validation sketch follows the table.)
Hardware Specification | Yes | Every experiment is tested in a fourfold cross-validation setting with uniform sampling for accurate evaluation, and a single NVIDIA A40 is used for the experiments.
Software Dependencies | No | The paper mentions using AdamW for optimization and DMPNN as a backbone architecture. However, it does not specify version numbers for these or any other software components (e.g., Python, PyTorch, TensorFlow) required for reproducibility.
Experiment Setup | Yes | We trained for 600 epochs with batch size 512 while using AdamW (Loshchilov & Hutter, 2017) for optimization with learning rate 5e-5. The hyperparameters α, β, γ, δ are 1, 1, 1, and 1, respectively. (A training-configuration sketch follows the table.)
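
The dataset-splits row above reports an 80:20 train/test ratio under both a random split and a scaffold-based split (Bemis & Murcko, 1996), evaluated with fourfold cross-validation. The sketch below is not the authors' code; it is a minimal illustration of those two split protocols, assuming SMILES strings as inputs, RDKit for Bemis-Murcko scaffolds, and scikit-learn for the random split and the folds.

```python
# Minimal sketch of the reported split protocols (illustrative, not the
# authors' released code). Assumes SMILES inputs, RDKit, and scikit-learn.
from collections import defaultdict

from rdkit.Chem.Scaffolds import MurckoScaffold
from sklearn.model_selection import KFold, train_test_split


def random_split(smiles, labels, seed=0):
    # Plain 80:20 random split.
    return train_test_split(smiles, labels, test_size=0.2, random_state=seed)


def scaffold_split(smiles, test_frac=0.2):
    # Group molecules by Bemis-Murcko scaffold, then assign whole scaffold
    # groups (largest first) to the training set until ~80% is filled, so
    # no scaffold appears in both train and test.
    groups = defaultdict(list)
    for i, smi in enumerate(smiles):
        groups[MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)].append(i)
    train_idx, test_idx = [], []
    for idx in sorted(groups.values(), key=len, reverse=True):
        if len(train_idx) + len(idx) <= (1 - test_frac) * len(smiles):
            train_idx.extend(idx)
        else:
            test_idx.extend(idx)
    return train_idx, test_idx


# Fourfold cross-validation over the training portion, as reported.
kfold = KFold(n_splits=4, shuffle=True, random_state=0)
```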
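The experiment-setup row reports 600 epochs, batch size 512, AdamW at learning rate 5e-5, and loss weights α = β = γ = δ = 1. The PyTorch sketch below only mirrors that reported configuration under stated assumptions: the model and the auxiliary loss terms are placeholders, not the paper's DMPNN backbone or GATE's geometric alignment losses.

```python
# PyTorch sketch of the reported training configuration (600 epochs, batch
# size 512, AdamW at lr 5e-5, loss weights alpha=beta=gamma=delta=1).
# The model and auxiliary losses are illustrative placeholders, not GATE.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Sequential(  # stand-in for the paper's DMPNN backbone
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1))
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

alpha = beta = gamma = delta = 1.0  # loss-term weights from the paper

# Dummy featurized molecules and regression targets, batched at 512.
loader = DataLoader(TensorDataset(torch.randn(2048, 128), torch.randn(2048, 1)),
                    batch_size=512, shuffle=True)

for epoch in range(600):
    for x, y in loader:
        task_loss = torch.nn.functional.mse_loss(model(x), y)
        # The paper combines four weighted objectives; the three auxiliary
        # terms are zero placeholders because their exact forms (geometric
        # alignment losses) are not reproduced in this sketch.
        aux1 = aux2 = aux3 = torch.zeros(())
        loss = alpha * task_loss + beta * aux1 + gamma * aux2 + delta * aux3
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```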