Uncertainty-Aware Yield Prediction with Multimodal Molecular Features

Authors: Jiayuan Chen, Kehan Guo, Zhen Liu, Olexandr Isayev, Xiangliang Zhang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three datasets, including two high-throughput experiment (HTE) datasets and one chemist-constructed amide coupling reaction dataset, demonstrate that UAM outperforms the state-of-the-art methods.
Researcher Affiliation | Academia | 1 The Ohio State University; 2 Department of Computer Science and Engineering, University of Notre Dame; 3 Department of Chemistry, Carnegie Mellon University
Pseudocode | No | The paper describes the model architecture and its components with text and figures, but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | The code and used datasets are available at https://github.com/jychen229/Multimodal-reaction-yield-prediction.
Open Datasets | Yes | We used the Buchwald-Hartwig dataset (Ahneman et al. 2018) and the Suzuki-Miyaura dataset (Perera et al. 2018)... Amide coupling reaction (ACR) dataset. This is a recently launched large literature dataset, containing 41,239 amide coupling reactions extracted from Reaxys (Reaxys 2020). It is considerably more complex than the two HTE datasets. ... Available at https://github.com/isayevlab/amide_reaction_data
Dataset Splits | Yes | We adopted a train/valid/test split of 6/2/2 and employed early stopping to avoid overfitting. (A minimal sketch of this split and early-stopping loop follows the table.)
Hardware Specification | Yes | All experiments are executed on a single NVIDIA RTX 3090 GPU.
Software Dependencies | No | Our model is implemented in PyTorch and optimized with the Adam optimizer and a cosine learning rate scheduler with warm-up. No specific library versions are given.
Experiment Setup | Yes | Our model is implemented in PyTorch and optimized with the Adam optimizer and a cosine learning rate scheduler with warm-up. ... The expert assignment in the MoE is configured with t=1 and k=6. ... In the experiments on the ACR dataset, the late fusion module is designed with feature concatenation, and the MoE is structured with two stacked layers. (A hedged sketch of the training setup and MoE routing follows the table.)
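
The split and training details quoted above (6/2/2 train/valid/test split, early stopping, Adam with a cosine learning-rate schedule and warm-up) can be pieced together into a minimal PyTorch sketch. This is an illustration of the reported configuration, not the authors' code: the batch size, learning rate, warm-up length, patience, checkpoint path, and the MSE objective are assumptions, and `model`/`dataset` stand in for the UAM model and the reaction datasets.

```python
# Minimal sketch of the reported setup: 6/2/2 split, Adam + cosine LR with
# warm-up, early stopping on validation loss. Hyperparameters are assumed.
import math
import torch
from torch.utils.data import DataLoader, random_split

def make_splits(dataset, seed=0):
    """Split a reaction dataset into train/valid/test = 6/2/2."""
    n = len(dataset)
    n_train, n_valid = int(0.6 * n), int(0.2 * n)
    n_test = n - n_train - n_valid
    gen = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_valid, n_test], generator=gen)

def cosine_with_warmup(optimizer, warmup_steps, total_steps):
    """Linear warm-up followed by cosine decay (one common formulation)."""
    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

def train(model, train_set, valid_set, epochs=100, patience=10, lr=1e-3):
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
    valid_loader = DataLoader(valid_set, batch_size=64)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    scheduler = cosine_with_warmup(optimizer, warmup_steps=500,
                                   total_steps=epochs * len(train_loader))
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
            loss.backward()
            optimizer.step()
            scheduler.step()
        model.eval()
        with torch.no_grad():
            val = sum(torch.nn.functional.mse_loss(model(x).squeeze(-1), y).item()
                      for x, y in valid_loader) / len(valid_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0
            torch.save(model.state_dict(), "best.pt")  # keep best checkpoint
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # early stopping
                break
```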
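
For the MoE expert assignment (temperature t=1, k=6 experts selected per input) and the late-fusion-by-concatenation design reported for the ACR experiments, the sketch below shows one standard softmax top-k gating scheme. The number of experts, the hidden sizes, and wiring the two stacked layers with `nn.Sequential` are illustrative assumptions; the paper's exact gating and expert architecture may differ.

```python
# Hedged sketch of top-k MoE routing (softmax gate, temperature t, k experts),
# followed by late fusion via feature concatenation and two stacked MoE layers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim, num_experts=8, k=6, temperature=1.0):
        super().__init__()
        self.k = k
        self.temperature = temperature
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)]
        )

    def forward(self, x):                        # x: (batch, dim)
        logits = self.gate(x) / self.temperature
        topk_val, topk_idx = logits.topk(self.k, dim=-1)
        weights = F.softmax(topk_val, dim=-1)    # renormalize over selected experts
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, dim)
        picked = expert_out.gather(
            1, topk_idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))      # (B, k, dim)
        return (weights.unsqueeze(-1) * picked).sum(dim=1)             # (B, dim)

# Late fusion by concatenating modality features, then two stacked MoE layers,
# as reported for the ACR experiments (feature dimensions are assumptions).
fused = torch.cat([torch.randn(4, 128), torch.randn(4, 128)], dim=-1)  # (4, 256)
moe = nn.Sequential(TopKMoE(256), TopKMoE(256))
out = moe(fused)
```

With t=1 the gate reduces to a plain softmax over the top-k logits; lowering the temperature would sharpen the expert assignment, raising it would flatten it.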