Re-Dock: Towards Flexible and Realistic Molecular Docking with Diffusion Bridge
Authors: Yufei Huang, Odin Zhang, Lirong Wu, Cheng Tan, Haitao Lin, Zhangyang Gao, Siyuan Li, Stan Z. Li
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on designed benchmark datasets including apo-dock and crossdock demonstrate our model's superior effectiveness and efficiency over current methods. |
| Researcher Affiliation | Academia | (1) Zhejiang University, Hangzhou; (2) AI Lab, Research Center for Industries of the Future, Westlake University; (3) University of Washington, Seattle. |
| Pseudocode | Yes | Algorithm 1 Learning diffusion generative models |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | We conduct training, flexible redocking, and sidechain pose prediction evaluation on the PDBBind v2020 dataset (Liu et al., 2017) with the time-based dataset split following previous works (Corso et al., 2023; Pei et al., 2023). |
| Dataset Splits | Yes | We run inference with 20 denoising steps on 500 validation complexes every 10 epochs and use the set of weights with the highest percentage of RMSDs less than 2 Å as the final score model. (See the checkpoint-selection sketch after this table.) |
| Hardware Specification | Yes | All baselines and our approach are implemented using the PyTorch 1.6.0 library with an Intel(R) Xeon(R) Gold 6240R @ 2.40GHz CPU and an NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions the 'PyTorch 1.6.0' library as a software dependency, but it does not list multiple key software components with their versions or name a self-contained solver with a specific version number as required. |
| Experiment Setup | Yes | We use Adam as optimizer with learning rate = 0.001 and an exponential moving average of the weights during training, which we then use in inference. The batch size is 64. We run inference with 20 denoising steps on 500 validation complexes every 10 epochs... We train our score model for 600 epochs (around 7 days). (A hedged training-setup sketch follows this table.) |
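
The training recipe quoted in the Experiment Setup row (Adam at learning rate 0.001, batch size 64, and an exponential moving average of the weights used at inference) can be illustrated with a minimal PyTorch sketch. Everything below is a hedged illustration, not the authors' code: `ScoreModel` is a placeholder for Re-Dock's actual diffusion-bridge architecture, and the EMA decay of 0.999 is an assumed value that the paper does not report.

```python
import copy
import torch
import torch.nn as nn

class ScoreModel(nn.Module):
    """Placeholder score network; Re-Dock's real model is a geometric diffusion bridge."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

model = ScoreModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr = 0.001, as in the paper

# EMA copy of the weights, updated after every optimizer step and
# used for inference in place of the raw training weights.
ema_model = copy.deepcopy(model)
ema_decay = 0.999  # assumed value; the paper does not state the decay

def ema_update(ema, online, decay):
    with torch.no_grad():
        for p_ema, p in zip(ema.parameters(), online.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

# One illustrative step on random data (batch size 64, as in the paper).
x = torch.randn(64, 64)
loss = model(x).pow(2).mean()  # stand-in for the diffusion score-matching loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
ema_update(ema_model, model, ema_decay)
```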
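
The checkpoint-selection rule quoted in the Dataset Splits row (every 10 epochs, run 20-step denoising inference on 500 validation complexes and keep the weights with the highest fraction of RMSDs below 2 Å) reduces to a simple success-rate computation. The sketch below uses synthetic RMSD values; `rmsd_success_rate` and the toy `validation_runs` dictionary are illustrative names, not the paper's code.

```python
import numpy as np

def rmsd_success_rate(rmsds, threshold=2.0):
    """Fraction of predicted poses with RMSD below the threshold (in angstroms)."""
    rmsds = np.asarray(rmsds)
    return float((rmsds < threshold).mean())

# Toy stand-in: RMSDs for 500 validation complexes evaluated at epochs 10, 20, 30.
rng = np.random.default_rng(0)
validation_runs = {epoch: rng.gamma(2.0, 1.5, size=500) for epoch in (10, 20, 30)}

# Select the checkpoint (epoch) whose validation success rate is highest.
best_epoch = max(validation_runs, key=lambda e: rmsd_success_rate(validation_runs[e]))
print(best_epoch, rmsd_success_rate(validation_runs[best_epoch]))
```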