EBMDock: Neural Probabilistic Protein-Protein Docking via a Differentiable Energy Model
Authors: Huaijin Wu, Wei Liu, Yatao Bian, Jiaxiang Wu, Nianzu Yang, Junchi Yan
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate that our approach achieves superior performance on two benchmark datasets DIPS and DB5.5. Furthermore, the results suggest EBMDock can serve as an orthogonal enhancement to existing methods. |
| Researcher Affiliation | Collaboration | Huaijin Wu1, Wei Liu1, Yatao Bian2, Jiaxiang Wu2,3, Nianzu Yang1, Junchi Yan1 (1Department of Computer Science and Engineering, Shanghai Jiao Tong University; 2Tencent AI Lab; 3XVERSE, China) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | https://github.com/wuhuaijin/EBMDock |
| Open Datasets | Yes | We conduct numerical experiments on two datasets: Database of Interacting Protein Structures (DIPS) (Townshend et al., 2019) and Docking Benchmarks 5.5 (DB5.5) (Vreven et al., 2015). |
| Dataset Splits | Yes | Table 2: Statistics of the two datasets (#train / #valid / #test). DIPS: 9,093 / 1,146 / 1,153; DB5.5: – / – / 253. |
| Hardware Specification | Yes | All deep learning-based methods run on a machine with i9-10920X CPU, RTX 3090 GPU, and 128G RAM. |
| Software Dependencies | No | The paper mentions using Adam as an optimizer but does not specify version numbers for any software dependencies like programming languages or libraries. |
| Experiment Setup | Yes | As for the training loss Ltotal, we set λ1 and λ2 to 0.1 and 50. When computing the contrastive divergence Le, we randomly sample three groups of (R_i^0, t_i^0); 50 steps of Langevin dynamics are then applied to each independently, and the average of the three energies serves as the second term in Eq. 13 to ensure training stability. We use Adam with learning rate 3e-4 and weight decay 1e-4 as the optimizer. More implementation details can be found in Appendix F. Table S10: Hyperparameter choices of EBMDock and the training phase settings. |
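
The experiment-setup excerpt describes a contrastive-divergence objective in which negative poses are produced by short-run Langevin dynamics on an energy model. The PyTorch sketch below illustrates that training step under stated assumptions: the `EnergyNet` module, the 6-D pose parameterization, the Langevin step size and noise scale, and the dummy data are hypothetical placeholders, not the paper's implementation. Only the quoted choices (three groups of negative samples, 50 Langevin steps, Adam with learning rate 3e-4 and weight decay 1e-4) come from the setup above; the full loss Ltotal with λ1 = 0.1 and λ2 = 50 weights additional terms that are not reproduced here.

```python
# Minimal sketch of a contrastive-divergence training step with Langevin-dynamics
# negative sampling, following the hyperparameters quoted in the Experiment Setup row.
# EnergyNet and the 6-D pose inputs are illustrative placeholders.
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """Toy energy model: maps a 6-D rigid-body pose (rotation + translation) to a scalar energy."""
    def __init__(self, dim=6, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pose):
        return self.mlp(pose).squeeze(-1)

def langevin_sample(energy_fn, pose, n_steps=50, step_size=1e-2, noise_scale=1e-2):
    """Refine randomly initialized poses by gradient-based Langevin dynamics (negative samples)."""
    pose = pose.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(energy_fn(pose).sum(), pose)[0]
        pose = pose - step_size * grad + noise_scale * torch.randn_like(pose)
        pose = pose.detach().requires_grad_(True)
    return pose.detach()

model = EnergyNet()
# Optimizer settings as reported: Adam, lr 3e-4, weight decay 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, weight_decay=1e-4)

# Dummy batch: one ground-truth pose per complex (placeholder data).
gt_pose = torch.randn(8, 6)

# Three groups of randomly sampled initial poses, each refined by 50 Langevin steps;
# the average of the three resulting energies forms the negative term.
neg_energies = []
for _ in range(3):
    init_pose = torch.randn_like(gt_pose)
    neg_pose = langevin_sample(model, init_pose, n_steps=50)
    neg_energies.append(model(neg_pose))
neg_energy = torch.stack(neg_energies).mean()

# Contrastive divergence: lower the energy of ground-truth poses, raise it for sampled poses.
loss_e = model(gt_pose).mean() - neg_energy

optimizer.zero_grad()
loss_e.backward()
optimizer.step()
```

Averaging the energies over three independently sampled Langevin chains, as the excerpt describes, reduces the variance of the negative term and helps stabilize training.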