FABind: Fast and Accurate Protein-Ligand Binding
Authors: Qizhi Pei, Kaiyuan Gao, Lijun Wu, Jinhua Zhu, Yingce Xia, Shufang Xie, Tao Qin, Kun He, Tie-Yan Liu, Rui Yan
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on benchmark datasets, our proposed FABind demonstrates strong advantages in terms of effectiveness and efficiency compared to existing methods. |
| Researcher Affiliation | Collaboration | (1) Gaoling School of Artificial Intelligence, Renmin University of China; (2) School of Computer Science and Technology, Huazhong University of Science and Technology; (3) Microsoft Research AI4Science; (4) School of Information Science and Technology, University of Science and Technology of China |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code is available at GitHub: https://github.com/QizhiPei/FABind |
| Open Datasets | Yes | We conduct experiments on the PDBbind v2020 dataset [29], which is from Protein Data Bank (PDB) [5] and contains 19,443 protein-ligand complex structures. |
| Dataset Splits | Yes | After the filtration, we used 17,299 complexes that were recorded before 2019 for training purposes, and an additional 968 complexes from the same period for validation. For our testing phase, we utilized 363 complexes recorded after 2019. (See the split sketch after the table.) |
| Hardware Specification | Yes | FABind models are trained for approximately 500 epochs using the AdamW [30] optimizer on 8 NVIDIA V100 16GB GPUs with batch size set to 3 on each GPU. |
| Software Dependencies | No | The paper mentions software such as TorchDrug [52] and RDKit [25] but does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | For pocket prediction, we use M = 1 FABind layer with 128 hidden dimensions. The pocket radius is set to 20Å. For docking, we use N = 4 FABind layers with 512 hidden dimensions. Each layer comprises three-step message passing modules (refer to Fig. 1). We perform a total of k = 8 iterations for structure refinement during the docking process. ... FABind models are trained for approximately 500 epochs using the AdamW [30] optimizer on 8 NVIDIA V100 16GB GPUs with batch size set to 3 on each GPU. The learning rate is 5e-5, which is scheduled to warm up in the first 15 epochs and then decay linearly. (Configuration and scheduler sketches follow the table.) |
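
The split quoted in the Dataset Splits row is temporal rather than random: complexes deposited before 2019 form the training and validation pools, and complexes deposited afterwards form the test set. Below is a minimal sketch of that filtering; the `year` field and the random draw of the validation set are our assumptions, since the paper only reports the resulting counts.

```python
import random

def temporal_split(complexes, cutoff_year=2019, n_val=968, seed=0):
    """Time-based split as described in the paper: pre-2019 complexes are
    divided into train/validation, and post-2019 complexes become the test set.

    Assumptions (not specified in the paper): each record exposes a `year`
    field, and validation complexes are drawn at random from the pre-2019
    pool. The paper only reports the final counts: 17,299 / 968 / 363.
    """
    pre = [c for c in complexes if c["year"] < cutoff_year]
    test = [c for c in complexes if c["year"] >= cutoff_year]
    rng = random.Random(seed)
    rng.shuffle(pre)
    val, train = pre[:n_val], pre[n_val:]
    return train, val, test
```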
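
For reference, the hyperparameters quoted in the Hardware Specification and Experiment Setup rows can be gathered into a single configuration object. This is an illustrative sketch only; all field names are ours, not the FABind repository's.

```python
from dataclasses import dataclass

@dataclass
class FABindConfig:
    """Hyperparameters as quoted from the paper (field names are hypothetical)."""
    # Pocket prediction
    pocket_layers: int = 1              # M = 1 FABind layer
    pocket_hidden_dim: int = 128
    pocket_radius_angstrom: float = 20.0
    # Docking
    docking_layers: int = 4             # N = 4 FABind layers, three-step message passing each
    docking_hidden_dim: int = 512
    refine_iterations: int = 8          # k = 8 structure-refinement iterations
    # Optimization
    epochs: int = 500                   # "approximately 500 epochs"
    batch_size_per_gpu: int = 3         # trained on 8x NVIDIA V100 16GB
    learning_rate: float = 5e-5
    warmup_epochs: int = 15             # linear decay afterwards
```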
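
The learning-rate schedule itself (linear warmup for 15 epochs, then linear decay) maps onto a standard PyTorch `LambdaLR`. Here is a sketch under the assumption that the decay runs linearly to zero at the final epoch, which the paper does not state explicitly.

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def make_optimizer_and_scheduler(model, total_epochs=500, warmup_epochs=15, peak_lr=5e-5):
    """AdamW with linear warmup then linear decay, per the paper's description.

    Assumption: the decay reaches zero at the final epoch; the paper only says
    the rate "warms up in the first 15 epochs and then decays linearly".
    Call `scheduler.step()` once per epoch.
    """
    optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr)

    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs  # ramp up to the peak rate
        # decay linearly from the peak rate toward zero at the final epoch
        return max(0.0, (total_epochs - epoch) / (total_epochs - warmup_epochs))

    return optimizer, LambdaLR(optimizer, lr_lambda)
```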