Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
GLPocket: A Multi-Scale Representation Learning Approach for Protein Binding Site Prediction
Authors: Peiying Li, Yongchang Liu, Shikui Tu, Lei Xu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that GLPocket improves by 0.5%–4% on DCA Top-n prediction compared with previous state-of-the-art methods on four datasets. Our code has been released in https://github.com/CMACH508/GLPocket. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; (2) Guangdong Institute of Intelligence Science and Technology, Zhuhai, Guangdong, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code has been released in https://github.com/CMACH508/GLPocket. |
| Open Datasets | Yes | We use scPDB [Desaphy et al., 2015] as training set, COACH420, HOLO4k, PDBbind [Wang et al., 2005], SC6K as testing sets. |
| Dataset Splits | Yes | We divide the dataset into ten parts and use one of them as validation dataset. |
| Hardware Specification | Yes | GLPocket is implemented in PyTorch and trained for 30 epochs with a batch size of 12 on 3 A100 GPUs. |
| Software Dependencies | No | GLPocket is implemented in PyTorch and trained for 30 epochs with a batch size of 12 on 3 A100 GPUs. SGD optimizer was applied to train the model. The learning rate is set to 0.001 and remains the same. The binary cross entropy loss is employed to optimize our network. |
| Experiment Setup | Yes | GLPocket is implemented in PyTorch and trained for 30 epochs with a batch size of 12 on 3 A100 GPUs. SGD optimizer was applied to train the model. The learning rate is set to 0.001 and remains the same. The binary cross entropy loss is employed to optimize our network. |
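The quoted experiment setup (PyTorch, SGD, constant learning rate 0.001, binary cross entropy loss, 30 epochs, batch size 12) could be sketched roughly as below. This is a minimal illustration only: the model and the dummy data are placeholders, not the paper's GLPocket architecture or scPDB inputs (the actual implementation is at https://github.com/CMACH508/GLPocket).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network standing in for GLPocket; only the optimizer,
# loss, and schedule settings below follow the paper's quoted setup.
model = nn.Sequential(nn.Linear(64, 1))

# Dummy feature/label tensors just to make the sketch runnable.
xs = torch.randn(120, 64)
ys = torch.randint(0, 2, (120, 1)).float()
loader = DataLoader(TensorDataset(xs, ys), batch_size=12)  # batch size 12

# SGD with a constant learning rate of 0.001, as quoted.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

# Binary cross entropy; the with-logits variant is assumed here.
criterion = nn.BCEWithLogitsLoss()

for epoch in range(30):  # trained for 30 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Note the row is still scored "No" for Software Dependencies: the paper names PyTorch but gives no version numbers for it or any other library.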