Co-evolution Transformer for Protein Contact Prediction
Authors: He Zhang, Fusong Ju, Jianwei Zhu, Liang He, Bin Shao, Nanning Zheng, Tie-Yan Liu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two rigorous benchmark datasets demonstrate the effectiveness of CoT. |
| Researcher Affiliation | Collaboration | He Zhang, Xi'an Jiaotong University (mao736488798@stu.xjtu.edu.cn); Fusong Ju, Microsoft Research Asia (fusongju@microsoft.com) |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code will be released at https://github.com/microsoft/ProteinFolding/tree/main/coevolution_transformer. |
| Open Datasets | Yes | All models are trained on 96,167 protein structures (chains) collected from PDB [37] (before Apr. 1, 2020) |
| Dataset Splits | Yes | All models are trained on 96,167 protein structures (chains) collected from PDB [37] (before Apr. 1, 2020), which are split into the train and validation sets (95,667 and 500 proteins, respectively). |
| Hardware Specification | Yes | The total training cost of the CoT model is about 30 hours on 4 Tesla V100 GPU cards. |
| Software Dependencies | No | The paper mentions 'Adam optimizer [38]' but does not provide specific software names with version numbers for reproducibility (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The CoT model is equipped with 6 CoT layers, with hidden size 128 and 8 attention heads. All models are trained with the Adam optimizer [38] via a cross-entropy loss for 100k iterations. The learning rate, the weight decay, and the batch size are set to 10⁻⁴, 0.01, and 16, respectively. |
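
The experiment-setup row above fully specifies the reported training hyperparameters. Below is a minimal, hypothetical PyTorch sketch of that setup for orientation only: the paper does not name its framework, so PyTorch is an assumption, and `CoTModel` is a placeholder standing in for the authors' released Co-evolution Transformer code rather than a reimplementation of it.

```python
# Hypothetical sketch of the training configuration quoted above:
# 6 CoT layers, hidden size 128, 8 attention heads, Adam with lr 1e-4,
# weight decay 0.01, cross-entropy loss, batch size 16, 100k iterations.
import torch
import torch.nn as nn


class CoTModel(nn.Module):
    """Placeholder for the Co-evolution Transformer (not the real architecture)."""

    def __init__(self, num_layers=6, hidden_size=128, num_heads=8, num_bins=2):
        super().__init__()
        self.num_layers = num_layers
        self.num_heads = num_heads
        # Output head mapping pairwise features to contact-class logits.
        self.output_head = nn.Linear(hidden_size, num_bins)

    def forward(self, pair_features):
        # pair_features: (batch, L, L, hidden_size) pairwise representation.
        return self.output_head(pair_features)


def build_training_setup(model):
    """Optimizer and loss matching the hyperparameters quoted from the paper."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.01)
    criterion = nn.CrossEntropyLoss()
    return optimizer, criterion


if __name__ == "__main__":
    model = CoTModel()
    optimizer, criterion = build_training_setup(model)
    # The paper trains for 100k iterations (batch size 16) on 4 V100 GPUs;
    # a single dummy step on random data is shown here to exercise the setup.
    L, hidden, num_bins = 64, 128, 2
    pair_features = torch.randn(1, L, L, hidden)
    target = torch.randint(0, num_bins, (1, L, L))
    logits = model(pair_features)                        # (1, L, L, num_bins)
    loss = criterion(logits.permute(0, 3, 1, 2), target)  # expects (N, C, ...)
    loss.backward()
    optimizer.step()
```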