Learning Hierarchical Protein Representations via Complete 3D Graph Networks
Authors: Limei Wang, Haoran Liu, Yi Liu, Jerry Kurtin, Shuiwang Ji
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that ProNet outperforms recent methods on most datasets. |
| Researcher Affiliation | Academia | Texas A&M University, College Station, TX; Florida State University, Tallahassee, FL |
| Pseudocode | No | The paper describes its model architecture using text and diagrams, but it does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available as part of the DIG library (https://github.com/divelab/DIG). |
| Open Datasets | Yes | Fold Dataset. We use the same dataset as in Hou et al. (2018) and Hermosilla et al. (2021). ... Reaction Dataset. ...collected from PDB (Berman et al., 2000), and EC annotations for each protein are downloaded from the SIFTS database (Dana et al., 2019). ... LBA Dataset. ...curated from PDBbind (Wang et al., 2004; Liu et al., 2015). ... PPI Dataset. ...we use the Database of Interacting Protein Structures (DIPS) (Townshend et al., 2019) for training and make prediction on the Docking Benchmark 5 (DB5) (Vreven et al., 2015). |
| Dataset Splits | Yes | In this task, 12,312 proteins are used for training, 736 for validation, and 718, 1,254, and 1,272 for testing at the Fold, Superfamily, and Family levels, respectively. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA GeForce RTX 2080 Ti 11GB GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019)' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | The search spaces for model and training hyperparameters are listed in Table 8. ... Table 8 lists specific values or search spaces for: Number of layers, Hidden dim, Cutoff, Dropout, Epochs, Batch size, Learning rate, Learning rate decay factor, Learning rate decay epochs (see the hedged sketch after the table). |
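
To make the Table 8 entries concrete, here is a minimal sketch of what such a training configuration and the cutoff-based graph construction used by radius-graph protein models might look like. It assumes PyTorch Geometric's `radius_graph` (which requires the `torch-cluster` extension); all hyperparameter values below are illustrative placeholders, not the actual Table 8 search space, and `build_residue_graph` is a hypothetical helper, not part of the released DIG code.

```python
import torch
from torch_geometric.nn import radius_graph

# Illustrative placeholders only -- NOT the actual Table 8 values.
config = {
    "num_layers": 4,         # "Number of layers"
    "hidden_dim": 128,       # "Hidden dim"
    "cutoff": 10.0,          # "Cutoff" radius (Angstroms) for edge construction
    "dropout": 0.3,
    "epochs": 400,
    "batch_size": 32,
    "lr": 5e-4,
    "lr_decay_factor": 0.5,
    "lr_decay_epochs": 100,
}

def build_residue_graph(ca_coords: torch.Tensor, cutoff: float) -> torch.Tensor:
    """Hypothetical helper: connect residues whose C-alpha atoms lie
    within `cutoff` Angstroms of each other, as in amino-acid-level
    radius-graph protein representations."""
    # radius_graph returns a [2, num_edges] edge_index tensor.
    return radius_graph(ca_coords, r=cutoff, loop=False)

# Toy protein: 50 residues at random 3D positions.
ca_coords = torch.randn(50, 3) * 10
edge_index = build_residue_graph(ca_coords, config["cutoff"])
print(edge_index.shape)  # torch.Size([2, num_edges])
```

The cutoff trades off graph density against locality: a larger radius yields more edges and longer-range message passing per layer, at higher memory cost, which is presumably why it appears in the hyperparameter search space alongside depth and width.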