A2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes
Authors: Kui Xu, Zhe Wang, Jianping Shi, Hongsheng Li, Qiangfeng Cliff Zhang
AAAI 2019, pp. 1230-1237
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This framework achieves 91% coverage on our newly proposed dataset and takes only a few minutes for a typical structure with a thousand amino acids. Our method is hundreds of times faster and several times more accurate than existing automated solutions without any human intervention. |
| Researcher Affiliation | Collaboration | Kui Xu (1), Zhe Wang (2), Jianping Shi (2), Hongsheng Li (3), Qiangfeng Cliff Zhang (1); affiliations: 1 Tsinghua University, 2 SenseTime Research, 3 The Chinese University of Hong Kong |
| Pseudocode | No | No explicit pseudocode or algorithm blocks found. |
| Open Source Code | No | We will release a large scale, richly annotated dataset of protein density volumes, to facilitate research in this area. (This promises a dataset, not the source code for the method.) |
| Open Datasets | No | To the best of our knowledge, the A2 dataset is the first large-scale benchmark for learning automatic molecular structure determination. The dataset we collected and used in this study, named the A2 dataset, includes 250K amino acid objects in 1,713 protein chains from 218 structures. (This describes the dataset and promises a future release, but provides no concrete current access information such as a URL, DOI, or citation to an already published dataset.) |
| Dataset Splits | Yes | Following random selection, we obtained a split of 1250 training and 463 validation chains. (A chain-split sketch follows the table.) |
| Hardware Specification | No | Limited by the GPU memory, we set the batch size to be 1. (This mentions GPU usage in general but gives no specific hardware model or specification for the authors' experiments.) |
| Software Dependencies | No | We used the Adam (Kingma and Ba 2015) optimizer to train the model, starting with a learning rate of 0.0001, a momentum of 0.9, and a weight decay of 0.0001. (The Adam optimizer is cited, but no version numbers are given for software dependencies such as programming languages or libraries.) |
| Experiment Setup | Yes | We first trained the locNet and poseNet individually for 100 epochs, and then fixed them while training the recNet for 400 epochs. Finally, we jointly optimized the whole A2-Net with the sequence-guided neighbor loss for another 400 epochs. We used the Adam (Kingma and Ba 2015) optimizer to train the model, starting with a learning rate of 0.0001, a momentum of 0.9, and a weight decay of 0.0001. We fine-tuned our models with Batch Norm, which we found may reduce over-fitting. For each density volume, we randomly cropped a 64×64×64 cube and fed it into the network. Limited by GPU memory, we set the batch size to 1. (A training-schedule sketch follows the table.) |
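The chain-level split quoted in the Dataset Splits row can be reproduced in spirit with a few lines. This is a minimal sketch only: the chain identifiers and the random seed below are hypothetical, since the paper states just that 1,713 chains were randomly divided into 1250 training and 463 validation chains.

```python
# Hedged sketch of a random chain-level split (1250 train / 463 val).
# Chain IDs and seed are placeholders, not from the paper.
import random

chain_ids = [f"chain_{i:04d}" for i in range(1713)]  # hypothetical identifiers

rng = random.Random(0)  # fixed seed for repeatability; the paper reports none
rng.shuffle(chain_ids)

train_chains = chain_ids[:1250]
val_chains = chain_ids[1250:]  # the remaining 463 chains

assert len(train_chains) == 1250 and len(val_chains) == 463
```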
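For readers who want the reported schedule in concrete form, below is a minimal PyTorch sketch, not the authors' implementation (no code was released). The three sub-networks, their sequential chaining, and the loss are stand-in stubs; only the three-stage schedule, the Adam settings (learning rate 0.0001, weight decay 0.0001), the random 64×64×64 crop, and the batch size of 1 come from the quoted setup. Reading the quoted "momentum of 0.9" as Adam's beta1 is an assumption.

```python
# Minimal sketch of the staged training schedule reported for A2-Net.
# Architectures, loss, and data pipeline are placeholders.
import torch
import torch.nn as nn


class Stub3DNet(nn.Module):
    """Stand-in for locNet / poseNet / recNet; the real architectures differ."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 1, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(1)  # the paper fine-tunes with Batch Norm

    def forward(self, x):
        return self.bn(self.conv(x))


def random_crop_64(volume):
    """Randomly crop a 64x64x64 cube from a (C, D, H, W) density volume."""
    _, d, h, w = volume.shape
    zi = torch.randint(0, d - 63, (1,)).item()
    yi = torch.randint(0, h - 63, (1,)).item()
    xi = torch.randint(0, w - 63, (1,)).item()
    return volume[:, zi:zi + 64, yi:yi + 64, xi:xi + 64]


def make_optimizer(params):
    # Reported: Adam, lr 1e-4, weight decay 1e-4. Mapping the quoted
    # "momentum of 0.9" to Adam's beta1 is an assumption.
    return torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999),
                            weight_decay=1e-4)


def train(pipeline, trainable, epochs):
    params = [p for m in trainable for p in m.parameters()]
    opt = make_optimizer(params)
    for _ in range(epochs):
        cube = random_crop_64(volume).unsqueeze(0)  # batch size 1, per paper
        out = cube
        for m in pipeline:
            out = m(out)
        loss = out.pow(2).mean()  # placeholder loss, not the paper's
        opt.zero_grad()
        loss.backward()
        opt.step()


loc_net, pose_net, rec_net = Stub3DNet(), Stub3DNet(), Stub3DNet()
volume = torch.randn(1, 128, 128, 128)  # toy density volume

# Stage 1: train locNet and poseNet individually for 100 epochs each.
train([loc_net], [loc_net], epochs=100)
train([pose_net], [pose_net], epochs=100)

# Stage 2: fix locNet/poseNet, then train recNet for 400 epochs.
for m in (loc_net, pose_net):
    m.requires_grad_(False)
train([loc_net, pose_net, rec_net], [rec_net], epochs=400)

# Stage 3: jointly optimize the whole network for another 400 epochs
# (the paper adds a sequence-guided neighbor loss at this stage).
for m in (loc_net, pose_net):
    m.requires_grad_(True)
train([loc_net, pose_net, rec_net],
      [loc_net, pose_net, rec_net], epochs=400)
```

The staged freeze-then-finetune pattern above follows the quoted description directly; everything else (stub layers, chaining order, the squared-activation loss) is illustrative scaffolding so the sketch runs end to end.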