Spherical convolutions and their application in molecular modelling
Authors: Wouter Boomsma, Jes Frellsen
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a proof of concept, we conclude with an assessment of the performance of spherical convolutions in the context of molecular modelling, by considering structural environments within proteins. We show that the models are capable of learning non-trivial functions in these molecular environments, and that our spherical convolutions generally outperform standard 3D convolutions in this setting. |
| Researcher Affiliation | Academia | Wouter Boomsma, Department of Computer Science, University of Copenhagen, wb@di.ku.dk; Jes Frellsen, Department of Computer Science, IT University of Copenhagen, jefr@itu.dk |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The spherical convolution Tensorflow code and the datasets used in this paper are available at https://github.com/deepfold. |
| Open Datasets | Yes | The spherical convolution Tensorflow code and the datasets used in this paper are available at https://github.com/deepfold. ... A large initial (nonhomology-reduced) data set was constructed using the PISCES server (Wang and Dunbrack, 2003). |
| Dataset Splits | Yes | This left us with 2336 proteins, out of which 1742 were used for training, 10 for validation, and the remainder was set aside for testing. |
| Hardware Specification | Yes | The models were trained on NVIDIA Titan X (Pascal) GPUs, using a batch size of 100 and a learning rate of 0.0001. |
| Software Dependencies | No | The paper mentions software like TensorFlow, the OpenMM framework, the amber99sb force field, and the Reduce program, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We minimized the cross-entropy loss using Adam (Kingma and Ba, 2015), regularized by penalizing the loss with the sum of the L2 norm of all weights, using a multiplicative factor of 0.001. All dense layers also used dropout regularization with a probability of 0.5 of keeping a neuron. The models were trained on NVIDIA Titan X (Pascal) GPUs, using a batch size of 100 and a learning rate of 0.0001. |
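
The Research Type row refers to the paper's spherical convolutions over structural environments in proteins. The sketch below is not the authors' released operator (their TensorFlow code is linked above at https://github.com/deepfold); it only illustrates one plausible reading of the idea: features sampled on a concentric (r, θ, φ) grid, convolved with circular padding along the periodic azimuthal axis so filters respect the sphere's wrap-around. The grid shape, kernel sizes, and the function name `spherical_grid_conv` are placeholders, not taken from the paper.

```python
import tensorflow as tf


def spherical_grid_conv(grid, weights, bias):
    """3D convolution over a concentric spherical grid with a periodic phi axis.

    grid:    (batch, n_r, n_theta, n_phi, c_in) features on an (r, theta, phi) grid
    weights: (k_r, k_theta, k_phi, c_in, c_out) kernel, odd kernel sizes assumed
    bias:    (c_out,)
    """
    k_r, k_theta, k_phi = weights.shape[0], weights.shape[1], weights.shape[2]
    pr, pt, pp = k_r // 2, k_theta // 2, k_phi // 2

    # Zero-pad the radial and polar axes so the output keeps the input grid size.
    grid = tf.pad(grid, [[0, 0], [pr, pr], [pt, pt], [0, 0], [0, 0]])
    # Wrap-pad the azimuthal axis so filters see the sphere's periodicity.
    if pp > 0:
        grid = tf.concat([grid[:, :, :, -pp:, :], grid, grid[:, :, :, :pp, :]],
                         axis=3)

    out = tf.nn.conv3d(grid, weights, strides=[1, 1, 1, 1, 1], padding="VALID")
    return out + bias


# Arbitrary example sizes (not the paper's grid resolution):
x = tf.random.normal([2, 12, 18, 36, 4])
w = tf.random.normal([3, 3, 3, 4, 16])
b = tf.zeros([16])
y = spherical_grid_conv(x, w, b)  # -> shape (2, 12, 18, 36, 16)
```

The only design point the sketch commits to is the padding asymmetry: zero padding in the radial and polar directions, wrap-around padding in the azimuthal direction. The paper itself compares several grid parameterizations (and standard cuboid convolutions), so the released code should be consulted for the actual operator.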
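The Experiment Setup and Hardware rows translate directly into a training configuration. The tf.keras sketch below uses only the hyperparameters quoted in the table (Adam, learning rate 0.0001, L2 factor 0.001, dropout keep probability 0.5, batch size 100, cross-entropy loss); the layer stack, input shape, and 20-class output are placeholders and do not reproduce the paper's architectures.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

l2 = regularizers.l2(0.001)  # L2 weight penalty, multiplicative factor 0.001

# Placeholder layer stack: input grid shape, channel counts, and the 20-class
# output are illustrative only; the paper's spherical/cuboid models differ.
model = tf.keras.Sequential([
    layers.Conv3D(16, 3, activation="relu", kernel_regularizer=l2,
                  input_shape=(12, 18, 36, 4)),
    layers.GlobalAveragePooling3D(),
    layers.Dense(256, activation="relu", kernel_regularizer=l2),
    layers.Dropout(0.5),  # keep probability 0.5
    layers.Dense(20, activation="softmax", kernel_regularizer=l2),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # Adam, lr 0.0001
    loss="sparse_categorical_crossentropy",                  # cross-entropy loss
    metrics=["accuracy"],
)

# Training would use the batch size reported in the table, e.g.:
# model.fit(train_x, train_y, batch_size=100, validation_data=(val_x, val_y))
```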