Neural Edit Operations for Biological Sequences

Authors: Satoshi Koide, Keisuke Kawano, Takuro Kutsuna

NeurIPS 2018

Reproducibility assessment: each variable, the extracted result, and the LLM response supporting it.
Research Type: Experimental
  The experimental results for the protein secondary structure prediction task suggest the importance of insertion/deletion. The test accuracy on the widely used CB513 dataset is 71.5%, which is 1.2 points better than the current best result among non-ensemble models.
Researcher Affiliation: Industry
  Satoshi Koide, Toyota Central R&D Labs. (koide@mosk.tytlabs.co.jp); Keisuke Kawano, Toyota Central R&D Labs. (kawano@mosk.tytlabs.co.jp); Takuro Kutsuna, Toyota Central R&D Labs. (kutsuna@mosk.tytlabs.co.jp)
Pseudocode: Yes
  Algorithm 1: Differentiable Needleman–Wunsch (forward): s ← NW(x_{1:m}, y_{1:n}; g) ... Algorithm 2: Calculation of Q (backward). ... Algorithm 3: Calculation of P. (A code sketch of the forward recurrence follows this table.)
Open Source Code: No
  The paper makes no explicit statement about releasing source code and gives no link to a code repository for the described methodology.
Open Datasets: Yes
  For the test, we used the widely-used CB513 dataset. For training, we used the filtered CB6133 dataset [13, 26].
Dataset Splits: No
  The paper uses CB513 for testing and the filtered CB6133 dataset for training, but it does not specify a distinct validation split or how one would be constructed for reproducibility.
Hardware Specification: No
  The paper states "The training was conducted on Nvidia Tesla GPUs," but does not specify a particular GPU model or other hardware components (CPU, memory, etc.).
Software Dependencies: Yes
  For implementations, we used PyTorch version 0.2.
Experiment Setup: Yes
  We used the Adam optimizer with a minibatch size of 128, an initial learning rate of 0.0002 (reduced by 1/10 at epoch 15), and weight decay (10⁻⁵). ... Throughout the experiments, we used RMSProp for optimization, with an initial learning rate of 0.00033 and a minibatch size of 8. The models are trained for 150 epochs, and the test accuracy at the last epoch is reported. We do not use weight decay, and the learning rate is reduced by 1/10 at epoch 100. ... In our experiments, we randomly replaced 15% of the residues. (A configuration sketch also follows this table.)
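To make the Pseudocode row concrete, below is a minimal sketch of a differentiable Needleman–Wunsch forward pass. It assumes a soft-max relaxation of the hard max via logsumexp with a temperature `gamma` and a scalar linear gap penalty; the function name `soft_nw` and these modeling choices are illustrative, not the paper's exact Algorithm 1 (which additionally derives an explicit backward pass via the matrices Q and P in Algorithms 2–3).

```python
import torch

def soft_nw(sub, gap=1.0, gamma=1.0):
    """Forward pass of a smoothed Needleman-Wunsch recurrence (a sketch).

    sub   : (m, n) tensor of substitution scores between x_i and y_j
    gap   : linear gap penalty g
    gamma : smoothing temperature; gamma -> 0 recovers the hard max
    """
    m, n = sub.shape
    # Boundary row: aligning a prefix of y against the empty string
    # costs one gap penalty per symbol. Cells are kept as 0-dim tensors
    # in Python lists so autograd tracks every update.
    prev = [sub.new_tensor(-gap * j) for j in range(n + 1)]
    for i in range(1, m + 1):
        cur = [sub.new_tensor(-gap * i)]  # boundary column
        for j in range(1, n + 1):
            cands = torch.stack((
                prev[j - 1] + sub[i - 1, j - 1],  # diagonal: match/mismatch
                prev[j] - gap,                    # vertical: gap
                cur[j - 1] - gap,                 # horizontal: gap
            ))
            # Smoothed maximum, differentiable everywhere w.r.t. `sub`.
            cur.append(gamma * torch.logsumexp(cands / gamma, dim=0))
        prev = cur
    return prev[n]  # smoothed global alignment score s
```

Under this relaxation, calling `backward()` on the returned score yields gradients of the smoothed alignment score with respect to `sub` via autograd alone, whereas the paper spells out the backward recursions explicitly in Algorithms 2 and 3.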
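For the Experiment Setup row, here is how the quoted RMSProp hyperparameters for the protein experiment map onto optimizer configuration. This is a minimal sketch assuming modern torch.optim APIs (the paper used PyTorch 0.2); `model` is a placeholder for the paper's network, and `replace_residues` is one plausible reading of "we randomly replaced 15% of the residues", not the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder standing in for the paper's network architecture.
model = nn.Linear(21, 8)

# Settings quoted above: RMSProp, initial learning rate 0.00033,
# minibatch size 8, 150 epochs, learning rate divided by 10 at
# epoch 100, no weight decay.
optimizer = torch.optim.RMSprop(model.parameters(), lr=3.3e-4, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100], gamma=0.1)

def replace_residues(seq, alphabet_size, p=0.15):
    """Randomly replace a fraction p of residues with uniform draws
    from the alphabet. `seq` is a LongTensor of residue indices; the
    exact replacement scheme is an assumption, not stated in the paper.
    """
    mask = torch.rand(seq.shape) < p
    noise = torch.randint(alphabet_size, seq.shape)
    return torch.where(mask, noise, seq)

for epoch in range(150):
    # ... one pass over minibatches of size 8, applying
    # replace_residues to each input and calling optimizer.step() ...
    scheduler.step()
```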