Deep Recurrent Neural Network-Based Identification of Precursor microRNAs

Authors: Seunghyun Park, Seonwoo Min, Hyun-Soo Choi, Sungroh Yoon

NeurIPS 2017

Reproducibility variables, each with the audited result and the LLM response supporting it:
Research Type: Experimental
LLM response: "In our experiments with recent benchmarks, the proposed approach outperformed the compared state-of-the-art alternatives in terms of various performance metrics."
Researcher Affiliation: Academia
LLM response: "Seunghyun Park, Electrical and Computer Engineering, Seoul National University, Seoul 08826, Korea; School of Electrical Engineering, Korea University, Seoul 02841, Korea"
Pseudocode: Yes
LLM response: "The pseudocode of our approach is available as Appendix A in the supplementary material."
Open Source Code: Yes
LLM response: "The source code for the proposed method is available at https://github.com/eleventh83/deepMiRGene."
Open Datasets: Yes
LLM response: "We used three public benchmark datasets [4] named human, cross-species, and new. The positive pre-miRNA sequences in all three datasets were obtained from miRBase [25] (release 18). For the negative training sets, we obtained noncoding RNAs other than pre-miRNAs and exonic regions of protein-coding genes from NCBI (http://www.ncbi.nlm.nih.gov), fRNAdb [23], NONCODE [24], and snoRNA-LBME-db [26]."
Dataset Splits: Yes
LLM response: "Using the remaining 90% of each dataset, we carried out five-fold cross-validation for training and model selection."
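The quoted excerpt implies a 10% held-out test set followed by five-fold cross-validation on the remaining 90%. Below is a minimal sketch of that protocol, assuming scikit-learn; the placeholder arrays and variable names are illustrative and not taken from the paper.

```python
# Minimal sketch of the described split protocol (assumed scikit-learn API;
# X and y below are illustrative placeholders, not the paper's data).
import numpy as np
from sklearn.model_selection import train_test_split, StratifiedKFold

X = np.random.rand(1000, 50)       # placeholder feature matrix
y = np.random.randint(0, 2, 1000)  # placeholder binary labels

# Hold out 10% of each dataset as the untouched test set.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=0)

# Five-fold cross-validation on the remaining 90% for training/model selection.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(skf.split(X_dev, y_dev)):
    X_train, X_val = X_dev[tr], X_dev[va]
    y_train, y_val = y_dev[tr], y_dev[va]
    # fit a candidate model on the training fold, score it on the validation fold
```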
Hardware Specification: No
LLM response: The paper does not provide specific hardware details such as GPU/CPU models, memory, or processor types used for running the experiments.
Software Dependencies: No
LLM response: The paper mentions tools like RNAfold and the Adam optimizer but does not specify version numbers for any software dependencies.
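For context, RNAfold ships with the ViennaRNA package and is typically used to compute a sequence's dot-bracket secondary structure. The snippet below is a hedged sketch of that step via ViennaRNA's Python bindings; the paper does not say how RNAfold was invoked (the authors may have used the command-line tool), and the example sequence is illustrative.

```python
# Sketch: folding an RNA sequence with ViennaRNA's Python bindings (the
# `RNA` module). How the paper invoked RNAfold is an assumption; the
# sequence below is an illustrative fragment, not from the datasets.
import RNA

seq = "CGGCCUGAGGUAGUAGGUUGUAUAGUU"
structure, mfe = RNA.fold(seq)  # dot-bracket string and MFE in kcal/mol
print(structure)                # e.g. "(((((...)))))" style notation
print(f"minimum free energy: {mfe:.2f} kcal/mol")
```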
Experiment Setup: Yes
LLM response: "In the LSTM layers, a dropout parameter for input gates and another for recurrent connection were both set to 0.1. In the FC layers, we set the dropout parameter to 0.1. [...] The number of hidden nodes in the LSTM (d1, d2) and the FC (d3, d4) layers were determined by cross-validation as d1 = 20, d2 = 10, d3 = 400, and d4 = 100. The mini-batch size and training epochs were set to 128 and 300, respectively."
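These hyperparameters map naturally onto a stacked-LSTM classifier. The sketch below assembles them in Keras as one plausible reading: only d1, d2, d3, d4, the 0.1 dropout rates, Adam, the batch size, and the epoch count come from the quote; the input encoding (16-dimensional vectors here), activations, layer ordering, and output head are assumptions, and any components of the full architecture not quoted above are omitted.

```python
# A sketch wiring the quoted hyperparameters together in Keras. The input
# encoding, activations, and sigmoid output head are assumptions; only the
# layer sizes, dropout rates, optimizer, batch size, and epochs are quoted.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.optimizers import Adam

model = Sequential([
    # d1 = 20 and d2 = 10 hidden nodes; 0.1 dropout on input gates and
    # on recurrent connections, as quoted above.
    LSTM(20, return_sequences=True, dropout=0.1, recurrent_dropout=0.1,
         input_shape=(None, 16)),  # variable-length, 16-dim input (assumed)
    LSTM(10, dropout=0.1, recurrent_dropout=0.1),
    # d3 = 400 and d4 = 100 fully connected nodes, each with 0.1 dropout.
    Dense(400, activation="relu"),
    Dropout(0.1),
    Dense(100, activation="relu"),
    Dropout(0.1),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer=Adam(), loss="binary_crossentropy", metrics=["accuracy"])
# Training with the quoted mini-batch size and epoch count:
# model.fit(X_train, y_train, batch_size=128, epochs=300)
```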