Learning to Design RNA

Authors: Frederic Runge, Danny Stoll, Stefan Falkner, Frank Hutter

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitudes faster in reaching the previous state-of-the-art performance. In an ablation study, we analyze the importance of our method's different components.
Researcher Affiliation | Collaboration | Frederic Runge¹, Danny Stoll¹, Stefan Falkner¹,² & Frank Hutter¹; ¹Department of Computer Science, University of Freiburg; ²Bosch Center for Artificial Intelligence, Robert Bosch GmbH; {runget,stolld,sfalkner,fh}@cs.uni-freiburg.de
Pseudocode | Yes | Pseudocode for computing R_T^ω(φ) can be found in Appendix A. (A hedged reward sketch follows below the table.)
Open Source Code | Yes | Code and data for reproducing our results is available at https://github.com/automl/learna.
Open Datasets | Yes | Since validation in RNA Design literature is often done using undisclosed data sources... we introduce a new benchmark dataset with an explicit training, validation and test split.
Dataset Splits | Yes | we introduce a new benchmark dataset with an explicit training, validation and test split.
Hardware Specification | Yes | All computations were done on Broadwell E5-2630v4 2.2 GHz CPUs with a limitation of 5 GByte RAM per each of the 10 cores.
Software Dependencies | Yes | We used the implementation of the Zuker algorithm provided by ViennaRNA (Lorenz et al., 2011b) versions 2.4.8 (MCTS-RNA, RL-LS and LEARNA), 2.1.9 (antaRNA) and 2.4.9 (RNAInverse). Our implementation uses the reinforcement learning library tensorforce, version 0.3.3 (Schaarschmidt et al., 2017) working with TensorFlow version 1.4.0 (Abadi et al., 2015). (A hedged version check follows below the table.)
Experiment Setup | Yes | Our search space has three components described in the following: choices about the policy network's architecture, environment parameters (including the representation of the state and the reward), and training hyperparameters. The complete list of parameters, their types, ranges, and the priors we used over them can be found in Appendix E. (An illustrative search-space sketch follows below the table.)
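
The Pseudocode row above refers to the paper's terminal reward R_T^ω(φ), which is computed by folding the designed sequence φ and comparing the result to the target structure ω; the authoritative pseudocode is in Appendix A of the paper. The snippet below is only a minimal sketch of such a Hamming-distance-based reward, assuming the ViennaRNA Python bindings; the helper names and the exponent `alpha` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Hamming-distance-based terminal reward, assuming the
# ViennaRNA Python bindings (the paper folds candidates with the Zuker algorithm
# as implemented in ViennaRNA). Helper names and `alpha` are illustrative.
import RNA


def hamming_distance(a: str, b: str) -> int:
    """Count the positions at which two dot-bracket strings differ."""
    return sum(x != y for x, y in zip(a, b))


def terminal_reward(candidate_sequence: str, target_structure: str, alpha: float = 1.0) -> float:
    """Fold the candidate and map its structural distance to the target into [0, 1]."""
    folded_structure, _mfe = RNA.fold(candidate_sequence)  # minimum-free-energy structure
    distance = hamming_distance(folded_structure, target_structure)
    return (1.0 - distance / len(target_structure)) ** alpha


# Example: a short hairpin whose MFE structure matches its target exactly.
print(terminal_reward("GGGAAACCC", "(((...)))"))  # expected: 1.0
```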
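
Because the stack listed under Software Dependencies is old and version-sensitive (TensorFlow 1.4.0 and tensorforce 0.3.3 predate later API changes), a quick version check before rerunning experiments can save debugging time. The check below is a hypothetical helper, not part of the authors' tooling; it relies on the conventional `__version__` attribute and does not cover the ViennaRNA versions.

```python
# Hypothetical sanity check for the pinned Python dependencies reported above.
# The ViennaRNA versions (2.1.9 / 2.4.8 / 2.4.9) are not checked here.
import tensorflow as tf
import tensorforce

EXPECTED = {"tensorflow": "1.4.0", "tensorforce": "0.3.3"}

for name, module in (("tensorflow", tf), ("tensorforce", tensorforce)):
    installed = getattr(module, "__version__", "unknown")
    status = "OK" if installed == EXPECTED[name] else "MISMATCH"
    print(f"{name}: installed {installed}, expected {EXPECTED[name]} -> {status}")
```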
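
The Experiment Setup row describes a joint search space over the policy network's architecture, the environment (state representation and reward), and training hyperparameters, with the full listing in Appendix E of the paper. Purely as an illustration of how such a joint space can be declared, the sketch below uses the ConfigSpace library; every parameter name, type, and range here is a placeholder assumption, not the authors' actual space.

```python
# Illustrative joint search space over architecture, environment, and training
# hyperparameters, declared with the ConfigSpace library. All names and ranges
# are placeholder assumptions; the paper's actual space is given in its Appendix E.
import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

cs = CS.ConfigurationSpace()

# Policy-network architecture choices
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter("num_lstm_layers", lower=0, upper=2))
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter("conv_channels", lower=1, upper=32, log=True))

# Environment parameters (state representation and reward shaping)
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter("state_radius", lower=0, upper=32))
cs.add_hyperparameter(CSH.UniformFloatHyperparameter("reward_exponent", lower=1.0, upper=10.0))

# Training hyperparameters
cs.add_hyperparameter(CSH.UniformFloatHyperparameter("learning_rate", lower=1e-5, upper=1e-3, log=True))
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter("batch_size", lower=32, upper=128, log=True))

print(cs.sample_configuration())  # draw one random configuration from the space
```

A space declared this way can then be handed to a multi-fidelity optimizer such as BOHB to search over architecture and hyperparameters jointly.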