Neural Universal Discrete Denoiser

Authors: Taesup Moon, Seonwoo Min, Byunghan Lee, Sungroh Yoon

Venue: NeurIPS 2016

Reproducibility Assessment (variable, result, and LLM response)
Research Type: Experimental
Response: "We experimentally show that our resulting algorithm, dubbed as Neural DUDE, significantly outperforms the previous state-of-the-art in several applications with a systematic rule of choosing the hyperparameter, which is an attractive feature in practice."
Researcher Affiliation: Academia
Response: Taesup Moon, DGIST, Daegu, Korea 42988 (tsmoon@dgist.ac.kr); Seonwoo Min, Byunghan Lee, and Sungroh Yoon, Seoul National University, Seoul, Korea 08826 ({mswzeus, styxkr, sryoon}@snu.ac.kr).
Pseudocode: Yes
Response: "Algorithm 1 summarizes the Neural DUDE algorithm."
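To make this concrete, below is a minimal NumPy sketch of the pseudo-label construction that Algorithm 1 is built around, assuming the paper's setting of a square, invertible channel matrix Pi, a per-symbol loss matrix Lambda, and a finite set of single-symbol mappings. Every name in the sketch is hypothetical; this is one reading of the paper's description, not the authors' code.

```python
import numpy as np

def pseudo_labels(Pi, Lambda, mappings):
    """Return L_new (|Z| x |S|): shifted unbiased loss estimates per noisy symbol.

    Pi:       channel transition matrix, Pi[x, z] = P(Z = z | X = x)
              (assumed square and invertible, as in the paper)
    Lambda:   per-symbol loss matrix, Lambda[x, xhat]
    mappings: single-symbol denoisers; mappings[j][z] = reconstruction for z
    """
    n_x, n_z = Pi.shape
    rho = np.zeros((n_x, len(mappings)))
    for x in range(n_x):
        for j, s in enumerate(mappings):
            # expected loss of rule s when the clean symbol is x
            rho[x, j] = sum(Pi[x, z] * Lambda[x, s[z]] for z in range(n_z))
    L = np.linalg.inv(Pi) @ rho   # unbiased loss estimate, indexed by noisy symbol z
    return L.max() - L            # shift so larger means better (the paper's L_new)

# Training then pairs each noisy symbol z_i with target row L_new[z_i] and its
# one-hot two-sided context c_i; after fitting p(w, .), position i is denoised
# by applying the argmax mapping for context c_i to z_i.
```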
Open Source Code: No
Response: The paper mentions using the "Keras package (http://keras.io) with Theano [17] backend" but does not provide access to the authors' own implementation of the described method.
Open Datasets: Yes
Response: "For our experiment, we used simulated MinION nanopore reads... we obtained 16S rDNA reference sequences for 20 species [18]"
Dataset Splits: No
Response: The paper reports the mini-batch size and number of epochs used for learning, but it gives no training/validation/test split (as percentages or sample counts) and describes no cross-validation setup.
Hardware Specification: No
Response: The paper does not report the hardware used for its experiments (e.g., CPU/GPU models or memory); only the software environment is described.
Software Dependencies: No
Response: "All of our experiments were done with Python 2.7 and Keras package (http://keras.io) with Theano [17] backend. ... We used Adam [16] with default setting in Keras as an optimizer to minimize (7)." The Python version is given, but the versions of Keras and Theano, the key dependencies, are not.
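Because those versions are unpinned, anyone reproducing the environment would have to record them independently; a one-liner such as the following (both version attributes exist in released Keras and Theano packages) would capture the missing information:

```python
# Record the unpinned dependency versions alongside the Python version.
import sys
import keras
import theano

print(sys.version, keras.__version__, theano.__version__)
```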
Experiment Setup: Yes
Response: "For Neural DUDE, we used the feed-forward fully connected neural networks for p(w, ·) and varied the depth of the network between 1∼4 while also varying k. Neural DUDE(1L) corresponds to the simple linear softmax regression model. For deeper models, we used 40 hidden nodes in each layer with Rectified Linear Unit (ReLU) activations. We used Adam [16] with default setting in Keras as an optimizer to minimize (7). We used the mini-batch size of 100 and ran 10 epochs for learning."
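For illustration, here is a minimal sketch of one plausible reading of the deepest configuration: four hidden layers of 40 ReLU units feeding a softmax over the single-symbol mappings. It uses the current Keras API rather than the 2016-era Keras/Theano the authors had, and the context encoding and alphabet/mapping sizes are assumptions, not values taken from the paper.

```python
from keras.models import Sequential
from keras.layers import Dense

k = 16          # context half-width (hypothetical; the paper varies k)
alphabet = 2    # binary denoising example (assumption)
n_mappings = 3  # single-symbol mappings for the binary case (assumption)

model = Sequential([
    Dense(40, activation='relu', input_dim=2 * k * alphabet),  # one-hot 2k-symbol context
    Dense(40, activation='relu'),
    Dense(40, activation='relu'),
    Dense(40, activation='relu'),
    Dense(n_mappings, activation='softmax'),  # p(w, c): distribution over mappings
])

# Adam with Keras defaults, as stated in the paper; with pseudo-label rows as
# targets, categorical cross-entropy gives the weighted log-loss of objective (7).
model.compile(optimizer='adam', loss='categorical_crossentropy')
# model.fit(contexts, targets, batch_size=100, epochs=10)
```

Fitting with batch_size=100 and epochs=10 matches the quoted setup; everything else in the sketch should be treated as a placeholder.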