Input-Output Equivalence of Unitary and Contractive RNNs

Authors: Melikasadat Emami, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Alyson K. Fletcher

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The theoretical results are supported by experiments on modeling of slowly-varying dynamical systems.
Researcher Affiliation | Academia | Melikasadat Emami, Dept. ECE, UCLA, emami@ucla.edu; Mojtaba Sahraee-Ardakan, Dept. ECE, UCLA, msahraee@ucla.edu; Sundeep Rangan, Dept. ECE, NYU, srangan@nyu.edu; Alyson K. Fletcher, Dept. Statistics, UCLA, akfletcher@ucla.edu
Pseudocode | No | The paper does not include a dedicated section or figure labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper describes generating its own synthetic dataset ('we generate data from multiple instances of a synthetic RNN') rather than using an existing publicly available dataset; no link or citation to a public dataset is provided.
Dataset Splits | No | The paper mentions generating '700 training samples and 300 test sequences' but does not specify a validation split.
Hardware Specification | No | The paper does not provide specific details about the hardware used, such as CPU/GPU models, memory, or cloud instance types.
Software Dependencies | No | The paper states 'All models are implemented in the Keras package in Tensorflow,' but does not specify version numbers for Keras or TensorFlow.
Experiment Setup | Yes | The hidden states in the model are varied in the range n = [2, 4, 6, 8, 10, 12, 14]. Mean-squared error is used as the loss function, and optimization is performed with the Adam [15] optimizer with batch size 10 and learning rate 0.01.
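The data-generation protocol quoted in the table (sequences drawn from a synthetic contractive RNN, split into 700 training and 300 test sequences) can be sketched as follows. This is a minimal NumPy illustration, not the paper's code: the hidden size, input dimension, sequence length, spectral-norm scale `rho`, and the tanh nonlinearity are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_contractive(n, rho=0.9):
    """Random recurrent matrix rescaled so its largest singular value
    is rho < 1, making the linear part of the recurrence contractive."""
    W = rng.standard_normal((n, n))
    top_sv = np.linalg.svd(W, compute_uv=False)[0]
    return rho * W / top_sv

def run_rnn(W, U, x_seq):
    """Simple tanh RNN: h_t = tanh(W h_{t-1} + U x_t); returns all hidden states."""
    T, n = x_seq.shape[0], W.shape[0]
    h, H = np.zeros(n), np.empty((T, n))
    for t in range(T):
        h = np.tanh(W @ h + U @ x_seq[t])
        H[t] = h
    return H

# Illustrative dimensions (assumed, not from the paper).
n, d, T = 4, 1, 20            # hidden size, input dim, sequence length
W = make_contractive(n)
U = rng.standard_normal((n, d))
c = rng.standard_normal(n)    # linear readout of the hidden state

# 1000 input/output sequence pairs from the teacher RNN,
# split 700 train / 300 test as described in the paper.
X = rng.standard_normal((1000, T, d))
Y = np.stack([run_rnn(W, U, x) @ c for x in X])
X_train, X_test = X[:700], X[700:]
Y_train, Y_test = Y[:700], Y[700:]
```

A student model (e.g. the Keras RNNs the paper mentions, with hidden sizes swept over n = 2, …, 14) would then be fit to `(X_train, Y_train)` with mean-squared error, Adam, batch size 10, and learning rate 0.01, and evaluated on the held-out 300 test sequences.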