Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs
Authors: Li Jing, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, Marin Soljačić
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we test the performance of EUNNs on the standard copying task, the pixel-permuted MNIST digit recognition benchmark as well as the Speech Prediction Test (TIMIT). |
| Researcher Affiliation | Collaboration | 1Massachusetts Institute of Technology 2New York University, Facebook AI Research. |
| Pseudocode | Yes | Algorithm 1 Efficient implementation for F with parameter θi and φi. |
| Open Source Code | Yes | All models are implemented in both Tensorflow and Theano, available from https://github.com/jingli9111/EUNN-tensorflow and https://github.com/iguanaus/EUNN-theano. |
| Open Datasets | Yes | We use the TIMIT dataset (Garofolo et al., 1993) sampled at 8 kHz. |
| Dataset Splits | Yes | We trained all five RNNs for T = 1000 with the same batch size 128 using RMSProp optimization with a learning rate of 0.001. The decay rate is set to 0.5 for EURNN, and 0.9 for all other models respectively. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions that models are implemented in 'Tensorflow' and 'Theano' but does not specify their version numbers. |
| Experiment Setup | Yes | We trained all five RNNs for T = 1000 with the same batch size 128 using RMSProp optimization with a learning rate of 0.001. The decay rate is set to 0.5 for EURNN, and 0.9 for all other models respectively. |
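The pseudocode row above refers to Algorithm 1, the efficient application of a rotation layer F with parameters θi and φi. The sketch below is a hypothetical NumPy reconstruction of that building block, assuming the standard Givens-rotation-with-phase form of the 2×2 unitary; the exact parameterization in the paper's released code may differ. It illustrates the key property: each layer applies disjoint 2×2 rotations to a length-n hidden state in O(n) time while exactly preserving the vector norm (unitarity).

```python
import numpy as np

def rotation_block(theta, phi):
    """One tunable 2x2 unitary rotation F(theta, phi).
    Hypothetical form: a Givens rotation with a phase on the first row."""
    return np.array([
        [np.exp(1j * phi) * np.cos(theta), -np.exp(1j * phi) * np.sin(theta)],
        [np.sin(theta),                     np.cos(theta)],
    ])

def apply_rotation_layer(x, thetas, phis, offset=0):
    """Apply disjoint 2x2 rotations to adjacent pairs of a complex vector x.
    One layer costs O(n); the paper stacks such layers (with alternating
    offsets) to span a tunable family of n x n unitary matrices."""
    x = x.astype(complex).copy()
    n = len(x)
    for k, (theta, phi) in enumerate(zip(thetas, phis)):
        i = offset + 2 * k
        if i + 1 < n:
            x[i:i + 2] = rotation_block(theta, phi) @ x[i:i + 2]
    return x

# Unitarity check: a rotation layer preserves the hidden-state norm,
# which is what prevents vanishing/exploding gradients in the EURNN.
rng = np.random.default_rng(0)
x = rng.normal(size=8) + 1j * rng.normal(size=8)
y = apply_rotation_layer(x, rng.uniform(0, 2 * np.pi, 4),
                         rng.uniform(0, 2 * np.pi, 4))
print(np.allclose(np.linalg.norm(x), np.linalg.norm(y)))  # True
```

The O(n)-per-layer cost is the efficiency claim in the paper's title: a full dense unitary multiply is O(n²), whereas a fixed number of rotation layers is linear in the hidden size.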
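The experiment-setup row quotes RMSProp with a learning rate of 0.001 and a decay rate of 0.5 (EURNN) or 0.9 (other models). As a reference for reproduction, here is a generic sketch of the RMSProp update rule with those settings; this is the textbook rule, not the authors' Tensorflow/Theano training code, and the toy objective `f(w) = w**2` is purely illustrative.

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update (paper's settings: lr=0.001; decay 0.9,
    or 0.5 for EURNN). `cache` is the running mean of squared gradients."""
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy usage: minimize f(w) = w^2, whose gradient is 2w.
w, cache = 5.0, 0.0
for _ in range(1000):
    w, cache = rmsprop_step(w, 2 * w, cache)
print(abs(w) < 5.0)  # True: the parameter moves toward the minimum
```

Note the step size is roughly `lr * sign(grad)` once the cache warms up, which is why RMSProp's behavior is governed mainly by the learning rate and decay values quoted above.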