Tensor Switching Networks
Authors: Chuan-Yung Tsai, Andrew M. Saxe, David Cox
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that the TS network is indeed more expressive and consistently learns faster than standard ReLU networks. |
| Researcher Affiliation | Academia | Chuan-Yung Tsai, Andrew Saxe, David Cox; Center for Brain Science, Harvard University, Cambridge, MA 02138; {chuanyungtsai,asaxe,davidcox}@fas.harvard.edu |
| Pseudocode | No | The paper describes algorithms textually but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and scripts for reproducing our experiments are available at https://github.com/coxlab/tsnet. |
| Open Datasets | Yes | We adopt 3 datasets, viz. MNIST, CIFAR10 and SVHN2 |
| Dataset Splits | Yes | We adopt 3 datasets, viz. MNIST, CIFAR10 and SVHN2, where we reserve the last 5,000 training images for validation. (See the split sketch below the table.) |
| Hardware Specification | No | The paper mentions 'multicore CPU acceleration' and 'GPU acceleration' but does not specify any particular hardware models (e.g., CPU or GPU types). |
| Software Dependencies | No | The paper mentions software like Matlab, libsvm-compact, Python, Numpy, and Keras, but no specific version numbers for any of these dependencies are provided. |
| Experiment Setup | Yes | For all MLPs and CNNs, we universally use SGD with learning rate 10⁻³, momentum 0.9, L2 weight decay 10⁻³ and batch size 128 to reduce the grid search complexity by focusing on architectural hyperparameters. (See the optimizer sketch below the table.) |
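
The dataset split quoted above (the last 5,000 training images held out for validation) can be reproduced with a few lines of Python. This is a minimal sketch, not the authors' code; it is shown for MNIST via `keras.datasets`, and CIFAR10 and SVHN2 would follow the same pattern.

```python
# Minimal sketch of the paper's validation split: reserve the last
# 5,000 training images for validation (shown here for MNIST only).
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Hold out the final 5,000 training images as the validation set.
x_val, y_val = x_train[-5000:], y_train[-5000:]
x_train, y_train = x_train[:-5000], y_train[:-5000]
```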
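
The quoted experiment setup also translates directly into an optimizer configuration. The paper mentions Keras but does not publish this snippet, so the following is a hypothetical sketch assuming the Keras 2 API; the two-layer MLP architecture (a 256-unit hidden layer on flattened 784-dimensional MNIST inputs) is an illustrative assumption, while the SGD settings (learning rate 10⁻³, momentum 0.9, L2 weight decay 10⁻³, batch size 128) come from the quote.

```python
# Hypothetical Keras 2 sketch of the reported training setup; the layer
# sizes are illustrative assumptions, the optimizer settings are from the paper.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.regularizers import l2

model = Sequential([
    Dense(256, activation='relu', input_shape=(784,),
          kernel_regularizer=l2(1e-3)),   # L2 weight decay 10^-3
    Dense(10, activation='softmax',
          kernel_regularizer=l2(1e-3)),
])
model.compile(optimizer=SGD(lr=1e-3, momentum=0.9),  # learning rate 10^-3
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Training would then use the reported batch size and the split above, e.g.:
# model.fit(x_train, y_train, batch_size=128, validation_data=(x_val, y_val))
```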