Learning Deep ℓ0 Encoders
Authors: Zhangyang Wang, Qing Ling, Thomas Huang
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical results demonstrate the impressive performances of the proposed encoders. |
| Researcher Affiliation | Academia | Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Department of Automation, University of Science and Technology of China, Hefei, 230027, China |
| Pseudocode | No | The paper includes mathematical equations and block diagrams but no explicitly labeled 'Pseudocode' or 'Algorithm' sections, nor any structured code-like procedures. |
| Open Source Code | No | The paper does not provide any specific links, statements, or mentions of source code availability for the described methodology. |
| Open Datasets | Yes | The first 60,000 samples of the MNIST dataset are used for training and the last 10,000 for testing. ... We evaluate our methods on the MNIST dataset, and the AVIRIS Indiana Pines hyperspectral image dataset (see (Wang, Nasrabadi, and Huang 2015) for details). ... For clustering, we evaluate our methods on the COIL 20 and the CMU PIE dataset (Sim, Baker, and Bsat 2002). |
| Dataset Splits | No | The paper specifies 'The first 60,000 samples of the MNIST dataset are used for training and the last 10,000 for testing,' but does not explicitly mention a validation split, its size, or how it was used. |
| Hardware Specification | Yes | In practice, given that the model is well initialized, the training takes approximately 1 hour on the MNIST dataset, on a workstation with 12 Intel Xeon 2.67GHz CPUs and 1 GTX680 GPU. |
| Software Dependencies | No | The paper states 'implemented with the CUDA ConvNet package (Krizhevsky, Sutskever, and Hinton 2012)' but does not provide a specific version number for this package or any other software dependencies. |
| Experiment Setup | Yes | We use a constant learning rate of 0.01 with no momentum, and a batch size of 128. (See the sketch below the table.) |
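
The Experiment Setup and Dataset Splits rows give enough detail to reconstruct the reported training configuration, though not the model itself. Below is a minimal PyTorch sketch of that configuration, assuming SGD with a constant learning rate of 0.01, no momentum, a batch size of 128, and MNIST's standard 60,000/10,000 train/test split. The placeholder encoder, the reconstruction loss, and the single-epoch loop are illustrative assumptions only; the paper's Deep ℓ0 Encoder (with its hard-thresholding units) and its CUDA ConvNet implementation are not reproduced here.

```python
# Minimal sketch of the reported training configuration (not the authors' code).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST: 60,000 training samples and 10,000 test samples, as stated in the paper.
train_set = datasets.MNIST(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
test_set = datasets.MNIST(root="./data", train=False, download=True,
                          transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)  # batch size 128

# Hypothetical stand-in for the Deep l0 Encoder; the real architecture differs.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 784))

# Constant learning rate 0.01 with no momentum, per the Experiment Setup row.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.0)
loss_fn = nn.MSELoss()  # reconstruction-style objective (assumption)

for x, _ in train_loader:                # one pass over the training set
    x = x.view(x.size(0), -1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()
```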