Conservativeness of Untied Auto-Encoders
Authors: Daniel Jiwoong Im, Mohamed Ishmael Belghazi, Roland Memisevic
AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To this end, we train an untied auto-encoder with 500 hidden units with and without weight length constraints on the MNIST dataset. We measure symmetricity using $\mathrm{sym}(A) = \|(A + A^\top)/2\|_2 \,/\, \|A\|_2$, which yields values in [0, 1], with 1 representing complete symmetricity. Figures 2a and 2c show the evolution of the symmetricity of $\partial r(x)/\partial x$ during training. (See the symmetricity sketch after the table.) |
| Researcher Affiliation | Academia | Daniel Jiwoong Im, Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC, H3C 3J7, imdaniel@iro.umontreal.ca; Mohamed Ishmael Belghazi, HEC Montreal, 3000 Ch. de la Côte-Ste-Catherine, Montreal, QC, H3T 2A7, mohamed.2.belghazi@hec.ca; Roland Memisevic, Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC, H3C 3J7, roland.memisevic@umontreal.ca |
| Pseudocode | Yes | Algorithm 1 Learning to approximate a conservative field with an auto-encoder |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code for the described methodology. |
| Open Datasets | Yes | To this end, we train an untied auto-encoder with 500 hidden units with and without weight length constraints4 on the MNIST dataset. |
| Dataset Splits | No | The paper refers to 'training data' and mentions using the MNIST dataset, but it does not specify explicit training, validation, and test splits (e.g., percentages, sample counts, or predefined standard splits). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers used to replicate the experiments. |
| Experiment Setup | Yes | We train an untied auto-encoder with 500 hidden units... We train an untied auto-encoder with 1000 ReLU units for 500 epochs using BFGS over an equally spaced grid of 100 points in each dimension. (See the untied auto-encoder sketch after the table.) |
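
The symmetricity measure quoted in the Research Type row can be computed directly. Below is a minimal sketch, assuming the spectral (2-)norm suggested by the subscripts in the paper's formula; the function name `symmetricity` is ours, not the authors':

```python
import numpy as np

def symmetricity(A):
    """sym(A) = ||(A + A^T)/2||_2 / ||A||_2, in [0, 1]; 1 means A is symmetric."""
    sym_part = 0.5 * (A + A.T)
    return np.linalg.norm(sym_part, 2) / np.linalg.norm(A, 2)

# Quick check: a symmetric matrix scores 1.0, a generic one scores less.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 5))
print(symmetricity(0.5 * (W + W.T)))  # 1.0
print(symmetricity(W))                # < 1.0
```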
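
The Experiment Setup row describes an untied auto-encoder with 500 hidden units trained on MNIST. The sketch below is not the paper's implementation: it assumes a sigmoid encoder with a linear decoder whose weight matrix is independent of the encoder's (i.e., untied), and shows how the reconstruction Jacobian $\partial r(x)/\partial x$ can be formed so its symmetricity can be tracked during training. All names (`UntiedAutoencoder`, `jacobian`) are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class UntiedAutoencoder:
    """Untied auto-encoder: h = sigmoid(W x + b), r(x) = V h + c, with V independent of W."""

    def __init__(self, n_visible, n_hidden=500, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_visible)
        self.W = rng.uniform(-scale, scale, (n_hidden, n_visible))   # encoder weights
        self.V = rng.uniform(-scale, scale, (n_visible, n_hidden))   # decoder weights (not W.T)
        self.b = np.zeros(n_hidden)
        self.c = np.zeros(n_visible)

    def reconstruct(self, x):
        return self.V @ sigmoid(self.W @ x + self.b) + self.c

    def jacobian(self, x):
        # For a sigmoid encoder and linear decoder: dr/dx = V diag(h * (1 - h)) W.
        h = sigmoid(self.W @ x + self.b)
        return self.V @ (np.diag(h * (1.0 - h)) @ self.W)

# Symmetricity of the reconstruction Jacobian at one (random stand-in) input.
ae = UntiedAutoencoder(n_visible=784, n_hidden=500)
x = np.random.default_rng(1).random(784)
J = ae.jacobian(x)
S = 0.5 * (J + J.T)
print(np.linalg.norm(S, 2) / np.linalg.norm(J, 2))   # < 1 unless the Jacobian is symmetric
```

With tied weights (V = W.T) this Jacobian becomes W.T diag(h * (1 - h)) W, which is symmetric by construction, so the measurement is informative only in the untied case studied by the paper.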