Efficient Representation of Low-Dimensional Manifolds using Deep Networks

Authors: Ronen Basri, David W. Jacobs

ICLR 2017

Reproducibility Variable Result LLM Response
Research Type | Experimental | Experiments demonstrate that training with stochastic gradient descent can indeed find efficient representations similar to the one presented in this paper.
Researcher Affiliation | Academia | Ronen Basri, Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot 76100, Israel, ronen.basri@weizmann.ac.il; David W. Jacobs, Dept. of Computer Science, University of Maryland, College Park, MD, djacobs@cs.umd.edu
Pseudocode | No | The paper describes the construction and analysis in prose and mathematical notation but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, such as a specific repository link, an explicit code-release statement, or code in supplementary materials.
Open Datasets | No | The paper mentions generating synthetic data and using face images, but it does not provide concrete access information (a specific link, DOI, repository name, formal citation, or reference to an established benchmark dataset) for a publicly available or open dataset.
Dataset Splits | No | The paper mentions using "validation points" and "validation error," but it does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper mentions training models but does not give the specific hardware (exact GPU/CPU models, processor types, memory amounts, or other machine specifications) used to run its experiments.
Software Dependencies | No | The paper does not name the ancillary software, such as libraries or solvers with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper mentions training with stochastic gradient descent but does not give specific experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings in the main text.