The limits of squared Euclidean distance regularization

Authors: Michał Dereziński, Manfred K. Warmuth

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show experimentally, that this distance is √n on average."
Researcher Affiliation | Academia | Michał Dereziński, Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A., mderezin@soe.ucsc.edu; Manfred K. Warmuth, Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A., manfred@cse.ucsc.edu
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | No | The paper makes no statement about releasing open-source code and provides no link to a code repository for the described methodology.
Open Datasets | No | The paper defines a synthetic problem matrix M ("We define a set of simple linear learning problems described by an n dimensional square matrix M with {−1, 1} entries.") but gives no access information (link, DOI, citation) for a publicly available dataset or for the generated data; a plausible reconstruction is sketched below the table.
Dataset Splits | No | The paper mentions "k training instances" and "n − k test instances", but it does not specify explicit train/validation/test splits with percentages, sample counts, or citations to predefined splits required for reproducibility.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions algorithms like Gradient Descent and Exponentiated Gradient, but does not name specific software with version numbers (e.g., Python, PyTorch, TensorFlow versions) used in the experiments.
Experiment Setup | No | The paper discusses "optimized learning rates" and 1-norm regularization in the context of algorithm behavior, but does not provide concrete hyperparameter values (e.g., actual learning rates, batch sizes, number of epochs) or detailed training configurations; the sketch below the table uses assumed values for illustration.
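
The synthetic setup referenced in the Open Datasets and Dataset Splits rows can be sketched from the quoted text alone. The snippet below is a minimal reconstruction, not the authors' data: it assumes NumPy, takes a Sylvester-built Hadamard matrix as the {−1, 1} matrix M (a standard hard case for this family of problems, not confirmed by the rows above), and uses made-up values n = 64, k = 8, target column 1, and seed 0.

```python
import numpy as np

def sylvester_hadamard(n: int) -> np.ndarray:
    """n x n Hadamard matrix for n a power of 2, built by the
    Sylvester recursion H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# All concrete values below (n, k, target column, seed) are assumptions,
# not taken from the paper.
n, k = 64, 8
M = sylvester_hadamard(n)      # rows are instances, entries in {-1, +1}
y = M[:, 1]                    # one column of M defines the labels, so the
                               # target weight vector is a sparse basis vector

rng = np.random.default_rng(0)
perm = rng.permutation(n)      # random split: k training rows, n - k test rows
X_train, y_train = M[perm[:k]], y[perm[:k]]
X_test, y_test = M[perm[k:]], y[perm[k:]]
```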
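Likewise, the Gradient Descent versus Exponentiated Gradient comparison mentioned in the Software Dependencies row can be made concrete in a few lines. This is an illustrative sketch, not the authors' protocol: the learning rate, epoch count, and the normalized simplex form of EG are all assumptions, and scipy.linalg.hadamard stands in for the matrix constructed above.

```python
import numpy as np
from scipy.linalg import hadamard

n, k, eta, epochs = 64, 8, 0.01, 2000   # assumed values, not from the paper
M = hadamard(n)                          # instances are the rows of M
y = M[:, 1]                              # labels come from one column of M
rng = np.random.default_rng(0)
perm = rng.permutation(n)
X_train, y_train = M[perm[:k]], y[perm[:k]]
X_test, y_test = M[perm[k:]], y[perm[k:]]

w_gd = np.zeros(n)        # Gradient Descent: squared-Euclidean bias
w_eg = np.ones(n) / n     # Exponentiated Gradient: uniform start on the simplex
for _ in range(epochs):
    for x, t in zip(X_train, y_train):
        w_gd = w_gd - eta * (w_gd @ x - t) * x           # GD step on squared loss
        w_eg = w_eg * np.exp(-eta * (w_eg @ x - t) * x)  # multiplicative EG step
        w_eg = w_eg / w_eg.sum()                         # renormalize to simplex

# On orthogonal (Hadamard) rows, GD converges to the minimum-norm interpolant,
# which predicts 0 on every unseen row, so its test MSE stays near 1; EG's
# entropy/1-norm bias typically recovers the sparse target from far fewer
# examples, which is the contrast the paper is about.
print("GD test MSE:", np.mean((X_test @ w_gd - y_test) ** 2))
print("EG test MSE:", np.mean((X_test @ w_eg - y_test) ** 2))
```

The printed numbers are only meant to make the GD/EG contrast tangible; nothing here reproduces the paper's actual experiments or hyperparameters.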