Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

The limits of squared Euclidean distance regularization

Authors: Michał Dereziński, Manfred K. Warmuth

NeurIPS 2014 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show experimentally, that this distance is √n on average.
Researcher Affiliation | Academia | Michał Dereziński, Computer Science Department, University of California, Santa Cruz, CA 95064, U.S.A. (EMAIL); Manfred K. Warmuth, Computer Science Department, University of California, Santa Cruz, CA 96064, U.S.A. (EMAIL)
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | No | No statement regarding the release of open-source code, and no link to a code repository for the described methodology, was found in the paper.
Open Datasets | No | The paper defines a synthetic problem matrix M ("We define a set of simple linear learning problems described by an n dimensional square matrix M with {−1, 1} entries.") but does not provide access information (link, DOI, citation) for a publicly available dataset or for the generated data.
Dataset Splits | No | The paper mentions "k training instances" and "n-k test instances", but it does not specify explicit train/validation/test splits with percentages, sample counts, or citations to predefined splits.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running the experiments were mentioned in the paper.
Software Dependencies | No | The paper mentions algorithms such as Gradient Descent and Exponentiated Gradient, but does not name specific software with version numbers (e.g., Python, PyTorch, TensorFlow versions) used in the experiments.
Experiment Setup | No | The paper discusses "optimized learning rates" and "1-norm regularization" in the context of algorithm behavior, but does not provide concrete hyperparameter values (e.g., actual learning rates, batch sizes, number of epochs) or detailed training configurations required to reproduce the experiments.
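Several rows above note what the paper leaves unspecified: a synthetic n-dimensional square {−1, 1} problem matrix, k training instances, and the Gradient Descent and Exponentiated Gradient algorithms, all without code, learning rates, or splits. A minimal sketch of what such a setup could look like is below. The matrix draw, the sparse target, the learning rates, and the step counts are all illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: an n x n {-1, 1} problem matrix, k training rows.
n, k = 32, 16
M = rng.choice([-1.0, 1.0], size=(n, n))

# Assume the target is a single coordinate (sparse comparator); the paper's
# actual construction is not specified here.
w_true = np.zeros(n)
w_true[0] = 1.0
X, y = M[:k], M[:k] @ w_true  # k training instances and their labels

def gd(X, y, eta=0.01, steps=3000):
    """Gradient Descent on the squared loss 0.5 * ||X w - y||^2."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= eta * X.T @ (X @ w - y)   # additive update
    return w

def eg(X, y, eta=0.05, steps=2000):
    """Exponentiated Gradient: multiplicative update, weights kept on the simplex."""
    w = np.ones(X.shape[1]) / X.shape[1]
    for _ in range(steps):
        g = X.T @ (X @ w - y)
        w = w * np.exp(-eta * g)       # multiplicative update
        w /= w.sum()                   # renormalize onto the simplex
    return w

w_gd = gd(X, y)
w_eg = eg(X, y)
```

The contrast between the two update rules (additive vs. multiplicative) is the axis along which the paper compares squared-Euclidean-distance regularization with 1-norm-style regularization; the learning rates above are merely small enough for stable convergence on this random instance, not "optimized" in the paper's sense.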