Online Optimization for Max-Norm Regularization

Authors: Jie Shen, Huan Xu, Ping Li

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we report some simulation results on synthetic data to demonstrate the effectiveness and robustness of our online max-norm regularized matrix decomposition (OMRMD) algorithm."
Researcher Affiliation | Academia | Jie Shen (Dept. of Computer Science, Rutgers University, Piscataway, NJ 08854, js2007@rutgers.edu); Huan Xu (Dept. of Mechanical Engineering, National University of Singapore, Singapore 117575, mpexuh@nus.edu.sg); Ping Li (Dept. of Statistics and Dept. of Computer Science, Rutgers University, pingli@stat.rutgers.edu)
Pseudocode | Yes | Algorithm 1: Online Max-Norm Regularized Matrix Decomposition
Open Source Code | No | The paper neither provides concrete access to source code for the described methodology nor explicitly states that code is available.
Open Datasets | No | The simulation data are generated by a procedure similar to that of [6]. The paper describes how the synthetic data are generated, but it provides no access information (link, DOI, repository, or explicit public dataset name with attribution) for a pre-existing publicly available dataset.
Dataset Splits | No | The paper mentions a total of n = 5000 samples but provides no dataset split information (exact percentages, sample counts, or a splitting methodology) needed to reproduce a partition into training, validation, and test sets.
Hardware Specification | No | The paper does not report the hardware used to run its experiments (exact CPU/GPU models, processor types, or memory amounts).
Software Dependencies | No | The paper does not name the ancillary software (e.g., libraries or solvers with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "We set the ambient dimension p = 400 and the total number of samples n = 5000 unless otherwise specified. We fix the tunable parameter λ1 = λ2 = 1/√p, and use default parameters for all baseline algorithms we compare with."
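The quoted experiment setup can be sketched in code. This is a minimal, assumption-laden illustration: the paper only states p, n, and λ1 = λ2 = 1/√p, and says the data follow a procedure similar to [6]; the "low-rank plus sparse corruption" generation below, along with the rank and corruption level, are hypothetical choices, not the authors' exact protocol.

```python
import numpy as np

# Parameters quoted in the paper's experiment setup.
p = 400            # ambient dimension
n = 5000           # total number of samples
# Assumed values (NOT stated in the excerpt above; illustrative only).
rank = 10          # intrinsic rank of the clean data
corruption = 0.1   # fraction of entries hit by sparse noise

rng = np.random.default_rng(0)

# Clean low-rank samples: each column lies in the span of a fixed basis U.
U = rng.normal(size=(p, rank))
V = rng.normal(size=(rank, n))
Z_clean = U @ V

# Sparse corruption: a random subset of entries receives large uniform noise.
mask = rng.random((p, n)) < corruption
E = np.where(mask, rng.uniform(-10.0, 10.0, size=(p, n)), 0.0)
Z = Z_clean + E

# Regularization parameters as quoted: lambda1 = lambda2 = 1 / sqrt(p).
lam1 = lam2 = 1.0 / np.sqrt(p)
```

With p = 400 this gives λ1 = λ2 = 0.05; the columns of Z would then be fed to the online algorithm one sample at a time.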