Tight convex relaxations for sparse matrix factorization

Authors: Emile Richard, Guillaume R. Obozinski, Jean-Philippe Vert

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we report experimental results to assess the performance of sparse low-rank matrix estimation using different techniques. We start in Section 6.1 with simulations that confirm and illustrate the theoretical results on statistical dimension of Ω_{k,q} and assess how they generalize to matrices with (k, q)-rank larger than 1. In Section 6.2 we compare several techniques for sparse PCA on simulated data.
Researcher Affiliation | Academia | Emile Richard, Electrical Engineering, Stanford University; Guillaume Obozinski, Université Paris-Est, École des Ponts ParisTech; Jean-Philippe Vert, MINES ParisTech, Institut Curie
Pseudocode | No | The paper describes the 'active set algorithm' in Section 5, but it is presented as a textual description rather than a formally structured pseudocode block or algorithm figure.
Open Source Code | No | The paper does not contain an explicit statement about releasing the source code for the methodology described, nor does it provide a direct link to a code repository.
Open Datasets | No | The paper uses simulated data: 'The observed sample consists of n i.i.d. random vectors generated according to N(0, Σ* + σ²Id_p), where (k, k)-rank(Σ*) = 3. The matrix Σ* is formed by adding 3 blocks of rank 1...'. There is no concrete access information for a publicly available or open dataset provided. (A sketch based on this description appears after the table.)
Dataset Splits | Yes | The hyperparameters were chosen by holding out one portion of the training data as a validation set and selecting the parameter whose estimator best approximates the validation set's empirical covariance (see the selection sketch after the table).
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper mentions some experimental settings, such as the noise level σ = 0.8, the number of variables p = 200, and the number of observed points n = 80 (these values are used in the sketch below). It also states that hyperparameters were chosen using a validation set and mentions λ and µ in the objective functions, but it does not provide concrete values for these or for other common hyperparameters such as learning rate, batch size, or optimizer settings.
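
A minimal sketch of the simulated-data setup quoted in the Open Datasets and Experiment Setup rows, using the reported values p = 200, n = 80 and σ = 0.8. The block size k, the random choice of supports, and the unit-norm rank-1 factors are assumptions; the paper's exact construction of the three blocks is not reproduced here.

    # Hypothetical reconstruction of the simulated covariance model; only
    # p, n and sigma come from the paper, everything else is assumed.
    import numpy as np

    rng = np.random.default_rng(0)

    p, n, sigma = 200, 80, 0.8   # values reported in the paper
    k = 10                       # block sparsity level: an assumed value

    # Sigma* is formed by adding 3 rank-1 blocks, each supported on k coordinates.
    Sigma_star = np.zeros((p, p))
    for _ in range(3):
        support = rng.choice(p, size=k, replace=False)
        a = np.zeros(p)
        a[support] = rng.standard_normal(k)
        a /= np.linalg.norm(a)
        Sigma_star += np.outer(a, a)   # k-sparse rank-1 block

    # n i.i.d. observations drawn from N(0, Sigma* + sigma^2 * Id_p).
    X = rng.multivariate_normal(np.zeros(p), Sigma_star + sigma**2 * np.eye(p), size=n)

    # Empirical covariance handed to the compared sparse low-rank estimators.
    S_hat = X.T @ X / n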
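
And a sketch of the hold-out selection procedure described in the Dataset Splits row: part of the training sample is held out, and the regularization parameter whose fitted estimator is closest (in Frobenius norm) to the held-out empirical covariance is retained. The function name fit_estimator, the candidate grid, and the 25% validation fraction are hypothetical stand-ins, not details taken from the paper.

    # Hypothetical hold-out selection of a regularization parameter.
    import numpy as np

    def select_lambda(X, fit_estimator, lambdas, val_fraction=0.25, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        idx = rng.permutation(n)
        n_val = int(val_fraction * n)
        X_val, X_tr = X[idx[:n_val]], X[idx[n_val:]]

        S_tr = X_tr.T @ X_tr / X_tr.shape[0]      # training empirical covariance
        S_val = X_val.T @ X_val / X_val.shape[0]  # validation empirical covariance

        best_lam, best_err = None, np.inf
        for lam in lambdas:
            Sigma_hat = fit_estimator(S_tr, lam)  # estimator fitted at this lambda
            err = np.linalg.norm(Sigma_hat - S_val, "fro")
            if err < best_err:
                best_lam, best_err = lam, err
        return best_lam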