Convex Total Least Squares

Authors: Dmitry Malioutov, Nikolai Slavov

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show both theoretically and experimentally that while the plain nuclear norm relaxation incurs large approximation errors for STLS, the re-weighted nuclear norm approach is very effective, and achieves better accuracy on challenging STLS problems than popular non-convex solvers. Our first experiment considers plain TLS, where we know the optimal solution via the SVD. We evaluate the accuracy of the nuclear norm (NN) and two flavors of the reweighted nuclear norm algorithm: the full adaptive one described in Section 2.1, which we will refer to as (RW-NN), and the simplified approach with fixed α as described in Section 4 (log-det).
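The plain-TLS baseline referenced here has a closed-form optimum obtained from the SVD of the stacked matrix [A, b]. As a point of reference, here is a minimal NumPy sketch of that classical solution (our own illustration; the function name tls_svd and the problem sizes are not from the paper):

```python
import numpy as np

def tls_svd(A, b):
    """Classical total least squares: minimize ||[dA, db]||_F subject to
    (A + dA) x = b + db, solved via the SVD of the stacked matrix [A, b]."""
    Ab = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Ab, full_matrices=False)
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]     # TLS estimate (assumes the last entry is nonzero)

# Small synthetic check with noise in both the regressors and the response
rng = np.random.default_rng(0)
A0 = rng.standard_normal((50, 5))                   # noiseless regressors
x_true = rng.standard_normal(5)
A = A0 + 0.01 * rng.standard_normal((50, 5))        # observed regressors (noisy)
b = A0 @ x_true + 0.01 * rng.standard_normal(50)    # observed response (noisy)
print(np.round(tls_svd(A, b) - x_true, 3))          # estimation error is small
```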
Researcher Affiliation | Collaboration | Dmitry Malioutov (DMALIOUTOV@US.IBM.COM), IBM Research, 1101 Kitchawan Road, Yorktown Heights, NY 10598, USA; Nikolai Slavov (NSLAVOV@ALUM.MIT.EDU), Departments of Physics and Biology, MIT, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
Pseudocode | Yes | Algorithm 1 (ALM for weighted NN-STLS). Input: A, W1, W2, α. Repeat: update D via singular-value soft-thresholding, D^{k+1} = S_{1/µ_k}(W1 Â W2 − (1/µ_k) Λ_2^k); update E as in (15); solve the Sylvester system for Â in (19); update the multipliers Λ_1^{k+1} = Λ_1^k + µ_k (A − Â − E) and Λ_2^{k+1} = Λ_2^k + µ_k (D − W1 Â W2); increase µ_k to µ_{k+1}. Until convergence.
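The D-update in this algorithm is singular-value soft-thresholding, the proximal operator of the nuclear norm, applied with threshold 1/µ_k. A minimal NumPy sketch of that operator alone (our own illustration; the E-update and the Sylvester solve depend on equations (15) and (19) of the paper and are not reproduced here):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding S_tau(M): shrink each singular value
    of M by tau and truncate at zero (prox of tau * nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Example: thresholding a noisy version of a rank-4 matrix drives the small
# (noise-dominated) singular values to zero, so the result is nearly low rank.
rng = np.random.default_rng(1)
L = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))
M = L + 0.1 * rng.standard_normal((30, 30))
print(np.linalg.matrix_rank(svt(M, 2.0)))
```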
Open Source Code | No | The paper does not provide concrete access to open-source code for the methodology described. It does not include a specific repository link or an explicit code release statement. It mentions that 'A faster algorithm that avoids the need for a binary search will be presented in a future publication', which implies future work rather than a current release.
Open Datasets | No | The paper mentions 'We simulate random i.i.d. Gaussian matrices A of size N × N' and 'We use experimentally measured levels of 14 genes... across 6 exponentially growing yeast cultures'. Neither of these explicitly provides concrete access information (a specific link, DOI, repository name, or formal citation with authors/year) for a publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | We simulate random i.i.d. Gaussian matrices A of size N × N, use a maximum of 3 re-weightings in RW-NN, and update the ALM parameter µ_k as µ_k = 1.05^k.
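A minimal sketch of this synthetic setup, under stated assumptions (the matrix size N below is an illustrative choice, since the quoted setup does not fix it, and the RW-NN solver itself is not reproduced here):

```python
import numpy as np

N = 20                              # illustrative size; N is not fixed in the quoted setup
MAX_REWEIGHTINGS = 3                # at most 3 re-weightings in RW-NN
rng = np.random.default_rng(42)

A = rng.standard_normal((N, N))     # random i.i.d. Gaussian data matrix

def mu(k):
    """ALM penalty parameter schedule mu_k = 1.05**k."""
    return 1.05 ** k

print(A.shape, MAX_REWEIGHTINGS, [round(mu(k), 3) for k in range(5)])
```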