Solving Separable Nonsmooth Problems Using Frank-Wolfe with Uniform Affine Approximations

Authors: Edward Cheung, Yuying Li

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate experimentally that this carefully selected linear minimization leads to significant improvement over a variety of matrix estimation problems, such as sparse covariance estimation, graph link prediction, and ℓ1-loss matrix completion." (Section 1, Introduction); see also Section 5, "Experimental Results".
Researcher Affiliation | Academia | "Edward Cheung and Yuying Li, Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada. {eycheung, yuying}@uwaterloo.ca"
Pseudocode | Yes | Algorithm 1, "Frank-Wolfe with Uniform Approximations (FWUA)", and Algorithm 2, "update tau" (Section 4).
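For context on the pseudocode row, the following is a minimal sketch of a generic Frank-Wolfe loop, not the paper's FWUA algorithm (which additionally builds a uniform affine approximation of the nonsmooth term). The ℓ1-ball domain, the function names, and the standard step size γ_k = 2/(k+2) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def frank_wolfe_l1(grad_f, x0, radius=1.0, iters=100):
    """Minimize a smooth f over the l1 ball {x : ||x||_1 <= radius}.

    grad_f: callable returning the gradient of f at x.
    This is a generic Frank-Wolfe sketch, not the paper's FWUA method.
    """
    x = x0.copy()
    for k in range(iters):
        g = grad_f(x)
        # Linear minimization oracle over the l1 ball: the minimizer of
        # <g, s> is a signed vertex along the largest-magnitude coordinate.
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])
        # Standard diminishing step size 2 / (k + 2).
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy usage: minimize ||x - b||^2 over the unit l1 ball, where b lies
# inside the ball, so the iterates should approach b.
b = np.array([0.5, -0.2, 0.1])
x_star = frank_wolfe_l1(lambda x: 2.0 * (x - b), np.zeros(3), radius=1.0, iters=500)
```

Each iterate is a convex combination of ℓ1-ball vertices, so feasibility is maintained automatically; this projection-free property is the usual motivation for Frank-Wolfe on matrix domains like the trace-norm ball used in the paper.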
Open Source Code | No | "Due to space limitations, the proofs will be omitted but will be available here: https://arxiv.org/abs/1710.05776" (Section 3.1, footnote 1). There is no explicit statement or link for the code itself.
Open Datasets | Yes | "We consider the Facebook dataset from [Leskovec and Krevl, 2014] which consists of a graph with 4,039 nodes and 88,234 edges..." (Section 5.1) and "The last experiment we consider is matrix completion with an ℓ1 loss function on the MovieLens datasets" (Section 5.2; footnote 3: https://grouplens.org/datasets/movielens/).
Dataset Splits | No | The paper mentions training and testing contexts (e.g., "50% of the entries are observed" for graph link prediction), but does not explicitly describe validation splits or how they were used.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for the experiments.
Software Dependencies | No | The paper mentions "MATLAB notation" for describing matrix dimensions, but does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | For all experiments, the FW-based methods terminate after 1000 iterations, and all other methods use the default stopping criteria suggested by their authors. FWUA is compared with Gen FB [Richard et al., 2012], HCGS [Argyriou et al., 2014], and SCCG [Pierucci et al., 2014]. For each problem instance, the same λ1 value is used by all methods; it is tuned over a grid of parameter values to yield the best test performance for Gen FB. The bound δ for the trace norm is then set to the trace norm of the solution given by Gen FB. For SCCG, the smoothing parameter µ is additionally tuned to yield the smallest average objective value. HCGS sets β(k) = 1/√(k + 1) as suggested by the authors. (Section 5.1)
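The HCGS step-size schedule quoted in the setup above is simple enough to state directly; the snippet below is purely illustrative of that formula, β(k) = 1/√(k + 1), with the function name being my own choice.

```python
import math

def hcgs_beta(k):
    """Step-size schedule beta(k) = 1 / sqrt(k + 1), as quoted for HCGS."""
    return 1.0 / math.sqrt(k + 1)

# The schedule starts at 1 and decays slowly, e.g. beta(0) = 1.0,
# beta(3) = 0.5, beta(99) = 0.1.
schedule = [hcgs_beta(k) for k in range(100)]
```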