Split LBI: An Iterative Regularization Path with Structural Sparsity

Authors: Chendi Huang, Xinwei Sun, Jiechao Xiong, Yuan Yao

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper is experimental: 'The utility and benefit of the algorithm are illustrated by applications on both traditional image denoising and a novel example on partial order ranking.' Example 1 considers two problems, the standard Lasso and the 1-D fused Lasso: 'In both cases, set n = p = 50, and generate X ∈ R^(n×p) denoting n i.i.d. samples from N(0, I_p), ϵ ∼ N(0, I_n), y = Xβ + ϵ.' Table 1 reports mean AUC (with standard deviation) comparisons in which Split LBI (1.4) beats genlasso (left: the standard Lasso; right: the 1-D fused Lasso of Example 1). Figure 2 shows image denoising results by Split LBI (left) and that the AUC of Split LBI (blue solid line) increases and exceeds that of genlasso (dashed red line) as ν increases (right). A data-generation sketch for Example 1 is given after this table.
Researcher Affiliation | Academia | Affiliations listed: Peking University; Hong Kong University of Science and Technology.
Pseudocode | No | The iterative algorithm is described by equations (1.4a), (1.4b), and (1.4c), but it is not formatted as a structured pseudocode block or explicitly labeled as 'Algorithm'. (A hedged sketch of such an iteration is given after this table.)
Open Source Code | No | The paper mentions that the 'R package genlasso can be found in CRAN repository' in relation to a comparative method, but there is no statement or link indicating that the authors' own code for Split LBI is open-source or publicly available.
Open Datasets | No | For Example 1, data is synthetically generated: 'set n = p = 50, and generate X ∈ R^(n×p) denoting n i.i.d. samples from N(0, I_p), ϵ ∼ N(0, I_n), y = Xβ + ϵ.' For image denoising, 'The original image is resized to 50×50' and 'Some noise is added'. For partial order ranking, data was 'collected n = 134 pairwise comparison game results ... from various important championship', but no information about public access (link, citation, or repository) is provided for this collected data.
Dataset Splits | No | The paper mentions conducting '100 independent experiments' and reporting 'mean AUC' but does not specify a train/validation/test split for a persistent dataset. The experiments appear to involve generating new data for each run, or using the collected data without a traditional split between model development and evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to conduct the experiments.
Software Dependencies | No | The paper mentions the 'R package genlasso' as a comparative tool but does not specify its version. It does not list any other software dependencies with version numbers for the authors' own implementation.
Experiment Setup | Yes | Parameter κ should be large enough according to (2.12), and the step size α should be small enough to ensure the stability of Split LBI. Once ν and κ are determined, α can be set as α = ν/(κ(1 + νΛ_X² + Λ_D²)) (see (C.6) in the Supplementary Information). For Example 1: 'κ = 200 and ν ∈ {1, 5, 10}'. For image denoising: 'Set ν = 180, κ = 100. ... Here ν ∈ {1, 20, 40, 60, ..., 300}'. For partial order ranking: 'ν = 1 and κ = 100'. (A sketch computing this step size is given after this table.)
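
As a concrete illustration of the Example 1 setup quoted in the Research Type row, the Python sketch below generates synthetic data of the stated form (n = p = 50, rows of X drawn from N(0, I_p), ϵ ∼ N(0, I_n), y = Xβ + ϵ). The sparsity pattern and signal strength of the true β are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def make_example1_data(n=50, p=50, n_nonzero=10, signal=2.0, seed=0):
        """Synthetic data in the spirit of Example 1: X has n i.i.d. rows from
        N(0, I_p), noise eps ~ N(0, I_n), and y = X beta + eps. The support size
        and signal strength of beta are illustrative assumptions."""
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((n, p))          # n i.i.d. samples from N(0, I_p)
        beta = np.zeros(p)
        support = rng.choice(p, size=n_nonzero, replace=False)
        beta[support] = signal * rng.choice([-1.0, 1.0], size=n_nonzero)
        eps = rng.standard_normal(n)             # eps ~ N(0, I_n)
        y = X @ beta + eps
        return X, y, beta

For the standard Lasso the structural matrix D is the identity; for the 1-D fused Lasso it is the first-order difference matrix, and the true β would typically be chosen piecewise constant instead of sparse.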
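
The Pseudocode row notes that the iteration exists only as equations (1.4a)–(1.4c). The following sketch shows one common way to write a Split-LBI-style update: gradient steps on the augmented loss ℓ(β, γ) = ‖y − Xβ‖²/(2n) + ‖Dβ − γ‖²/(2ν), a linearized Bregman step on an auxiliary variable z for the γ block, and soft-thresholding with threshold 1. This is a hedged reconstruction for illustration, not the authors' code; the exact form and ordering of the updates should be checked against (1.4a)–(1.4c) in the paper.

    import numpy as np

    def soft_threshold(z, t=1.0):
        """Element-wise soft-thresholding (proximal map of the l1 norm)."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def split_lbi(X, y, D, nu, kappa, alpha, n_iter=2000):
        """Sketch of a Split-LBI-style iteration (assumed form, cf. (1.4a)-(1.4c)):
        beta takes gradient steps on the augmented loss
            l(beta, gamma) = ||y - X beta||^2 / (2n) + ||D beta - gamma||^2 / (2 nu),
        z accumulates the gradient with respect to gamma, and gamma = kappa * S(z, 1)."""
        n, p = X.shape
        m = D.shape[0]
        beta, gamma, z = np.zeros(p), np.zeros(m), np.zeros(m)
        path = []  # regularization path: (beta_k, gamma_k) after each iteration
        for _ in range(n_iter):
            grad_beta = X.T @ (X @ beta - y) / n + D.T @ (D @ beta - gamma) / nu
            grad_gamma = (gamma - D @ beta) / nu
            beta = beta - kappa * alpha * grad_beta
            z = z - alpha * grad_gamma
            gamma = kappa * soft_threshold(z, 1.0)
            path.append((beta.copy(), gamma.copy()))
        return path

Along the returned path, the support of γ provides the structural sparsity estimate (nonzero entries of β for the standard Lasso with D = I, change points of β for the 1-D fused Lasso), which is the kind of support-recovery quantity the quoted AUC comparisons measure.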
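
Finally, the Experiment Setup row quotes the step-size rule α = ν/(κ(1 + νΛ_X² + Λ_D²)). Below is a minimal sketch of that computation, under the assumption that Λ_X and Λ_D denote the largest singular values of X (optionally scaled by 1/√n to match the 1/(2n) factor in the loss) and of D; the exact definitions should be taken from (C.6) in the Supplementary Information.

    import numpy as np

    def split_lbi_step_size(X, D, nu, kappa, scale_X_by_sqrt_n=True):
        """alpha = nu / (kappa * (1 + nu * Lambda_X**2 + Lambda_D**2)).
        Assumption: Lambda_X and Lambda_D are the largest singular values of X
        (optionally X / sqrt(n)) and of D, respectively."""
        n = X.shape[0]
        X_eff = X / np.sqrt(n) if scale_X_by_sqrt_n else X
        Lambda_X = np.linalg.norm(X_eff, ord=2)   # largest singular value
        Lambda_D = np.linalg.norm(D, ord=2)
        return nu / (kappa * (1.0 + nu * Lambda_X**2 + Lambda_D**2))

For instance, with the quoted Example 1 settings (κ = 200, ν = 1) and D = np.eye(p) for the standard Lasso, the resulting α can be passed directly to the iteration sketch above.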