Chunk Dynamic Updating for Group Lasso with ODEs

Authors: Diyang Li, Bin Gu (pp. 7408-7416)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results not only confirm the effectiveness of the proposed algorithm for chunk incremental and decremental learning, but also validate its efficiency compared to the existing offline and online algorithms.
Researcher Affiliation | Academia | Diyang Li¹, Bin Gu²,¹; ¹School of Computer & Software, Nanjing University of Information Science & Technology, P.R. China; ²MBZUAI, United Arab Emirates
Pseudocode | Yes | Algorithm 1: Chunk Incremental (Decremental) Group Lasso
Open Source Code | No | The paper mentions implementing their algorithm but only provides links to the code for third-party optimizers (FISTA) and other existing online frameworks (DA-GL), not to their own CIGL/CDGL implementation.
Open Datasets | Yes | Correctness. To assess the validity of our derivation, we first employ several well-known datasets, e.g., Boston house prices (Harrison Jr and Rubinfeld 1978), to directly compare the numerical solutions w0, w1 and w2, which correspond to batch training on 5% of the samples (i.e., the initial solution fed into the algorithm), incremental training (i.e., adding a single sample at a time), and batch training on 100% of the samples, respectively (the second sketch after this table mirrors this comparison).
Dataset Splits | Yes | We select 25% of the samples in the training set to perform chunk incremental updating. Moreover, we use the well-trained model to restore a degenerate model by chunk decremental updating of 25% of the samples in the trained model. For online DA-GL, we train it by successive addition of single data points to simulate chunk updating. Leave-one-out cross validation.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'Python 3.7' for implementation, and notes the use of a 'FISTA optimiser' and an 'online learning framework for group Lasso in MATLAB code', but it does not provide version numbers for these key libraries or software beyond the Python interpreter itself.
Experiment Setup | Yes | Different values of α are chosen to justify the effectiveness of the algorithm under various sparsity patterns. For online DA-GL, we train it by successive addition of single data points to simulate chunk updating. We select 25% of the samples in the training set to perform chunk incremental updating. Specifically, we adopt a widely-used batch training algorithm for group Lasso using the FISTA optimiser (Beck and Teboulle 2009) with a gradient-based adaptive restarting scheme (a minimal sketch of such a baseline follows this table).
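
For reference, below is a minimal sketch of the kind of batch baseline the setup describes: group Lasso solved with a FISTA-style proximal gradient step (block soft-thresholding) and a gradient-based adaptive restart in the spirit of O'Donoghue and Candès. This is not the authors' CIGL/CDGL code; the function names (group_soft_threshold, fista_group_lasso), the 1/(2n) loss scaling, the stopping rule, and all default parameters are assumptions made for illustration.

```python
import numpy as np

def group_soft_threshold(v, thresh):
    """Block soft-thresholding: the proximal operator of thresh * ||v||_2."""
    norm = np.linalg.norm(v)
    if norm <= thresh:
        return np.zeros_like(v)
    return (1.0 - thresh / norm) * v

def fista_group_lasso(X, y, groups, alpha, n_iter=1000, tol=1e-8, w_init=None):
    """Group Lasso, (1/2n)||y - Xw||^2 + alpha * sum_g ||w_g||_2, solved with
    FISTA plus a gradient-based adaptive restart.
    `groups` is a list of index arrays, one per non-overlapping group."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2          # 1 / Lipschitz constant of the smooth part
    w = np.zeros(d) if w_init is None else np.asarray(w_init, dtype=float).copy()
    z, t = w.copy(), 1.0                          # extrapolated point and momentum
    for _ in range(n_iter):
        grad = X.T @ (X @ z - y) / n
        v = z - step * grad
        w_new = np.empty_like(w)
        for g in groups:                          # proximal step, one block at a time
            w_new[g] = group_soft_threshold(v[g], step * alpha)
        if (z - w_new) @ (w_new - w) > 0:         # adaptive restart: drop momentum
            t = 1.0
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)
        if np.linalg.norm(w_new - w) <= tol * max(1.0, np.linalg.norm(w)):
            return w_new
        w, t = w_new, t_new
    return w
```

Any off-the-shelf group Lasso solver with the same interface could be substituted; the sketch only pins down an interface for the protocol sketch below.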
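Building on that, the following is a hypothetical sketch of the evaluation shape described under Open Datasets and Dataset Splits: an initial solution w0 from 5% of the samples, a solution w1 reached by adding the remaining data in 25% chunks, and a full-batch reference w2. The paper's CIGL propagates w0 along an ODE as each chunk arrives; this sketch instead re-solves from a warm start, so it mirrors the comparison, not the authors' update rule. The synthetic data, group structure, and alpha = 0.1 are placeholders (the paper uses, e.g., Boston house prices).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 20
groups = np.array_split(np.arange(d), 5)          # five non-overlapping groups of four features
w_true = np.concatenate([rng.standard_normal(8), np.zeros(d - 8)])
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)
alpha = 0.1

# w0: batch training on the initial 5% of the samples (the solution fed into the algorithm)
n0 = int(0.05 * n)
w0 = fista_group_lasso(X[:n0], y[:n0], groups, alpha)

# w1: chunk incremental updating simulated by adding 25% chunks and re-solving
# from a warm start; the paper's CIGL replaces each re-solve with an ODE-based update
w1, end = w0, n0
while end < n:
    end = min(end + int(0.25 * n), n)
    w1 = fista_group_lasso(X[:end], y[:end], groups, alpha, w_init=w1)

# w2: batch training on 100% of the samples, used as the correctness reference
w2 = fista_group_lasso(X, y, groups, alpha)
print("max |w1 - w2| =", np.abs(w1 - w2).max())   # should shrink to the solver tolerance
```

Because the final warm-started re-solve sees all samples, the gap between w1 and w2 collapses to the solver tolerance by construction; the paper's correctness check verifies that its ODE-based updates reach the same point without any re-solving.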