Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Multi-Objective Online Learning

Authors: Jiyan Jiang, Wenpeng Zhang, Shiji Zhou, Lihong Gu, Xiaodong Zeng, Wenwu Zhu

ICLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on several real-world datasets verify the effectiveness of the proposed algorithm.
Researcher Affiliation | Collaboration | Jiyan Jiang (Tsinghua University, Beijing, China); Wenpeng Zhang (Ant Group, Beijing, China); Shiji Zhou (Tsinghua University, Beijing, China); Lihong Gu and Xiaodong Zeng (Ant Group, Hangzhou, China); Wenwu Zhu (Tsinghua University, Beijing, China). Email addresses redacted.
Pseudocode | Yes | Algorithm 1: Doubly Regularized Online Mirror Multiple Descent (DR-OMMD).
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is open-source or publicly available.
Open Datasets | Yes | We use two large-scale online benchmark datasets. (i) protein is a bioinformatics dataset for protein type classification (Wang, 2002), which has 17 thousand instances with 357 features. (ii) covtype is a biological dataset collected from a non-stationary environment for forest cover type prediction (Blackard & Dean, 1999), which has 50 thousand instances with 54 features. We also use MultiMNIST (Sabour et al., 2017).
Dataset Splits | No | The paper does not provide specific numerical train/validation/test dataset splits. It states: 'In the online setting, samples arrive in a sequential manner, which is different from offline experiments where sample batches are randomly sampled from the training set.'
Hardware Specification | Yes | All runs are deployed on Xeon(R) E5-2699 @ 2.2GHz.
Software Dependencies | No | The paper mentions software components such as LeNet and Adam but does not specify version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The learning rates are decided by a grid search over {0.1, 0.2, ..., 3.0}. For DR-OMMD, the parameter α_t is simply set as 0.1. For linearization, we examine different weights (0.25, 0.75), (0.5, 0.5), and (0.75, 0.25). In another setting, learning rates in all methods are selected via grid search over {0.0001, 0.001, 0.01, 0.1}; for DR-OMMD, α_t is set according to Theorem 1, and the initial weights are simply set as λ_0 = (0.5, 0.5).
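The learning-rate selection reported in the Experiment Setup row can be sketched as a plain grid search. The `grid_search` helper and the quadratic toy loss below are hypothetical illustrations, not the paper's DR-OMMD code; only the candidate grids come from the reported setup.

```python
def grid_search(candidates, evaluate):
    """Return the candidate learning rate with the lowest evaluation loss."""
    best_lr, best_loss = None, float("inf")
    for lr in candidates:
        loss = evaluate(lr)
        if loss < best_loss:
            best_lr, best_loss = lr, loss
    return best_lr

# Grid from the first reported setting: {0.1, 0.2, ..., 3.0}.
grid = [round(0.1 * k, 1) for k in range(1, 31)]

# Hypothetical stand-in for a validation run: a loss minimized at lr = 1.5.
toy_loss = lambda lr: (lr - 1.5) ** 2

best = grid_search(grid, toy_loss)
print(best)  # 1.5
```

In practice `evaluate` would run the online algorithm over the data stream once per candidate and return its cumulative loss; the second reported setting would use the coarser grid {0.0001, 0.001, 0.01, 0.1} instead.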