Efficient Pareto Manifold Learning with Low-Rank Structure

Authors: Weiyu Chen, James Kwok

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results demonstrate that the proposed approach outperforms state-of-the-art baselines, especially on datasets with a large number of tasks."
Researcher Affiliation | Academia | "Weiyu Chen, James T. Kwok. Department of Computer Science and Engineering, The Hong Kong University of Science and Technology. Correspondence to: Weiyu Chen <wchenbx@cse.ust.hk>."
Pseudocode | Yes | "Algorithm 1 LORPMAN." (A sketch of the underlying low-rank parameterization follows the table.)
Open Source Code | No | The paper provides no statement of, or link to, an open-source implementation of the described method.
Open Datasets | Yes | "Multi-MNIST (Sabour et al., 2017) is a digit classification dataset with two tasks: classification of the top-left digit and classification of the bottom-right digit in each image."
Dataset Splits | Yes | "We tune the hyperparameters according to the HV value on the validation datasets." (An HV sketch follows the table.)
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models or memory sizes).
Software Dependencies | No | The paper does not list the software dependencies (libraries or solvers with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "For LORPMAN, we choose the scaling factor s ∈ {1, 2, 4, 6} and the freeze epoch ∈ {4, 6, 8} based on the validation set. For both datasets, the rank r for all layers is set to 8 and the orthogonal regularization coefficient λ_o is set to 1. The learning rate is set to 1e-3 and the batch size is set to 256."
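The Pseudocode and Experiment Setup rows refer to LORPMAN's low-rank structure: a shared base network is combined with preference-weighted, task-specific low-rank updates, scaled by the factor s and regularized toward orthogonal factors with coefficient λ_o. Below is a minimal PyTorch sketch of one such preference-conditioned linear layer under that reading; the names PrefLowRankLinear and orthogonal_reg are our own illustrative choices, not the authors' Algorithm 1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefLowRankLinear(nn.Module):
    """Linear layer whose weight depends on a task-preference vector:
        W(pref) = W0 + s * sum_i pref[i] * B_i @ A_i
    with rank-r factors A_i, B_i per task (cf. rank r = 8, scaling s)."""

    def __init__(self, in_dim, out_dim, num_tasks, rank=8, s=1.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)  # shared W0 and bias
        self.A = nn.Parameter(0.01 * torch.randn(num_tasks, rank, in_dim))
        self.B = nn.Parameter(torch.zeros(num_tasks, out_dim, rank))  # zero init: start at W0
        self.s = s

    def forward(self, x, pref):
        # pref: (num_tasks,) point on the probability simplex
        delta = torch.einsum("t,tor,tri->oi", pref, self.B, self.A)
        return self.base(x) + self.s * F.linear(x, delta)

def orthogonal_reg(A):
    """Penalty pushing the rank-r rows of each A_i toward orthonormality,
    ||A_i A_i^T - I||_F^2 summed over tasks (weighted by λ_o in the loss)."""
    gram = torch.einsum("tri,tsi->trs", A, A)
    eye = torch.eye(A.shape[1], device=A.device)
    return ((gram - eye) ** 2).sum()
```

In a training loop built on this sketch, one would sample a preference vector per step, add λ_o * orthogonal_reg(layer.A) to the scalarized loss, and stop updating the shared base weights after the chosen freeze epoch.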
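The Dataset Splits row notes that hyperparameters are tuned by the hypervolume (HV) on validation data. For two minimized task losses, HV against a fixed reference point can be computed with a simple sweep; the routine below is a generic sketch, not the paper's evaluation code, and assumes the reference point is dominated by every candidate solution.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a two-objective front (minimization),
    measured against reference point ref = (r1, r2)."""
    # Keep the non-dominated points, sorted by the first objective.
    front, best_f2 = [], float("inf")
    for f1, f2 in sorted(points):
        if f2 < best_f2:
            front.append((f1, f2))
            best_f2 = f2
    # Sweep top-down: each front point contributes one horizontal slab.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# e.g. hypervolume_2d([(0.0, 0.5), (0.5, 0.0)], ref=(1.0, 1.0)) == 0.75
```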