Multi-Objective Deep Learning with Adaptive Reference Vectors

Authors: Weiyu Chen, James Kwok

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on an extensive set of learning scenarios demonstrate the superiority of the proposed algorithm over the state-of-the-art." "4 Experiments: In this section, extensive experiments are performed, including synthetic problems (Section 4.1), multi-task learning (Section 4.2), accuracy-fairness trade-off (Section 4.3), and usage on larger networks (Section 4.4). Finally, ablation study is presented in Section 4.5."
Researcher Affiliation | Academia | "Weiyu Chen, James T. Kwok. Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong. {wchenbx, jamesk}@cse.ust.hk"
Pseudocode | Yes | "Algorithm 1: Gradient-based Multi-Objective Optimization with Adaptive Reference vectors (GMOOAR)."
Open Source Code | No | The paper does not contain an explicit statement about open-sourcing its code, nor does it provide a link to a code repository for the described methodology.
Open Datasets | Yes | "In this experiment, we use three benchmark datasets from [31]: Multi-MNIST, Multi-Fashion, and Multi-Fashion+MNIST. [...] we aim to achieve both high accuracy and fairness on three tabular datasets: Adult [16], Compass [1], and Default [51]. [...] selected from the 40 tasks in CelebA [35]."
Dataset Splits | Yes | "They are evaluated on the validation set every 5 epochs. We only keep the solutions of iteration k_best as the final solution set, where k_best is the iteration that yields the solution set with the largest validation HV."
Hardware Specification | Yes | "All experiments are conducted on an RTX-2080Ti with 11GB memory."
Software Dependencies | No | The paper mentions using a 'neural network' and 'LeNet' but does not specify any software frameworks (such as PyTorch or TensorFlow), their version numbers, or any other libraries with versions.
Experiment Setup | Yes | "As in [40], a neural network (with 2 hidden layers, each with 20 units) is used. Following common practice [44], we obtain a set of n solutions in each iteration (with n = 15 in all experiments). For EPO, PHN-LS, PHN-EPO and COSMOS, we generate reference vectors following the strategy in [44]. For GMOOAR, the reference vectors are initialized randomly."
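
The model-selection rule quoted under Dataset Splits (evaluate on the validation set every 5 epochs and keep the iteration whose solution set has the largest validation hypervolume, HV) can be made concrete with a short sketch. The snippet below is an illustrative reconstruction, not code from the paper: it assumes two minimization objectives, a fixed reference point, and hypothetical helper names (hypervolume_2d, select_best_iteration).

```python
import numpy as np

def hypervolume_2d(points, ref_point):
    """Hypervolume dominated by a set of 2-objective points (minimization)
    with respect to a reference point; smaller objective values are better."""
    pts = np.asarray(points, dtype=float)
    # Keep only points that strictly dominate the reference point.
    pts = pts[np.all(pts < np.asarray(ref_point), axis=1)]
    if len(pts) == 0:
        return 0.0
    # Sweep in order of the first objective and accumulate dominated rectangles.
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in pts:
        if f2 < prev_f2:
            hv += (ref_point[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def select_best_iteration(val_losses_per_iter, ref_point=(1.0, 1.0)):
    """Keep the iteration whose n validation solutions give the largest HV.
    val_losses_per_iter[k] holds the (loss_1, loss_2) pairs at iteration k."""
    hvs = [hypervolume_2d(sols, ref_point) for sols in val_losses_per_iter]
    k_best = int(np.argmax(hvs))
    return k_best, hvs[k_best]
```

In this sketch, the final solution set would simply be the snapshot saved at k_best, matching the quoted rule; the choice of reference point is not stated in the excerpt and is a placeholder here.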
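The Experiment Setup row also pins down the synthetic-problem architecture (2 hidden layers of 20 units each) and the use of n = 15 randomly initialized reference vectors for GMOOAR. The sketch below shows one way such a setup could look; PyTorch, ReLU activations, and Dirichlet sampling of the reference vectors on the simplex are assumptions made for illustration, since the paper specifies none of them (see the Software Dependencies row above).

```python
import numpy as np
import torch.nn as nn

class TwoLayerMLP(nn.Module):
    """Small network matching the quoted setup: 2 hidden layers, 20 units each.
    The ReLU activation is an assumption; the paper does not state one."""
    def __init__(self, in_dim, out_dim, hidden=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def random_reference_vectors(n=15, m=2, seed=0):
    """Random initialization of n reference vectors for m objectives:
    non-negative weights summing to 1 (uniform on the simplex via Dirichlet)."""
    rng = np.random.default_rng(seed)
    return rng.dirichlet(np.ones(m), size=n)  # shape (n, m)
```

For a bi-objective problem, random_reference_vectors() returns a (15, 2) array of weights, one vector per solution, which would then be adapted during training as the GMOOAR algorithm prescribes.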