Pareto Multi-Task Learning

Authors: Xi Lin, Hui-Ling Zhen, Zhenhua Li, Qing-Fu Zhang, Sam Kwong

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results confirm that the proposed algorithm can generate well-representative solutions and outperform some state-of-the-art algorithms on many multi-task learning applications." |
| Researcher Affiliation | Collaboration | City University of Hong Kong; Nanjing University of Aeronautics and Astronautics |
| Pseudocode | Yes | Algorithm 1: Pareto MTL Algorithm (a minimal sketch of its descent-direction subproblem follows this table) |
| Open Source Code | Yes | "The code is available at: https://github.com/Xi-L/ParetoMTL" |
| Open Datasets | Yes | MultiMNIST dataset [31]... original MNIST dataset [32]... Fashion-MNIST items [33]... ApolloScape autonomous driving dataset [37, 38] (see the dataset-construction sketch below) |
| Dataset Splits | No | The paper uses several datasets but does not explicitly state train/validation/test splits (percentages, sample counts, or citations to standard splits) needed for reproducibility. |
| Hardware Specification | No | The paper does not report the hardware used to run the experiments (GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper does not pin version numbers for its software dependencies (programming language, libraries, or frameworks). |
| Experiment Setup | No | The paper names the network architectures (LeNet, ResNet18) but omits hyperparameters such as learning rate, batch size, number of epochs, and optimizer settings (an assumed configuration is sketched below). |
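
For context on the Pseudocode row: Algorithm 1 of the paper decomposes the multi-task loss space with a set of preference vectors and, for each preference-restricted subproblem, finds a descent direction from the task gradients together with the gradients of the active subregion constraints. Below is a minimal NumPy/SciPy sketch of that subproblem, not the authors' implementation; the function names, the activation threshold `eps`, and the use of SLSQP for the min-norm step are assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def min_norm_element(G):
    """Find alpha in the simplex minimizing ||alpha @ G||^2.

    G: (n, d) array whose rows are task or constraint gradients.
    Returns the convex-combination weights alpha.
    """
    n = G.shape[0]
    GGt = G @ G.T  # Gram matrix of the gradients

    cons = ({"type": "eq", "fun": lambda a: a.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    res = minimize(lambda a: a @ GGt @ a, np.full(n, 1.0 / n),
                   bounds=bounds, constraints=cons)  # SLSQP under the hood
    return res.x


def pareto_mtl_direction(task_grads, losses, prefs, k, eps=1e-3):
    """Descent direction for the k-th preference subproblem (sketch).

    task_grads: (m, d) per-task gradients at the current parameters.
    losses:     (m,)   current task losses F(theta).
    prefs:      (K, m) unit preference vectors decomposing the loss space.
    The subregion constraints are G_j(theta) = (u_j - u_k)^T F(theta) <= 0;
    gradients of (nearly) active constraints join the min-norm set.
    """
    G = [task_grads]  # start with the raw task gradients
    for j in range(prefs.shape[0]):
        if j == k:
            continue
        w = prefs[j] - prefs[k]
        if w @ losses > -eps:  # constraint active: solution drifting out of region
            G.append((w @ task_grads)[None, :])  # gradient of G_j w.r.t. theta
    G = np.vstack(G)
    alpha = min_norm_element(G)
    return alpha @ G  # step against this: theta <- theta - lr * d
```

The weights returned by `min_norm_element` play the role of the dual variables in the paper's subproblem; any quadratic-programming solver could replace SLSQP here.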
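For the Open Datasets row: MultiMNIST combines two MNIST digits per image, one in the top-left and one in the bottom-right, yielding two simultaneous classification tasks. A hypothetical construction sketch follows; the 36x36 canvas and the per-pixel maximum merge are assumptions about the usual construction, not details quoted from the paper.

```python
import numpy as np


def overlay_digits(img_tl, img_br, canvas=36):
    """Build one MultiMNIST-style sample from two 28x28 MNIST digits.

    img_tl goes in the top-left corner and img_br in the bottom-right
    corner of a larger canvas; overlapping pixels are merged with a
    per-pixel maximum.
    """
    out = np.zeros((canvas, canvas), dtype=np.float32)
    out[:28, :28] = np.maximum(out[:28, :28], img_tl)
    out[-28:, -28:] = np.maximum(out[-28:, -28:], img_br)
    return out


# Each sample carries two labels, one per task:
# x = overlay_digits(digit_a, digit_b); y = (label_a, label_b)
```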
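For the Experiment Setup row: the MultiMNIST experiments use a LeNet-style shared encoder with task-specific heads, but the paper does not report hyperparameters. Below is a minimal PyTorch sketch under assumed settings; the layer widths and the SGD learning rate/momentum are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn


class MultiTaskLeNet(nn.Module):
    """Shared LeNet-style encoder with one linear head per task."""

    def __init__(self, n_tasks=2, n_classes=10):
        super().__init__()
        self.shared = nn.Sequential(  # shared bottom, sized for 36x36 inputs
            nn.Conv2d(1, 10, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(10, 20, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(20 * 6 * 6, 50), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(50, n_classes) for _ in range(n_tasks)]
        )

    def forward(self, x):
        z = self.shared(x)
        return [head(z) for head in self.heads]  # one logit set per task


model = MultiTaskLeNet()
# Assumed optimizer settings; the paper does not report them.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
```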