A Multi-objective / Multi-task Learning Framework Induced by Pareto Stationarity
Authors: Michinari Momma, Chaosheng Dong, Jia Liu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate not only that the method achieves competitive performance with existing methods, but also that it can achieve performance reflecting different forms of preferences. |
| Researcher Affiliation | Collaboration | ¹Amazon.com Inc.; ²The Ohio State University. |
| Pseudocode | Yes | Algorithm 1 XWC-MGDA (a generic MGDA-style direction sketch follows this table) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or links to a code repository. |
| Open Datasets | Yes | For image classification, we use three datasets: (1) MultiMNIST (Sabour et al., 2017), (2) Multi-Fashion (Xiao et al., 2017), and (3) Multi-Fashion+MNIST (Lin et al., 2019b). |
| Dataset Splits | No | In each dataset, there are 120,000 samples in the training set and 20,000 samples in the test set; no information about a validation split is provided (a hypothetical carve-out is sketched after this table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions models like 'LeNet' and 'fully connected feed-forward neural network (FNN)' but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper mentions architectural choices (LeNet, 4-layer FNN), loss functions (MSE, SBCE), and the use of a random seed, but does not provide specific hyperparameter values such as learning rate, batch size, or optimizer settings for the experiments. |
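
The Pseudocode row points to the paper's Algorithm 1 (XWC-MGDA), which the report reproduces only by name. For rough orientation, the sketch below implements the generic two-task min-norm (MGDA-style) direction subproblem that such methods build on; the function name, the closed-form two-gradient solver, and the treatment of preference weights as a simple gradient rescaling are all assumptions for illustration, not the authors' exact XWC-MGDA update.

```python
import numpy as np

def min_norm_direction_2task(g1, g2, w=(1.0, 1.0)):
    """Min-norm common-descent direction for two task gradients.

    A generic MGDA-style subproblem, NOT the paper's XWC-MGDA: the
    preference weights `w` are applied as a plain rescaling of the
    gradients, which is an assumption made purely for illustration.
    """
    g1 = w[0] * np.asarray(g1, dtype=float)
    g2 = w[1] * np.asarray(g2, dtype=float)
    # Minimize ||a*g1 + (1-a)*g2||^2 over a in [0, 1]; the closed form is
    # a = -(g2 . (g1 - g2)) / ||g1 - g2||^2, clipped to [0, 1].
    diff = g1 - g2
    denom = float(diff @ diff)
    a = 0.5 if denom == 0.0 else float(np.clip(-(g2 @ diff) / denom, 0.0, 1.0))
    d = a * g1 + (1.0 - a) * g2
    return d, a

# Tiny usage example with random stand-in gradients.
rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=8), rng.normal(size=8)
d, a = min_norm_direction_2task(g1, g2, w=(0.7, 0.3))
# A gradient step on the shared parameters would then be: params -= lr * d
```

The returned `d` lies in the convex hull of the (rescaled) task gradients, so a step along `-d` does not increase either weighted objective to first order, which is the property MGDA-style methods exploit.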
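
On dataset splits, the report notes only a 120,000/20,000 train/test division per dataset, with no validation set described. The snippet below shows one hypothetical way a reproduction could carve a validation subset out of the training samples; the 10% fraction and the seed are assumed, not taken from the paper.

```python
import numpy as np

# Hypothetical carve-out: the paper specifies only 120,000 train / 20,000
# test samples per dataset; the 10% validation fraction and the seed are
# assumptions made for illustration.
n_train_total = 120_000
rng = np.random.default_rng(42)
perm = rng.permutation(n_train_total)

n_val = n_train_total // 10            # 12,000 validation samples (assumed)
val_idx, train_idx = perm[:n_val], perm[n_val:]
```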