Smooth Tchebycheff Scalarization for Multi-Objective Optimization

Authors: Xi Lin, Xiaoyuan Zhang, Zhiyuan Yang, Fei Liu, Zhenkun Wang, Qingfu Zhang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct various experiments on diverse multi-objective optimization problems. The results confirm the effectiveness of our proposed STCH scalarization.
Researcher Affiliation | Academia | (1) City University of Hong Kong (email: xi.lin@my.cityu.edu.hk); (2) Southern University of Science and Technology. Correspondence to: Qingfu Zhang <qingfu.zhang@cityu.edu.hk>.
Pseudocode | Yes | Algorithm 1: STCH for Multi-Objective Optimization (a minimal sketch of the STCH scalarization follows the table).
Open Source Code | Yes | Our source code is available at: github.com/Xi-L/STCH.
Open Datasets | Yes | NYUv2 (Silberman et al., 2012) is an indoor scene understanding dataset with 3 tasks on semantic segmentation, depth estimation, and surface normal prediction. [...] Office-31 (Saenko et al., 2010) is an image classification dataset across 3 domains (Amazon, DSLR, and Webcam). [...] QM9 (Ramakrishnan et al., 2014) is a molecular property prediction dataset with 11 tasks.
Dataset Splits | Yes | The data split from (Lin et al., 2022a) is utilized to split the data as 60%-20%-20% for training, validation, and testing. (Office-31) [...] The data split in Navon et al. (2022) is used to divide the dataset into 110,000 for training, 10,000 for validation, and 10,000 for testing. (QM9)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper mentions software like the 'LibMTL library (Lin & Zhang, 2023)' and 'Adam (Kingma & Ba, 2015)' but does not specify exact version numbers for these or other software components used in the experiments.
Experiment Setup | Yes | The model is trained for 200 epochs with Adam (Kingma & Ba, 2015), whose learning rate is initially set to 10^-4 with 10^-5 weight decay and is halved to 5×10^-5 after 100 epochs. The batch size is set to 2. (NYUv2) [...] The learning rate is 10^-4 with 10^-5 weight decay. The batch size is 64 and the number of training epochs is 100. (Office-31) [...] The learning rate is 10^-3 with the ReduceLROnPlateau scheduler. The batch size is 128 and the number of training epochs is 300. (QM9) (A hedged configuration sketch based on these settings also follows the table.)
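
The Pseudocode row refers to Algorithm 1 (STCH for Multi-Objective Optimization). As a rough illustration, the sketch below shows the smooth Tchebycheff scalarization as a PyTorch-style loss: the max over weighted objective gaps in the classic Tchebycheff scalarization is replaced by a log-sum-exp smoothed by a parameter mu. This is a minimal sketch of the idea, not the authors' released code; the names stch_loss, weights, ideal_point, and mu are our own placeholders.

```python
import torch


def stch_loss(losses, weights, ideal_point, mu=0.1):
    """Minimal sketch of smooth Tchebycheff (STCH) scalarization.

    Classic Tchebycheff scalarization minimizes
        max_i  w_i * (f_i(x) - z_i*),
    which is non-smooth. STCH replaces the max with a mu-smoothed
    log-sum-exp:
        mu * log( sum_i exp( w_i * (f_i(x) - z_i*) / mu ) ).

    Args:
        losses:      tensor of shape (m,), objective values f_i(x).
        weights:     tensor of shape (m,), positive preference weights w_i.
        ideal_point: tensor of shape (m,), (approximate) ideal values z_i*.
        mu:          smoothing parameter; smaller mu is closer to the max.
    """
    weighted_gap = weights * (losses - ideal_point)
    # logsumexp is a numerically stable smooth approximation of the maximum.
    return mu * torch.logsumexp(weighted_gap / mu, dim=0)


# Hypothetical usage inside a multi-task training step:
losses = torch.tensor([0.8, 0.3, 0.5], requires_grad=True)  # per-task losses
weights = torch.full((3,), 1.0 / 3)                          # uniform preference
ideal = torch.zeros(3)                                       # assumed ideal point
total = stch_loss(losses, weights, ideal, mu=0.1)            # scalar to backprop
```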
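The Experiment Setup row lists per-benchmark optimizer and scheduler settings. Below is a hedged sketch of how those reported NYUv2, Office-31, and QM9 settings could be wired up in PyTorch; the stand-in model, the use of StepLR to implement "halved after 100 epochs", and the loop skeleton are our assumptions, not the paper's actual training code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR, ReduceLROnPlateau

# Stand-in network; the paper's NYUv2/Office-31/QM9 models are task-specific.
model = torch.nn.Linear(16, 3)

# NYUv2 (as reported): Adam, lr 1e-4, weight decay 1e-5,
# halved to 5e-5 after 100 of 200 epochs, batch size 2.
nyu_optimizer = Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
nyu_scheduler = StepLR(nyu_optimizer, step_size=100, gamma=0.5)

# Office-31 (as reported): Adam, lr 1e-4, weight decay 1e-5,
# batch size 64, 100 epochs (no scheduler mentioned).
office_optimizer = Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

# QM9 (as reported): lr 1e-3 with a ReduceLROnPlateau scheduler,
# batch size 128, 300 epochs.
qm9_optimizer = Adam(model.parameters(), lr=1e-3)
qm9_scheduler = ReduceLROnPlateau(qm9_optimizer, mode="min")

for epoch in range(200):
    # one epoch of NYUv2 training with the STCH loss would go here
    nyu_scheduler.step()  # halves the learning rate after epoch 100
```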