FERERO: A Flexible Framework for Preference-Guided Multi-Objective Learning
Authors: Lisha Chen, AFM Saif, Yanning Shen, Tianyi Chen
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on multiple benchmarks demonstrate the proposed method is very competitive in finding preference-guided optimal solutions. |
| Researcher Affiliation | Academia | Lisha Chen1, AFM Saif1, Yanning Shen2, Tianyi Chen1. 1Rensselaer Polytechnic Institute, 2University of California, Irvine |
| Pseudocode | Yes | Algorithm 1 A meta FERERO algorithm |
| Open Source Code | Yes | Code is available at https://github.com/lisha-chen/FERERO/. |
| Open Datasets | Yes | We use the Librispeech (100 hours) [47], and AISHELL v1 [5] datasets for multi-lingual speech recognition. |
| Dataset Splits | No | The paper states, 'In each of the three datasets, there are 120k samples for training and 20k samples for testing,' but does not explicitly mention a separate validation split or its size/percentage. |
| Hardware Specification | Yes | All experiments were conducted on a server with an Intel i9-7920X CPU, two NVIDIA A5000 GPUs and two NVIDIA A4500 GPUs. |
| Software Dependencies | Yes | We use the Pymoo 0.6.1 library to compute the hypervolume. |
| Experiment Setup | Yes | For our method, we solve the subprogram using PGD with a step size 0.1 up to an error of 10^-5 or with a maximum of 250 iterations. In the experiments, we set the parameter ch = 1 for the subprogram if not otherwise specified. For all methods, we use the SGD optimizer with batch size 256. Note that, for our stochastic method, we use batch size 128 for each batch in the double sampling. |
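The paper reports computing the hypervolume with the Pymoo 0.6.1 library (e.g. via `pymoo.indicators.hv.HV` with a reference point). For intuition about what that metric measures, the sketch below is an illustrative pure-Python 2-objective hypervolume (area dominated by a set of minimization-objective points, bounded above by a reference point); it is not Pymoo's implementation, and the point set and reference point are made up for the example.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. reference point `ref`.

    Computes the area of the union of rectangles [x, ref[0]] x [y, ref[1]]
    over all points (x, y) that strictly dominate the reference point.
    """
    # Keep only points inside the region bounded by the reference point,
    # sorted by the first objective; dominated points are skipped in the loop.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    prev_y = ref[1]
    for x, y in pts:
        if y < prev_y:  # (x, y) is non-dominated among points seen so far
            hv += (ref[0] - x) * (prev_y - y)  # add the new horizontal slab
            prev_y = y
    return hv

# Example front {(1,3), (2,2), (3,1)} with reference point (4,4):
# slabs of area 3 + 2 + 1 = 6.
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))
```

Sweeping points in increasing order of the first objective and accumulating slabs as the second objective decreases is the standard O(n log n) approach in two dimensions; higher-dimensional hypervolume (as Pymoo supports) requires substantially more involved algorithms.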
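The Experiment Setup row reports solving the subprogram with projected gradient descent (PGD) using step size 0.1, tolerance 10^-5, and at most 250 iterations. The sketch below is a minimal generic PGD loop wired to those reported settings; the simplex constraint set and the toy quadratic objective are assumptions for illustration, not details taken from the paper (FERERO's actual subprogram and constraint set are defined in the paper itself).

```python
def project_simplex(v):
    # Euclidean projection onto the probability simplex
    # {x : x_i >= 0, sum_i x_i = 1}, via the sort-based algorithm.
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def pgd_subprogram(grad, x0, step=0.1, tol=1e-5, max_iter=250):
    # Projected gradient descent with the paper's reported settings:
    # step size 0.1, stop when the update norm drops below 1e-5,
    # cap at 250 iterations.
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        x_new = project_simplex([xi - step * gi for xi, gi in zip(x, g)])
        if sum((a - b) ** 2 for a, b in zip(x_new, x)) ** 0.5 < tol:
            return x_new
        x = x_new
    return x

# Toy objective: f(x) = 0.5 * ||x - y||^2 with y on the simplex,
# so the constrained minimizer is y itself.
y = [0.7, 0.2, 0.1]
sol = pgd_subprogram(lambda x: [xi - yi for xi, yi in zip(x, y)], [1 / 3] * 3)
```

With a convex quadratic and a 0.1 step size, the iterates contract geometrically toward the constrained minimizer, so the 10^-5 update tolerance is reached well within the 250-iteration cap.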