Optimal Scalarizations for Sublinear Hypervolume Regret
Authors: Qiuyi (Richard) Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We support our theory with strong empirical performance of using non-linear scalarizations that outperform both their linear counterparts and other standard multiobjective algorithms in a variety of natural settings. ... In this section, we empirically justify our theoretical results by comparing hypervolume convergence curves for multiobjective optimization in synthetic, linear bandit and blackbox optimization environments with multiple scalarizations and weight distributions. |
| Researcher Affiliation | Industry | Qiuyi (Richard) Zhang, Google DeepMind, qiuyiz@google.com |
| Pseudocode | Yes | Algorithm 1: ExploreUCB(T, D, s_λ). Input: maximum number of actions T, weight distribution D, scalarization s_λ. Repeat: (1) play action e_n ∈ E for n ≡ i (mod d); (2) let C_{t,i} be the confidence ellipsoid for θ_i and let UCB_i(a) = max_{θ ∈ C_{t,i}} θᵀa; (3) draw λ ~ D and play a = argmax_{a ∈ A} s_λ(UCB_i(a)); until the number of rounds i exceeds T/2. (A NumPy sketch of this loop appears after the table.) |
| Open Source Code | Yes | All the code is released and all data is generated synthetically or via open-sourced benchmarks. |
| Open Datasets | Yes | For our experiments, we fix our weight distribution and compare the three widely used types of scalarizations that were previously mentioned: the Linear, Chebyshev, and the Hypervolume scalarization. We focus on the k = 3 setting and apply optimization for a diverse set of Pareto frontiers in the region x, y ∈ [0, 1], z > 0. We discretize our region into 30 points per dimension to form a discrete Pareto frontier and set our reference point to be at z = [−ϵ, −ϵ, −ϵ] for ϵ = 1e-4 and run for 10 repeats. |
| Dataset Splits | No | The paper uses synthetic datasets, linear bandit simulations, and BBOB benchmarks. For synthetic optimization, it discretizes a region into 30 points per dimension. For linear bandits, it runs for T=100, 200 rounds. For BBOB functions, it optimizes for 160 iterations with 5 repeats. However, it does not explicitly mention standard train/validation/test splits with percentages or sample counts for these setups beyond defining the overall experimental configurations. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions general software components like 'Python', 'numpy', 'matplotlib', 'pandas', 'seaborn', 'vizier', but does not specify their exact version numbers. It does not list specific versions for critical libraries or solvers used in the experiments. |
| Experiment Setup | Yes | For our experiments, we fix our weight distribution and compare the three widely used types of scalarizations that were previously mentioned: the Linear, Chebyshev, and the Hypervolume scalarization. We focus on the k = 3 setting and apply optimization for a diverse set of Pareto frontiers in the region x, y ∈ [0, 1], z > 0. We discretize our region into 30 points per dimension to form a discrete Pareto frontier and set our reference point to be at z = [−ϵ, −ϵ, −ϵ] for ϵ = 1e-4 and run for 10 repeats. ... We run our algorithm with inherent dimension d = 4 for T = 100, 200 rounds with k = 2, 6, 10. ... For scalarized approaches, we use hypervolume scalarizations with the scalarized UCB algorithm with a constant standard deviation multiplier of 1.8, and all algorithms use a Gaussian Process as the underlying model with a standard Matérn kernel that is tuned via ARD [Williams and Rasmussen, 2006]. (Illustrative sketches of the scalarizations and of a GP-based scalarized UCB step appear after the table.) |
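
The scalarization comparison quoted above (Linear, Chebyshev, Hypervolume) can be made concrete with a short sketch. The snippet below assumes the standard forms s_λ(y) = λᵀy, s_λ(y) = min_i λ_i(y_i − z_i), and s_λ(y) = min_i max(0, (y_i − z_i)/λ_i)^k with weights drawn uniformly from the positive unit sphere; the function names and the scalar reference-point handling are illustrative choices, not the paper's released code.

```python
import numpy as np

def sample_weights(k, rng):
    """Draw a weight vector uniformly from the positive unit sphere S_+^{k-1}."""
    lam = np.abs(rng.standard_normal(k))
    return lam / np.linalg.norm(lam)

def linear_scalarization(y, lam, ref=0.0):
    """s_lambda(y) = <lambda, y - ref>."""
    return np.dot(lam, y - ref)

def chebyshev_scalarization(y, lam, ref=0.0):
    """s_lambda(y) = min_i lambda_i * (y_i - ref_i)."""
    return np.min(lam * (y - ref))

def hypervolume_scalarization(y, lam, ref=0.0):
    """s_lambda(y) = min_i max(0, (y_i - ref_i) / lambda_i)^k.

    Averaging max_y s_lambda(y) over random lambda recovers the dominated
    hypervolume up to a dimension-dependent constant.
    """
    k = len(y)
    return np.min(np.maximum(0.0, (y - ref) / lam)) ** k

# Illustrative usage on a toy objective vector (k = 3, as in the experiments).
rng = np.random.default_rng(0)
y = np.array([0.7, 0.4, 0.9])
lam = sample_weights(3, rng)
print(linear_scalarization(y, lam),
      chebyshev_scalarization(y, lam),
      hypervolume_scalarization(y, lam))
```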
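
The Algorithm 1 pseudocode interleaves pure-exploration pulls on basis actions with scalarized UCB pulls. The following is a minimal NumPy sketch of that loop under the usual linear-bandit assumptions (reward Θa plus noise, ellipsoid maximum in closed form as θ̂_iᵀa + β‖a‖_{V⁻¹}); the helpers `pull`, `scalarization`, and `weight_sampler` are hypothetical placeholders rather than the paper's implementation.

```python
import numpy as np

def explore_ucb(actions, pull, T, d, k, scalarization, weight_sampler, beta=1.8):
    """Sketch of a scalarized ExploreUCB loop for a k-objective linear bandit.

    actions:        (n, d) array of candidate actions A.
    pull(a):        returns a noisy k-dimensional reward (roughly Theta @ a + noise).
    scalarization:  s_lambda applied to the per-objective UCB vector.
    weight_sampler: callable returning a weight vector lambda ~ D.
    """
    V = np.eye(d)                        # regularized design matrix
    B = np.zeros((d, k))                 # running sum of a * y^T
    history = []
    i = 0
    while i < T // 2:                    # each round spends one explore + one UCB pull
        # Pure-exploration pull on the basis action e_{i mod d}.
        e = np.zeros(d)
        e[i % d] = 1.0
        V += np.outer(e, e)
        B += np.outer(e, pull(e))
        # Least-squares estimate of Theta; the ellipsoid maximum has the closed form
        # max_{theta in C_i} theta^T a = theta_hat_i^T a + beta * ||a||_{V^{-1}}.
        V_inv = np.linalg.inv(V)
        theta_hat = (V_inv @ B).T                        # shape (k, d)
        bonus = beta * np.sqrt(np.einsum('ni,ij,nj->n', actions, V_inv, actions))
        ucb = actions @ theta_hat.T + bonus[:, None]     # shape (n, k)
        # Scalarized UCB pull with a freshly drawn weight vector.
        lam = weight_sampler()
        a = actions[np.argmax([scalarization(u, lam) for u in ucb])]
        y = pull(a)
        V += np.outer(a, a)
        B += np.outer(a, y)
        history.append((a, y))
        i += 1
    return history
```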
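
For the blackbox-optimization setting described in the Experiment Setup row, one step of a scalarized GP-UCB loop consistent with that description might look as follows: an independent Gaussian Process per objective with an ARD Matérn kernel and a constant standard-deviation multiplier of 1.8. The function name and the independent-GP-per-objective modelling choice are assumptions for illustration; the paper's own runs rely on tooling such as Vizier, whose implementation may differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def scalarized_gp_ucb_step(X_obs, Y_obs, candidates, s_lambda, lam, beta=1.8):
    """One step of a scalarized GP-UCB loop (illustrative sketch).

    X_obs:      (t, d) evaluated points; Y_obs: (t, k) their objective values.
    candidates: (n, d) points from which the next evaluation is chosen.
    s_lambda:   scalarization applied to the per-objective UCB vector.
    beta:       constant standard-deviation multiplier (1.8 in the quoted setup).
    """
    d, k = X_obs.shape[1], Y_obs.shape[1]
    ucb = np.zeros((len(candidates), k))
    for j in range(k):
        # One GP per objective with an ARD Matern kernel; hyperparameters are
        # tuned by marginal-likelihood maximization inside fit().
        kernel = Matern(length_scale=np.ones(d), nu=2.5)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X_obs, Y_obs[:, j])
        mean, std = gp.predict(candidates, return_std=True)
        ucb[:, j] = mean + beta * std
    scores = [s_lambda(u, lam) for u in ucb]
    return candidates[int(np.argmax(scores))]
```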