Pareto Frontier Learning with Expensive Correlated Objectives
Authors: Amar Shah, Zoubin Ghahramani
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the power of modelling dependencies between objectives on a range of synthetic and real world multi-objective optimization problems. In this section, we provide empirical comparisons assessing the performance of the proposed CEIPV method. |
| Researcher Affiliation | Academia | Amar Shah AS793@CAM.AC.UK Zoubin Ghahramani ZOUBIN@ENG.CAM.AC.UK Machine Learning Group, Department of Engineering, University of Cambridge |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We choose three such functions for experimentation: oka2 (Okabe et al., 2004), vlmop3 (Veldhuizen and Lamont, 1999) and dtlz1a (Deb et al., 2001). The boston problem involves training a 2-hidden-layer neural network on a random train/test split of the Boston Housing dataset (Bache and Lichman, 2013). The SW-LLVM data set of Siegmund et al. (2012) is also used. |
| Dataset Splits | No | The paper mentions 'random train/test split' for the Boston Housing dataset but does not provide specific percentages, sample counts, or detailed methodology for train, validation, or test splits across any of the datasets used. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | Each experiment is initialized with function evaluations at 5 input locations sampled independently and uniformly at random over the input space. To perform a fully Bayesian treatment of the hyperparameters, we place priors over the hyperparameters and sample them from their joint posterior given observed data using slice sampling (Neal, 2003). In line with Snoek et al. (2012), we choose to use ARD Matérn 5/2 kernels over the input space, defined as $k_{M52}(x, x') = \theta_0^2 \big(1 + \sqrt{5 r^2} + \tfrac{5}{3} r^2\big) \exp\big(-\sqrt{5 r^2}\big)$ with $r^2 = \sum_{d=1}^{D} (x_d - x'_d)^2 / \theta_d^2$. For the CEIPV algorithms, the amplitude hyperparameter, $\theta_0$, is set to 1 to avoid over-parameterization. (See the kernel and slice-sampling sketches below the table.) |
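
For concreteness, here is a minimal NumPy sketch of the ARD Matérn 5/2 kernel quoted in the Experiment Setup row. The function name `matern52_ard` and its signature are illustrative, not the authors' code; the kernel form is the standard Matérn 5/2 definition used by Snoek et al. (2012), with the amplitude $\theta_0$ fixed to 1 as the paper does for CEIPV.

```python
import numpy as np

def matern52_ard(x, x_prime, lengthscales, theta0=1.0):
    """ARD Matern 5/2 kernel (standard form, per Snoek et al., 2012).

    theta0 is the amplitude hyperparameter (fixed to 1 for CEIPV in
    the paper); lengthscales[d] is the per-dimension scale theta_d.
    """
    r2 = np.sum((x - x_prime) ** 2 / lengthscales ** 2)
    s = np.sqrt(5.0 * r2)
    return theta0 ** 2 * (1.0 + s + (5.0 / 3.0) * r2) * np.exp(-s)

# Example: kernel value between two points in a 2-d input space.
k = matern52_ard(np.array([0.2, 0.7]), np.array([0.3, 0.5]),
                 lengthscales=np.array([0.5, 0.5]))
```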
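
The paper states that hyperparameters are sampled from their joint posterior by slice sampling (Neal, 2003) but, as noted above, provides no pseudocode or source. The sketch below is a minimal univariate stepping-out slice sampler, applied in practice coordinate-wise to the log-posterior of the GP hyperparameters; all names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def slice_sample(log_density, x0, width=1.0, max_steps=100, rng=None):
    """One univariate slice-sampling update (Neal, 2003), stepping out."""
    rng = rng or np.random.default_rng()
    log_y = log_density(x0) + np.log(rng.uniform())  # slice height
    # Step out to bracket the slice around the current point.
    left = x0 - width * rng.uniform()
    right = left + width
    for _ in range(max_steps):
        if log_density(left) < log_y:
            break
        left -= width
    for _ in range(max_steps):
        if log_density(right) < log_y:
            break
        right += width
    # Shrink the bracket until a point inside the slice is drawn.
    while True:
        x1 = rng.uniform(left, right)
        if log_density(x1) >= log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
```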