Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Pareto-frontier Entropy Search with Variational Lower Bound Maximization
Authors: Masanori Ishikura, Masayuki Karasuyama
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical evaluation demonstrates the effectiveness of the proposed method particularly when the number of objective functions is large. |
| Researcher Affiliation | Academia | Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan. Correspondence to: Masayuki Karasuyama <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Pseudo-code of PFEV |
| Open Source Code | No | The paper does not provide explicit statements about releasing their source code, a direct link to a code repository for their implementation, or mention of code in supplementary materials. It mentions using 'GPy (GPy, since 2012)' which is a third-party package. |
| Open Datasets | Yes | We used benchmark problems called Fonseca-Fleming (d, L) = (2, 2), Kursawe (d, L) = (3, 2), Viennet (d, L) = (2, 3) and FES3 (d, L) = (3, 4) problems (for details, see Appendix J). Further, we combine multiple problems having the same input dimensions, i.e., we created Fonseca+Viennet (d, L) = (2, 5) and FES3+Kursawe (d, L) = (3, 6). Experiments were conducted on four datasets: Abalone and Waveform (both with 3 classes), and Pageblocks and Gesturephase (both with 5 classes). |
| Dataset Splits | Yes | For each dataset, we split the original data into training and test sets with an 8:2 ratio. |
| Hardware Specification | No | The paper mentions CPU time evaluation (Table 1) but does not specify any particular CPU models, GPU models, or other specific hardware components used for running experiments. |
| Software Dependencies | No | Bayesian optimization was implemented by a Python package called GPy (GPy, since 2012). ... we consider optimizing class weights in multi-class classification problems using LightGBM (Ke et al., 2017) as the base model. ... We used BoTorch's qHypervolumeKnowledgeGradient implementation for the HVKG baseline. |
| Experiment Setup | Yes | Each evaluation was run 10 times. ... All methods used GPs for f_l(x) with a kernel function k(x, x') = exp(−‖x − x'‖₂² / (2ℓ²RBF)), where ℓRBF ∈ ℝ is a hyper-parameter. The marginal likelihood maximization was performed at every iteration to optimize ℓRBF. The number of samples of the optimal value or the Pareto-frontier in MESMO, PFES, {PF}²ES, and PFEV was 10, each of which was performed by NSGA-II (1,000 generations and population size 50). We used the DIRECT algorithm (Jones et al., 1993) for the acquisition function maximization. We selected 5 random x as the initial observations in D. ... The GP hyperparameter σ²noise is fixed as 10⁻⁴. In ParEGO, the coefficient parameter in the augmented Tchebycheff function was set to 0.05 as shown in (Knowles, 2006). In EHVI, two reference points are required, shown as vref and wref in (Shah & Ghahramani, 2016). The worst point vector vref is defined by subtracting 10⁻⁴ from the vector consisting of the minimum value of each dimension of yi in the training data. On the other hand, the ideal point vector wref is defined by adding 1 to the vector consisting of the maximum value of each dimension of yi. |
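The quoted setup specifies two concrete computations: the squared-exponential kernel used for the GPs, and the EHVI reference points derived from the training objectives. A minimal sketch of both, assuming NumPy and using illustrative function names (`rbf_kernel`, `ehvi_reference_points` are not from the paper):

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale):
    # Squared-exponential kernel from the setup:
    # k(x, x') = exp(-||x - x'||_2^2 / (2 * length_scale^2))
    d2 = np.sum((np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)) ** 2)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def ehvi_reference_points(Y):
    # Y: (n, L) array of observed objective vectors y_i.
    # Worst point vref: per-dimension minimum minus 1e-4;
    # ideal point wref: per-dimension maximum plus 1.
    Y = np.asarray(Y, dtype=float)
    vref = Y.min(axis=0) - 1e-4
    wref = Y.max(axis=0) + 1.0
    return vref, wref
```

This only mirrors the stated conventions for vref and wref; the paper's actual EHVI implementation and GP fitting are handled by GPy and BoTorch, not reproduced here.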