Adaptive Algorithms for Relaxed Pareto Set Identification

Authors: Cyrille Kone, Emilie Kaufmann, Laura Richert

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We showcase the good practical performance of Adaptive Pareto Exploration on a real-world scenario, in which we adaptively explore several vaccination strategies against Covid-19 in order to find the optimal ones when multiple immunogenicity criteria are taken into account. ... Then, Section 5 presents the result of a numerical study on synthetic datasets, one of them being inspired by a Covid-19 vaccine clinical trial. It showcases the good empirical performance of APE compared to existing algorithms, and illustrates the impact of the different relaxations.
Researcher Affiliation | Academia | 1 Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France; 2 Univ. Bordeaux, Inserm, Inria, BPH, U1219, SISTM, F-33000 Bordeaux, France
Pseudocode | Yes | Algorithm 1: ε1-APE-k: Adaptive Pareto Exploration for ε1-PSI-k (see the Pareto-set sketch after this table).
Open Source Code | No | The paper does not provide any explicit statement about making its source code available or include a link to a code repository.
Open Datasets | No | The paper references the COV-BOOST [24] study and states it uses its average (log) outcomes and variances to simulate a multivariate Gaussian bandit. While the study [24] is cited, the paper does not provide concrete access (e.g., a direct link, DOI, or repository) to the processed or raw dataset used for their simulations/experiments. (See the simulation sketch after this table.)
Dataset Splits | No | The paper describes simulating data (from the COV-BOOST study and random Bernoulli instances) and running experiments, but it does not specify any training, validation, or test dataset splits for these simulations.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, cloud instance types) used for running its experiments.
Software Dependencies | No | The paper describes its algorithms and experiments but does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries).
Experiment Setup | Yes | In this experiment, we set ε1 = 0, δ = 0.1 and compare PSI-Unif-Elim to 0-APE-k (called APE-k in the sequel) for different values of k. The empirical distribution of the sample complexity of the algorithms, averaged over 2000 independent runs, are reported in Figure 1. ... We ran the previous algorithms on 2000 randomly generated multi-variate Bernoulli instances, with K = 5 arms and different values of the dimension D. We set δ = 0.1 and ε1 = 0.005 (to have reasonable running time). (See the evaluation-loop sketch after this table.)
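
The Pseudocode row names Algorithm 1 (ε1-APE-k), whose steps are not reproduced in this assessment. As a point of reference only, the sketch below computes the exact (ε1 = 0) Pareto set that such algorithms aim to identify, using standard componentwise dominance over arm mean vectors; it is not the paper's algorithm, and the function names and example values are illustrative assumptions.

```python
import numpy as np

def dominates(u, v):
    """u weakly dominates v in every coordinate and strictly in at least one."""
    return np.all(u >= v) and np.any(u > v)

def pareto_set(means):
    """Indices of arms whose mean vectors are not dominated by any other arm.

    means: array of shape (K, D), one D-dimensional mean vector per arm.
    """
    K = means.shape[0]
    return [
        i for i in range(K)
        if not any(dominates(means[j], means[i]) for j in range(K) if j != i)
    ]

# Example: 5 arms, 3 objectives (illustrative values only).
means = np.array([
    [0.8, 0.6, 0.7],
    [0.7, 0.7, 0.6],
    [0.6, 0.5, 0.5],   # dominated by arm 0
    [0.9, 0.4, 0.8],
    [0.8, 0.6, 0.7],   # equal to arm 0: neither strictly dominates the other
])
print(pareto_set(means))  # -> [0, 1, 3, 4]
```

Loosely, the ε1 relaxation tolerates arms that are only near-optimal; that relaxation is not modeled in this sketch.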
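
The Open Datasets row notes that the experiments simulate a multivariate Gaussian bandit from the COV-BOOST study's average (log) outcomes and variances. Those numbers are not reproduced above, so the sketch below uses placeholder means and standard deviations to show what such a simulated environment could look like; all values and names are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder summary statistics: K vaccination strategies (arms), D immunogenicity
# criteria per arm. The actual COV-BOOST-derived log-means/variances are not listed
# in this assessment, so these numbers are purely illustrative.
K, D = 3, 2
log_means = np.array([[5.0, 4.2],
                      [5.4, 3.9],
                      [4.8, 4.5]])
log_sds = np.array([[1.0, 0.8],
                    [1.1, 0.9],
                    [0.9, 1.0]])

def pull(arm):
    """Draw one noisy D-dimensional (log-scale) observation from the chosen arm."""
    return rng.normal(loc=log_means[arm], scale=log_sds[arm])

print(pull(0))   # one simulated immunogenicity measurement vector
```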
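
The Experiment Setup row quotes δ = 0.1, ε1 = 0.005, K = 5 arms, randomly generated multivariate Bernoulli instances of varying dimension D, and averaging over 2000 independent runs. The skeleton below mirrors only that outer evaluation loop; `run_psi_algorithm` is a hypothetical stand-in (a naive fixed-budget round-robin sampler), not the paper's APE-k or PSI-Unif-Elim.

```python
import numpy as np

rng = np.random.default_rng(42)
K, DELTA, EPS1 = 5, 0.1, 0.005   # K = 5 arms, delta = 0.1, eps1 = 0.005 as quoted above

def random_bernoulli_instance(K, D):
    """Random multivariate Bernoulli instance: a (K, D) matrix of success probabilities."""
    return rng.uniform(0.0, 1.0, size=(K, D))

def pull(p, arm):
    """One D-dimensional Bernoulli observation from the chosen arm."""
    return rng.binomial(1, p[arm]).astype(float)

def run_psi_algorithm(p, delta, eps1, budget=5_000):
    """Hypothetical stand-in for an (eps1, delta)-PSI algorithm: a fixed-budget
    round-robin sampler that ignores delta/eps1, used only to make the loop executable."""
    K, D = p.shape
    sums, counts = np.zeros((K, D)), np.zeros(K)
    for t in range(budget):
        a = t % K                       # uniform round-robin sampling
        sums[a] += pull(p, a)
        counts[a] += 1
    return sums / counts[:, None], budget   # empirical means, sample complexity

for D in (2, 4, 8):                     # dimension is varied in the paper's study
    taus = [run_psi_algorithm(random_bernoulli_instance(K, D), DELTA, EPS1)[1]
            for _ in range(20)]         # set the run count to 2000 to match the paper's averaging
    print(f"D={D}: mean sample complexity {np.mean(taus):.0f}")
```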