Active Learning for Distributionally Robust Level-Set Estimation

Authors: Yu Inatsu, Shogo Iwazaki, Ichiro Takeuchi

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that the proposed method has theoretical guarantees on convergence and accuracy, and we confirm through numerical experiments that it outperforms existing methods.
Researcher Affiliation | Academia | ¹Department of Computer Science, Nagoya Institute of Technology, Aichi, Japan; ²RIKEN Center for Advanced Intelligence Project, Tokyo, Japan.
Pseudocode | Yes | Algorithm 1: Active learning for distributionally robust level-set estimation.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper defines mathematical functions (Booth, Matyas, McCormick, Styblinski–Tang) for its synthetic-data experiments and uses the SIR model for infection simulations, rather than publicly available datasets. No concrete access information for any dataset is provided.
Dataset Splits | No | The paper does not specify traditional training, validation, or test splits. It describes an active learning setup in which data points are sequentially selected and added to a growing training set for a Gaussian process model. Evaluation is based on the F-score of the identified level set, not on a predefined split.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU or GPU models, memory).
Software Dependencies | No | The paper mentions using a Gaussian process (GP) model with a Gaussian kernel but does not specify any particular software libraries or version numbers (e.g., PyTorch, TensorFlow, scikit-learn, GPy) used for the implementation.
Experiment Setup | Yes | For the infection simulations, the paper specifies the parameters h = 135, α = 0.9, σ² = 0.025, σ_f² = 250², L = 0.5, β_t^{1/2} = 4, ε = 0.05. It also notes that 'parameters used for each experiment are listed in Table 2 in the Appendix' (the table is not included in the given text, but the explicit reference counts). For the general setup, it states: 'Here, for simplicity, we set the accuracy parameter η to zero. Similarly, because of the computational cost of calculating acquisition functions, we replaced P(y ∈ R_s)·𝟙[l_t^{(F)}(x; 0 | x′, w′, c_s) > α] in (3.3) with zero when P(y ∈ R_s) < 0.005. In other words, we used Lemma 3.3 with ζ/(|Ω| + 1) = 0.005 to approximate (3.3).'
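The synthetic benchmarks named above (Booth, Matyas, McCormick, Styblinski–Tang) are standard optimization test functions. The definitions below are the usual textbook forms; the paper's exact domains and any rescaling are in its Appendix, which is not part of this excerpt:

```python
import numpy as np

def booth(x, y):
    """Booth function: global minimum 0 at (1, 3)."""
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

def matyas(x, y):
    """Matyas function: global minimum 0 at (0, 0)."""
    return 0.26 * (x ** 2 + y ** 2) - 0.48 * x * y

def mccormick(x, y):
    """McCormick function: global minimum ~ -1.9133 at (-0.54719, -1.54719)."""
    return np.sin(x + y) + (x - y) ** 2 - 1.5 * x + 2.5 * y + 1

def styblinski_tang(x):
    """Styblinski-Tang function on a d-vector: minimum ~ -39.166*d
    at x_i ~ -2.903534."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x ** 4 - 16 * x ** 2 + 5 * x)
```

In a level-set study such functions serve as the black box f, and the task is to classify each candidate point as above or below a threshold rather than to find the minimizer.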
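The infection simulations are based on the SIR model. The paper's exact parameterization is not given in this excerpt; the following is a minimal forward-Euler sketch in which the returned peak infection count could play the role of the black-box output compared against a threshold such as h = 135 (the specific rates and populations below are illustrative assumptions):

```python
def simulate_sir(beta, gamma, s0, i0, r0, dt=0.1, steps=1000):
    """Forward-Euler integration of the SIR ODEs:
        dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I
    Returns the peak number of simultaneously infected individuals,
    one natural target for level-set estimation ('is the peak below h?').
    """
    s, i, r = float(s0), float(i0), float(r0)
    n = s + i + r
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i / n   # new infections per unit time
        rec = gamma * i              # recoveries per unit time
        s += dt * (-new_inf)
        i += dt * (new_inf - rec)
        r += dt * rec
        peak = max(peak, i)
    return peak
```

With beta/gamma < 1 the infection dies out and the peak stays at i0; with beta/gamma > 1 an outbreak occurs and the peak grows well above it.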