Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Active Learning for Level Set Estimation Using Randomized Straddle Algorithms
Authors: Yu Inatsu, Shion Takeno, Kentaro Kutsukake, Ichiro Takeuchi
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate the applicability of the proposed method through numerical experiments using synthetic and real data. ... 5 Numerical Experiments We confirm the practical performance of the proposed method using synthetic functions and real-world data. |
| Researcher Affiliation | Academia | Yu Inatsu, Department of Computer Science, Nagoya Institute of Technology; Shion Takeno, Department of Mechanical Systems Engineering, Graduate School of Engineering, Nagoya University, and RIKEN Center for Advanced Intelligence Project; Kentaro Kutsukake, Institute of Materials and Systems for Sustainability, Nagoya University, and Department of Materials Process Engineering, Graduate School of Engineering, Nagoya University; Ichiro Takeuchi, Department of Mechanical Systems Engineering, Graduate School of Engineering, Nagoya University, and RIKEN Center for Advanced Intelligence Project |
| Pseudocode | Yes | Finally, we give the pseudocode of the proposed algorithm in Algorithm 1. Algorithm 1 Active Learning for Level Set Estimation Using Randomized Straddle Algorithms. |
| Open Source Code | No | The paper does not contain any explicit statement about the release of open-source code, nor does it provide a link to a code repository. It only refers to OpenReview for the review process, which does not guarantee code availability for the methodology described. |
| Open Datasets | Yes | Finally, we validate the applicability of the proposed method through numerical experiments using synthetic and real data. ... The settings for the sinusoidal and Himmelblau functions are the same as those used in Zanette et al. (2019). ... In this section, we conducted experiments using the carrier lifetime value, a measure of the quality performance of silicon ingots used in solar cells (Kutsukake et al., 2015). |
| Dataset Splits | No | The paper describes how the input space was defined for synthetic data and how points were selected for evaluation in an active learning setting. For instance: 'The input space X was defined as a set of grid points that uniformly cut the region [l1, u1] × [l2, u2] into 50 × 50.' However, it does not provide specific details on traditional training, test, or validation splits for model evaluation in the conventional supervised learning sense, as the active learning process involves iterative data acquisition and model updating. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiments. It mentions general concepts like 'GPs (Rasmussen & Williams, 2005)' but no concrete software implementations or versions. |
| Experiment Setup | Yes | In all experiments, we used the following Gaussian kernel: k(x, x') = σ_f² exp(−‖x − x'‖² / (2L²)). The threshold θ and the parameters used for each setting are summarized in Table 1 (and Table 2 for infinite X; a Matérn 3/2 kernel is mentioned for real data). We used β_t^{1/2} = 3 as the confidence parameter required for MILE and Straddle, and β_t^{1/2} = √(2 log(2500π²t²/(6 × 0.05))) for LSE. Under this setup, one initial point was taken at random and the algorithm was run until the number of iterations reached 300. This simulation was repeated 100 times. |
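The kernel and confidence parameters quoted above can be sketched in code. The following is a minimal illustration, not the authors' implementation: `gaussian_kernel` implements the stated kernel k(x, x') = σ_f² exp(−‖x − x'‖²/(2L²)), `lse_beta_half` the stated β_t^{1/2} for LSE with |X| = 2500 grid points and δ = 0.05, and `straddle_score` a generic straddle-type acquisition (boundary proximity plus uncertainty) — the function names and the exact acquisition form are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma_f=1.0, length=1.0):
    """Gaussian (RBF) kernel: k(x, x') = sigma_f^2 * exp(-||x - x'||^2 / (2 L^2))."""
    d2 = np.sum((np.asarray(x1, float) - np.asarray(x2, float)) ** 2)
    return sigma_f**2 * np.exp(-d2 / (2.0 * length**2))

def lse_beta_half(t, n_points=2500, delta=0.05):
    """Confidence parameter beta_t^{1/2} = sqrt(2 log(|X| pi^2 t^2 / (6 delta)))
    as quoted for LSE with |X| = 2500 and delta = 0.05."""
    return np.sqrt(2.0 * np.log(n_points * np.pi**2 * t**2 / (6.0 * delta)))

def straddle_score(mu, sigma, theta, beta_half=3.0):
    """Generic straddle-type acquisition (hypothetical sketch): large where the
    posterior mean mu is near the threshold theta and uncertainty sigma is high."""
    return beta_half * np.asarray(sigma) - np.abs(np.asarray(mu) - theta)
```

For example, with β_t^{1/2} = 3 (the MILE/Straddle setting above), a candidate whose posterior mean sits exactly at the threshold θ scores 3σ, so high-uncertainty boundary points are queried first.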