Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition

Authors: Michael Valancius, Maxwell Lennon, Junier Oliva

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show the superiority of the ACO over state-of-the-art AFA methods when acquiring features for both predictions and general decision-making.
Researcher Affiliation | Academia | Department of Biostatistics, University of North Carolina, Chapel Hill, North Carolina, USA; Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA.
Pseudocode | Yes | Algorithm 1: Acquisition Conditioned Oracle.
Open Source Code | Yes | Code for the AACO policy can be found at https://github.com/lupalab/aaco.
Open Datasets | Yes | We begin with a study of the CUBE-σ = 0.3 dataset (as described by Shim et al. (2018)), a synthetic classification dataset designed for feature acquisition tasks.
Dataset Splits | Yes | After rolling the AACO policy out on the validation dataset, we trained gradient boosted classification trees to mimic the actions in this data.
Hardware Specification | Yes | AACO models were run on individual Titan Xp GPUs.
Software Dependencies | No | The paper mentions using 'gradient boosted trees' but does not specify the software library (e.g., XGBoost, LightGBM) or its version number. No other specific software dependencies with version numbers are provided.
Experiment Setup | Yes | For the AACO approximations (Section 3.4), we approximated the distribution p(y, x_u | x_o) in AACO using a k = 5 nearest-neighbor density estimate. Furthermore, we enumerated all potential subsets in moderate-dimensional problems but took random subsamples (10,000) in higher dimensions. To standardize the relative importance of each feature, all features were mean-centered and scaled to have a variance of 1.
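The Experiment Setup row lends itself to a concrete illustration. Below is a minimal sketch of how the described approximation could look in code: a k = 5 nearest-neighbor estimate of p(y, x_u | x_o) over standardized features, with candidate acquisition subsets enumerated exactly in moderate dimensions and randomly subsampled (10,000 draws) otherwise. This is not the authors' implementation (see https://github.com/lupalab/aaco for that); the helper names, the max_enum cutoff, and the predict_proba interface are assumptions made for illustration.

    import numpy as np
    from itertools import combinations
    from sklearn.neighbors import NearestNeighbors

    def standardize(X):
        # Mean-center and scale each feature to unit variance, per the paper.
        return (X - X.mean(axis=0)) / X.std(axis=0)

    def candidate_subsets(unobserved, rng, max_enum=12, n_subsamples=10_000):
        # Enumerate all non-empty subsets in moderate dimensions; otherwise
        # fall back to 10,000 random subset draws, as the paper describes.
        # The max_enum cutoff of 12 is an assumed, illustrative threshold.
        if len(unobserved) <= max_enum:
            return [list(s) for r in range(1, len(unobserved) + 1)
                    for s in combinations(unobserved, r)]
        masks = rng.random((n_subsamples, len(unobserved))) < 0.5
        return [list(np.asarray(unobserved)[m]) for m in masks if m.any()]

    def knn_subset_value(X_train, y_train, x, observed, subset,
                         predict_proba, k=5):
        # Hypothetical helper: approximate p(y, x_u | x_o) with the k nearest
        # training points in the observed coordinates, then score a candidate
        # subset by the average probability the downstream classifier assigns
        # to the neighbor's label once that neighbor's x_u values are imputed.
        nn = NearestNeighbors(n_neighbors=k).fit(X_train[:, observed])
        _, idx = nn.kneighbors(x[observed][None, :])
        value = 0.0
        for j in idx[0]:
            x_hyp = x.copy()
            x_hyp[subset] = X_train[j, subset]               # imputed x_u
            probs = predict_proba(x_hyp, observed + subset)  # p(y | x_o, x_u)
            value += probs[y_train[j]] / k                   # neighbor's y
        return value

A nongreedy acquisition step would then select the subset maximizing this value net of acquisition cost; the cost term and the stopping rule are omitted from this sketch.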
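The Dataset Splits row also hints at a distillation step: the oracle policy is rolled out on the validation split, and gradient boosted classification trees are trained to mimic its acquisition actions. A hedged sketch of that step follows; the state encoding (observed values plus a binary observation mask) is an assumption, and scikit-learn's GradientBoostingClassifier stands in for whatever library the authors used, which, as the Software Dependencies row notes, the paper does not name.

    from sklearn.ensemble import GradientBoostingClassifier

    def distill_policy(rollout_states, rollout_actions):
        # rollout_states: one row per oracle decision, encoding the observed
        # feature values (zero-filled where unobserved) concatenated with a
        # 0/1 observation mask. This encoding is assumed, not from the paper.
        # rollout_actions: the feature index the oracle acquired next, or a
        # reserved label for "stop and predict".
        clf = GradientBoostingClassifier()
        clf.fit(rollout_states, rollout_actions)
        return clf  # clf.predict(state) then imitates the AACO acquisitions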