Learning Active Learning from Data
Authors: Ksenia Konyushkova, Raphael Sznitman, Pascal Fua
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that LAL works well on real data from several different domains such as biomedical imaging, economics, molecular biology and high energy physics. This query selection strategy outperforms competing methods without requiring hand-crafted heuristics and at a comparatively low computational cost. |
| Researcher Affiliation | Academia | Ksenia Konyushkova (CVLab, EPFL, Lausanne, Switzerland; ksenia.konyushkova@epfl.ch); Raphael Sznitman (ARTORG Center, University of Bern, Bern, Switzerland; raphael.sznitman@artorg.unibe.ch); Pascal Fua (CVLab, EPFL, Lausanne, Switzerland; pascal.fua@epfl.ch) |
| Pseudocode | Yes | Algorithm 1 DATAMONTECARLO; Algorithm 2 BUILDLALINDEPENDENT; Algorithm 3 BUILDLALITERATIVE |
| Open Source Code | Yes | The code is made available at https://github.com/ksenia-konyushkova/LAL. |
| Open Datasets | Yes | BRATS competition [23], Credit card [4], Splice [19], Higgs [1] |
| Dataset Splits | No | The paper states 'In all AL experiments we select samples from a training set and report the classification performance on an independent test set.' and mentions 'random permutations of training and testing splits', but does not specify details for validation splits or explicit percentages/counts for all three splits (train/validation/test). |
| Hardware Specification | No | The paper mentions 'Run times of a Python-based implementation running on 1 core are given in Tab. 1' but does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for experiments. |
| Software Dependencies | No | The paper mentions using 'Random Forest (RF) classifiers' and 'a RF regressor' and 'Python-based implementation', but it does not provide specific version numbers for these software components or other libraries. |
| Experiment Setup | No | The paper describes features used for the learning process state (e.g., 'predicted probability p(y = 0|Lt, x)', 'forest variance', 'average tree depth'), and states that 'In most of the experiments, we use Random Forest (RF) classifiers for f and a RF regressor for g.', but it does not provide specific hyperparameters for these RF models (e.g., number of trees, max depth) or training schedules in the main text. It mentions 'For additional implementational details... we refer the reader to the supplementary material.' |
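To make the described setup concrete, the following is a minimal, hypothetical sketch of the LAL scoring pipeline the table refers to: an RF classifier `f` trained on the labeled set, per-sample learning-state features (predicted probability p(y=0|Lt, x), variance across the forest's trees, average tree depth), and an RF regressor `g` that scores candidate queries. All hyperparameters (tree counts, random seeds) and the synthetic data are assumptions, since the paper defers those details to its supplementary material; here `g` is fit on random targets only so the sketch runs end to end, whereas in LAL it would be trained offline via the Monte Carlo procedures (Algorithms 1-3).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.RandomState(0)

# Synthetic stand-ins for the labeled pool L_t and the unlabeled pool U_t.
X_labeled = rng.randn(50, 5)
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.randn(200, 5)

# f: RF classifier trained on the current labeled set (tree count is an assumption).
f = RandomForestClassifier(n_estimators=100, random_state=0)
f.fit(X_labeled, y_labeled)

# Learning-state features per unlabeled sample, following the paper's description:
# predicted probability p(y=0 | L_t, x), forest variance, and average tree depth.
per_tree = np.stack([t.predict_proba(X_unlabeled)[:, 0] for t in f.estimators_])
p_y0 = f.predict_proba(X_unlabeled)[:, 0]
forest_var = per_tree.var(axis=0)
avg_depth = np.full(len(X_unlabeled),
                    np.mean([t.get_depth() for t in f.estimators_]))
state = np.column_stack([p_y0, forest_var, avg_depth])

# g: RF regressor scoring expected error reduction for each candidate query.
# Random targets here are a placeholder for the offline Monte Carlo training.
g = RandomForestRegressor(n_estimators=50, random_state=0)
g.fit(state, rng.rand(len(state)))

scores = g.predict(state)
query_idx = int(np.argmax(scores))  # index of the sample to query next
```

The sketch only illustrates the data flow (classifier state features in, query score out); it does not reproduce the paper's actual regressor training.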