Active Learning Through a Covering Lens
Authors: Ofer Yehuda, Avihu Dekel, Guy Hacohen, Daphna Weinshall
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conclude with extensive experiments, evaluating ProbCover in the low-budget regime. We show that our principled active learning strategy improves the state-of-the-art in the low-budget regime in several image recognition benchmarks. ... In Section 4 we empirically evaluate the performance of ProbCover on several computer vision datasets, including CIFAR-10, CIFAR-100, Tiny-ImageNet, ImageNet and its subsets. |
| Researcher Affiliation | Academia | Ofer Yehuda , Avihu Dekel , Guy Hacohen , Daphna Weinshall School of Computer Science & Engineering Edmond and Lily Safra Center for Brain Sciences The Hebrew University of Jerusalem Jerusalem 91904, Israel {ofer.yehuda,avihu.dekel,guy.hacohen,daphna}@mail.huji.ac.il |
| Pseudocode | Yes | Algorithm 1 ProbCover |
| Open Source Code | Yes | Code is available at https://github.com/avihu111/TypiClust. |
| Open Datasets | Yes | We empirically evaluate the performance of ProbCover on several computer vision datasets, including CIFAR-10, CIFAR-100, Tiny-ImageNet, ImageNet and its subsets. ... When considering CIFAR-10/100 and Tiny-ImageNet, we use as input the embedding of SimCLR [9] across all methods. When considering ImageNet we use as input the embedding of DINO [5] throughout. |
| Dataset Splits | No | The paper mentions running experiments for a fixed number of active learning rounds and reporting mean test accuracy, but it does not specify explicit training, validation, and testing dataset splits (e.g., 80/10/10) for the entire dataset used in the experiments. It focuses on how samples are queried and added to the labeled set over rounds. |
| Hardware Specification | No | The paper does not explicitly specify the hardware used for the experiments, such as GPU models, CPU types, or memory. In the checklist at the end, it states: '(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]' |
| Software Dependencies | No | The paper mentions using specific models and frameworks like 'ResNet-18', 'SimCLR', 'DINO', and 'FlexMatch', and an 'evaluation kit created by Munjal et al. [33]'. However, it does not provide specific version numbers for any of these software dependencies or underlying libraries (e.g., Python, PyTorch versions), which is required for reproducibility. |
| Experiment Setup | Yes | Details concerning specific networks and hyper-parameters can be found in App. C, and in the attached code in the supplementary material. |
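For context on the Pseudocode entry: the paper's Algorithm 1 (ProbCover) greedily queries points by max coverage. It builds a directed graph with an edge x→y whenever dist(x, y) < δ, repeatedly selects the point with the highest out-degree, and removes the incoming edges of every point the selection covers. A minimal NumPy sketch of that greedy loop, assuming precomputed feature embeddings; the function name and parameter names here are hypothetical, not from the paper's code:

```python
import numpy as np

def prob_cover_select(features, budget, delta):
    """Greedy max-coverage selection in the spirit of Algorithm 1 (ProbCover).

    features: (n, d) array of embeddings (e.g., SimCLR/DINO features).
    budget:   number of points to query.
    delta:    ball radius; point i covers point j if dist(i, j) < delta.
    """
    # Pairwise Euclidean distances; covered[i, j] = True iff i covers j.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    covered = dists < delta
    selected = []
    for _ in range(budget):
        # Pick the point covering the most still-uncovered points.
        out_degree = covered.sum(axis=1)
        best = int(np.argmax(out_degree))
        selected.append(best)
        # Points covered by `best` need no further coverage: drop their
        # incoming edges so future out-degrees count only uncovered points.
        covered[:, covered[best]] = False
    return selected
```

Two close points and one distant outlier illustrate the behavior: the first query covers the close pair, so the second query goes to the outlier rather than the pair's neighbor.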