Neural Active Learning with Performance Guarantees

Authors: Zhilei Wang, Pranjal Awasthi, Christoph Dann, Ayush Sekhari, Claudio Gentile

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "We prove joint guarantees on the cumulative regret and number of requested labels which depend on the complexity of the labeling function at hand."
Researcher Affiliation | Collaboration | Zhilei Wang (New York University, New York, NY 10012; zhileiwang92@gmail.com); Pranjal Awasthi (Google Research, New York, NY 10011; pranjalawasthi@google.com); Christoph Dann (Google Research, New York, NY 10011; chrisdann@google.com); Ayush Sekhari (Cornell University, Ithaca, NY 14850; ayush.sekhari@gmail.com); Claudio Gentile (Google Research, New York, NY 10011; cgentile@google.com)
Pseudocode | Yes | Algorithm 1: Frozen NTK Selective Sampler. Input: Confidence level δ, complexity parameter S, network width m, and depth n. Initialization: ... (a hedged code sketch of such a sampler follows this table)
Open Source Code | No | The paper does not contain any statement about making its source code publicly available, nor does it provide a link to a code repository.
Open Datasets | No | The paper is theoretical and does not describe experiments run on a specific, publicly available dataset. It refers only to a theoretical construct: "on an i.i.d. sample (x_1, y_1), ..., (x_T, y_T) ~ D".
Dataset Splits | No | The paper is theoretical and describes no empirical experiments, so it does not mention specific training, validation, or testing dataset splits.
Hardware Specification | No | The paper is purely theoretical and describes no empirical experiments that would require specific hardware, so no hardware specifications are mentioned.
Software Dependencies | No | The paper focuses on mathematical proofs and algorithm design; it describes no empirical experiments that would require specific software dependencies with version numbers for reproducibility.
Experiment Setup | No | The paper is theoretical and does not describe empirical experiments, so it does not include specific experimental setup details such as hyperparameters or training configurations.
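
For readers who want a concrete picture of what a selective sampler of this kind does, below is a minimal Python sketch. It is an illustrative assumption, not the paper's Algorithm 1: the frozen NTK gradient features are replaced by a generic feature map `phi`, and the confidence width is a simplified ridge-regression quantity rather than the paper's exact radius in terms of δ, S, m, and the network depth.

    import numpy as np

    def selective_sampler(rounds, phi, d, reg=1.0, delta=0.05, S=1.0):
        """Online selective sampling sketch.

        rounds : list of (x_t, label_oracle) pairs; label_oracle() -> y_t in {-1, +1}
        phi    : feature map (illustrative stand-in for frozen NTK gradient features)
        d      : feature dimension
        """
        A = reg * np.eye(d)   # regularized design matrix over queried points
        b = np.zeros(d)       # sum of y_s * phi(x_s) over queried rounds
        T, n_queries, preds = len(rounds), 0, []

        for x_t, label_oracle in rounds:
            z = phi(x_t)
            theta = np.linalg.solve(A, b)   # ridge estimate from queried data
            margin = float(z @ theta)       # predicted signed margin at x_t
            # Simplified confidence width ~ S * ||z||_{A^{-1}} with a
            # log(T/delta) factor; the paper's exact radius differs.
            width = S * np.sqrt(float(z @ np.linalg.solve(A, z)) * np.log(T / delta))
            preds.append(1.0 if margin >= 0.0 else -1.0)

            if abs(margin) <= width:        # uncertain: request the label
                y_t = label_oracle()
                A += np.outer(z, z)
                b += y_t * z
                n_queries += 1

        return preds, n_queries

A hypothetical usage, with a linear feature map and a linearly separable stream:

    rng = np.random.default_rng(0)
    w_star = rng.normal(size=8)
    xs = rng.normal(size=(200, 8))
    rounds = [(x, lambda x=x: 1.0 if float(x @ w_star) >= 0.0 else -1.0) for x in xs]
    preds, n_queries = selective_sampler(rounds, phi=lambda x: x, d=8)
    print(n_queries, "labels requested out of", len(rounds))

The point of the sketch is the query rule: a label is requested only when the confidence width exceeds the predicted margin, which is the mechanism behind the paper's joint bounds on cumulative regret and number of requested labels.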