Active Learning and Best-Response Dynamics
Authors: Maria-Florina F Balcan, Christopher Berlind, Avrim Blum, Emma Cohen, Kaushik Patnaik, Le Song
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then show experimentally that when combined with recent agnostic active learning algorithms, this process can achieve low error from very few queries, performing substantially better than active or passive learning without these denoising dynamics as well as passive learning with denoising. In Section 5 we implement this algorithm and show that experimentally it learns a low-error decision rule even in cases where the initial value of η is quite high. ... In our experiments we seek to determine whether our overall algorithm of best-response dynamics combined with active learning is effective at denoising the sensors and learning the target boundary. The experiments were run on synthetic data, and compared active and passive learning (with Support Vector Machines) both pre- and post-denoising. |
| Researcher Affiliation | Academia | Maria-Florina Balcan (Carnegie Mellon, ninamf@cs.cmu.edu); Christopher Berlind (Georgia Tech, cberlind@gatech.edu); Avrim Blum (Carnegie Mellon, avrim@cs.cmu.edu); Emma Cohen (Georgia Tech, ecohen@gatech.edu); Kaushik Patnaik (Georgia Tech, kpatnaik3@gatech.edu); Le Song (Georgia Tech, lsong@cc.gatech.edu) |
| Pseudocode | No | The paper describes the algorithm steps in paragraph text and a high-level diagram (Figure 1), but does not present structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide any links to a code repository. |
| Open Datasets | No | Synthetic data. The N sensor locations were generated from a uniform distribution over the unit ball in R², and the target boundary was fixed as a randomly chosen linear separator through the origin. |
| Dataset Splits | No | The paper mentions running experiments on synthetic data and averaging over independent trials, but does not specify details regarding train/validation/test dataset splits, sample counts for splits, or cross-validation setup needed for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions using Support Vector Machines and an active learning algorithm but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, scikit-learn, or specific solver versions). |
| Experiment Setup | Yes | In the denoising phase of the experiments, the sensors applied the basic majority consensus dynamic. That is, each sensor was made to update its label to the majority label of its neighbors within distance r from its location. We used radius values r ∈ {0.025, 0.05, 0.1, 0.2}. Updates of sensor labels were carried out both through simultaneous updates to all the sensors in each iteration (synchronous updates) and updating one randomly chosen sensor in each iteration (asynchronous updates). ... Here we report the results for N = 10000 and r = 0.1. ... For the active algorithm, we used parameters asymptotically matching those given in Awasthi et al. [1] for a uniform distribution. For SVM, we chose for each experiment the regularization parameter that resulted in the best performance. |
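The synthetic setup and denoising dynamic quoted above are simple enough to sketch in code. The following is an illustrative reconstruction, not the authors' implementation: the noise rate `eta`, the number of rounds, and the smaller `N` (the paper reports N = 10000; we shrink it to keep the pairwise-distance matrix cheap) are our own choices, and ties in the majority vote are broken by keeping the old label, a detail the paper does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; the paper reports N = 10000 and r = 0.1,
# and eta (the initial sensor noise rate) is our assumption.
N, r, eta, rounds = 2000, 0.1, 0.35, 5

# Sensor locations: uniform over the unit ball in R^2, via rejection
# sampling from the enclosing square.
pts = rng.uniform(-1.0, 1.0, size=(4 * N, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0][:N]

# Target boundary: a randomly chosen linear separator through the origin.
w = rng.normal(size=2)
truth = np.where(pts @ w >= 0, 1, -1)

# Each sensor's initial label is the true label, flipped with prob. eta.
labels = np.where(rng.random(N) < eta, -truth, truth)
err_before = np.mean(labels != truth)

# Neighborhood graph: sensors within distance r (each sensor counts itself).
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
nbrs = (d2 <= r ** 2).astype(np.int64)

# Synchronous majority-consensus dynamics: every sensor simultaneously
# adopts the majority label among its neighbors; ties keep the old label.
for _ in range(rounds):
    votes = nbrs @ labels
    labels = np.where(votes > 0, 1, np.where(votes < 0, -1, labels))

err_after = np.mean(labels != truth)
print(f"error before denoising: {err_before:.3f}, after: {err_after:.3f}")
```

The asynchronous variant described in the paper would instead pick one random sensor per iteration and apply the same majority rule to it alone; the synchronous version above is shown because it vectorizes naturally. After denoising, the cleaned labels would be fed to an active learner (the paper uses the algorithm of Awasthi et al. [1]) or a passive SVM.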