On-the-Job Learning with Bayesian Decision Theory
Authors: Keenon Werling, Arun Tejasvi Chaganty, Percy S. Liang, Christopher D. Manning
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested our approach on three datasets: named-entity recognition, sentiment classification, and image classification. |
| Researcher Affiliation | Academia | Keenon Werling, Department of Computer Science, Stanford University, keenon@cs.stanford.edu; Arun Chaganty, Department of Computer Science, Stanford University, chaganty@cs.stanford.edu; Percy Liang, Department of Computer Science, Stanford University, pliang@cs.stanford.edu; Christopher D. Manning, Department of Computer Science, Stanford University, manning@cs.stanford.edu |
| Pseudocode | Yes | Algorithm 1: Approximating expected utility with MCTS and progressive widening (a generic illustrative sketch of this technique appears after the table). |
| Open Source Code | Yes | An open-source implementation of our system, dubbed LENSE (Learning from Expensive Noisy Slow Experts), is available at http://www.github.com/keenon/lense. |
| Open Datasets | Yes | All code, data, and experiments for this paper are available on CodaLab at https://www.codalab.org/worksheets/0x2ae89944846444539c2d08a0b7ff3f6f/. |
| Dataset Splits | Yes | All code, data, and experiments for this paper are available on CodaLab at https://www.codalab.org/worksheets/0x2ae89944846444539c2d08a0b7ff3f6f/. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory specifications used for experiments. |
| Software Dependencies | No | The paper mentions software components and models like 'CRF prediction model' and 'AdaGrad', but it does not specify version numbers for any libraries or frameworks. |
| Experiment Setup | No | The paper describes the general experimental process and baselines but does not specify concrete hyperparameter values or detailed training configurations in the main text. |
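
The pseudocode row above refers to the paper's Algorithm 1, which approximates expected utility with Monte Carlo tree search (MCTS) and progressive widening. As a rough illustration of that family of techniques, the sketch below shows a generic MCTS loop with progressive widening over a user-supplied simulator. The `Simulator` interface, `Node` structure, function names, and constants (`c_uct`, `pw_k`, `pw_alpha`) are assumptions made for this sketch, not the authors' LENSE implementation, and the sketch assumes deterministic transitions for simplicity.

```python
import math


class Simulator:
    """Hypothetical environment interface; the caller supplies the real dynamics."""

    def sample_action(self, state):
        """Sample a candidate (hashable) action to consider from `state`."""
        raise NotImplementedError

    def step(self, state, action):
        """Return (next_state, reward, done) for taking `action` in `state`."""
        raise NotImplementedError

    def rollout_value(self, state):
        """Cheap estimate of the utility of `state` (e.g. a random rollout)."""
        raise NotImplementedError


class Node:
    def __init__(self, state):
        self.state = state
        self.visits = 0
        # action -> [child Node, immediate reward, total return, visit count, terminal?]
        self.children = {}


def estimate_expected_utility(root_state, sim, iterations=1000,
                              c_uct=1.0, pw_k=1.0, pw_alpha=0.5, max_depth=20):
    """Run `iterations` simulations from `root_state` and return the estimated
    expected utility of the best action found at the root."""
    root = Node(root_state)
    for _ in range(iterations):
        _simulate(root, sim, c_uct, pw_k, pw_alpha, max_depth)
    if not root.children:
        return sim.rollout_value(root_state)
    return max(total / count for (_, _, total, count, _) in root.children.values())


def _simulate(node, sim, c_uct, pw_k, pw_alpha, depth):
    if depth == 0:
        return sim.rollout_value(node.state)
    node.visits += 1

    # Progressive widening: only add a new child action while the number of
    # children is below k * visits^alpha, so the branching factor grows slowly.
    if len(node.children) < pw_k * (node.visits ** pw_alpha):
        action = sim.sample_action(node.state)
        if action not in node.children:
            next_state, reward, done = sim.step(node.state, action)
            value = reward + (0.0 if done else sim.rollout_value(next_state))
            node.children[action] = [Node(next_state), reward, value, 1, done]
            return value

    # Otherwise select an existing child with the UCT rule and recurse.
    def uct_score(item):
        _child, _reward, total, count, _done = item[1]
        return total / count + c_uct * math.sqrt(math.log(node.visits) / count)

    action, entry = max(node.children.items(), key=uct_score)
    child, reward, total, count, done = entry
    value = reward + (0.0 if done else
                      _simulate(child, sim, c_uct, pw_k, pw_alpha, depth - 1))
    entry[2] = total + value
    entry[3] = count + 1
    return value
```

A caller would subclass `Simulator` with the task-specific decision problem (in the paper's setting, roughly, choosing whether to query crowd workers, wait, or return an answer) and invoke `estimate_expected_utility` at each decision point; the actual published algorithm should be consulted in the paper and the LENSE repository linked above.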