Uncertainty Sampling is Preconditioned Stochastic Gradient Descent on Zero-One Loss
Authors: Stephen Mussmann, Percy S. Liang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real datasets support this connection. |
| Researcher Affiliation | Academia | Stephen Mussmann, Department of Computer Science, Stanford University, Stanford, CA (mussmann@stanford.edu); Percy Liang, Department of Computer Science, Stanford University, Stanford, CA (pliang@cs.stanford.edu) |
| Pseudocode | Yes | Algorithm 1 Uncertainty Sampling (a hedged sketch follows the table) |
| Open Source Code | Yes | The code, data, and experiments for this paper are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/0xf8dfe5bcc1dc408fb54b3cc15a5abce8/. |
| Open Datasets | Yes | We collected 25 datasets from OpenML (retrieved August 2017) that had a large number of data points and where logistic regression outperformed the majority classifier (predict the majority label). |
| Dataset Splits | No | The paper states, 'We further subsampled each dataset to have 10,000 points, which was divided into 7000 training points and 3000 test points.' It mentions training and test sets, but no explicit validation set. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models or specific machine types used for experiments. |
| Software Dependencies | No | The paper mentions logistic regression and general concepts, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, scikit-learn versions). |
| Experiment Setup | Yes | We ran uncertainty sampling on each dataset with random seed sets of sizes that are powers of two from 2 to 4096 and then 7000. We stopped when uncertainty sampling did not choose an unlabeled point for 1000 iterations. (A sketch of this sweep appears after the table.) |
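
The pseudocode row refers to the paper's Algorithm 1, which is not reproduced on this page. As a rough illustration only (not the authors' code), the sketch below assumes binary labels in {0, 1}, a pool where the most uncertain point is selected over the *whole* pool each iteration (so already-labeled points may be re-selected), and single logistic-loss gradient steps, which echoes the paper's view of uncertainty sampling as a preconditioned SGD-style update. All function names, the learning rate, and the warm-start scheme are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uncertainty_sampling(X, y, seed_indices, lr=0.1, patience=1000):
    """Hedged sketch of pool-based uncertainty sampling for binary
    logistic regression (labels in {0, 1}).

    Each iteration selects the point with predicted probability closest
    to 1/2 over the whole pool and takes one logistic-loss gradient step
    on it. Stops once no new unlabeled point has been chosen for
    `patience` consecutive iterations (the stopping rule quoted in the
    table).
    """
    n, d = X.shape
    theta = np.zeros(d)
    labeled = {int(i) for i in seed_indices}

    # Warm start on the seed set with a few gradient passes; the paper
    # trains an initial model on the seed set, and this is a crude stand-in.
    for _ in range(100):
        for i in labeled:
            p = sigmoid(X[i] @ theta)
            theta -= lr * (p - y[i]) * X[i]

    stale = 0  # iterations since a new unlabeled point was chosen
    while stale < patience:
        probs = sigmoid(X @ theta)
        chosen = int(np.argmin(np.abs(probs - 0.5)))  # closest to boundary
        if chosen in labeled:
            stale += 1
        else:
            labeled.add(chosen)  # querying this point reveals y[chosen]
            stale = 0
        # One gradient step on the logistic loss of the chosen point.
        theta -= lr * (probs[chosen] - y[chosen]) * X[chosen]
    return theta, labeled
```

The experiment-setup row describes a sweep over seed set sizes; a minimal sketch of that outer loop, reusing the `uncertainty_sampling` function above, might look like the following. The synthetic data and seeding scheme are stand-ins (the actual experiments use 25 OpenML datasets, each subsampled to a 7000/3000 train/test split).

```python
import numpy as np

rng = np.random.default_rng(0)  # the paper's seeding scheme is unspecified

# Stand-in synthetic data sized to the paper's 7000-point training split.
X_train = rng.standard_normal((7000, 10))
y_train = (X_train[:, 0] + 0.5 * rng.standard_normal(7000) > 0).astype(int)

# Seed set sizes from the experiment-setup row: powers of two from 2 to
# 4096, plus 7000 (the full training set).
seed_sizes = [2 ** k for k in range(1, 13)] + [7000]

for size in seed_sizes:
    seed_indices = rng.choice(7000, size=size, replace=False)
    theta, labeled = uncertainty_sampling(X_train, y_train, seed_indices,
                                          patience=1000)
```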
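With `patience=1000`, each run ends only after uncertainty sampling has gone 1000 consecutive iterations without selecting a new unlabeled point, matching the stopping criterion quoted in the table; the gradient-step details above are illustrative choices, not the paper's exact procedure.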