Neural Pseudo-Label Optimism for the Bank Loan Problem
Authors: Aldo Pacchiano, Shaun Singh, Edward Chou, Alexander C. Berg, Jakob Foerster
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of PLOT on three binary classification problems adapted to the BLP setting. In the top row of Figure 1, we provide cumulative regret plots for the above datasets and methods. |
| Researcher Affiliation | Industry | Aldo Pacchiano (Microsoft Research), Shaun Singh (FAIR), Edward Chou (FAIR), Alexander C. Berg (FAIR), Jakob Foerster (FAIR) |
| Pseudocode | Yes | Algorithm 1 Pseudo-Labels for Optimism (PLOT) |
| Open Source Code | Yes | Google Colab: shorturl.at/pzDY7 We have included our code in the supplemental material. |
| Open Datasets | Yes | We focus on two datasets from the UCI Collection [12], the Adult dataset and the Bank dataset. Additionally we make use of MNIST [26] (d=784). |
| Dataset Splits | No | The paper describes an online learning setting where data is observed sequentially and mentions training on 'accepted points' but does not specify explicit train/validation/test splits with percentages or counts for reproducibility. |
| Hardware Specification | Yes | Each experiment runs on a single Nvidia Pascal GPU, and replicated experiments, distinct datasets, and methods can be run in parallel, depending on GPU availability. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | In our experiments, we set the PLOT weight parameter to 1, equivalent to simply adding the pseudo-label point to the dataset. We set the PLOT radius parameter to ∞, thus including all prior observed points in the training dataset. For computational efficiency, we run our method on batches of data, with batch size n = 32. We average results over 5 runs, running for a horizon of t = 2000 time-steps. We report results for a two-layer, 40-node, fully-connected neural network. At each timestep, we train this neural network on the above data, for a fixed number of steps. |
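
For concreteness, below is a minimal sketch of one PLOT round as described in the Pseudocode and Experiment Setup rows above. It is an illustration under stated assumptions, not the authors' implementation: PyTorch is assumed, the helper names (`make_mlp`, `train_fixed_steps`, `plot_round`) are invented, and a fresh network is trained each round (the paper says the network is retrained each timestep for a fixed number of steps, but the quoted setup does not say whether training is warm-started).

```python
import torch
import torch.nn as nn

def make_mlp(d_in: int) -> nn.Module:
    # Two-layer, 40-node, fully-connected network, per the paper's setup.
    return nn.Sequential(
        nn.Linear(d_in, 40), nn.ReLU(),
        nn.Linear(40, 40), nn.ReLU(),
        nn.Linear(40, 1),  # single logit for binary accept/reject
    )

def train_fixed_steps(model: nn.Module, X: torch.Tensor, y: torch.Tensor,
                      steps: int = 100, lr: float = 1e-3) -> nn.Module:
    # "Fixed number of steps" per timestep; 100 steps and Adam(lr=1e-3)
    # are assumptions, since the quoted setup does not give these values.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
    return model

def plot_round(X_acc: torch.Tensor, y_acc: torch.Tensor,
               X_batch: torch.Tensor) -> torch.Tensor:
    # Optimism via pseudo-labels: append the incoming batch with positive
    # labels. Weight parameter 1 means each pseudo-labeled point is added
    # once; radius ∞ means all previously accepted points stay in the set.
    X_opt = torch.cat([X_acc, X_batch])
    y_opt = torch.cat([y_acc, torch.ones(len(X_batch))])
    model = train_fixed_steps(make_mlp(X_opt.shape[1]), X_opt, y_opt)
    # Accept exactly the points the optimistic model classifies positive.
    return torch.sigmoid(model(X_batch).squeeze(-1)) > 0.5
```

An outer loop would stream batches of n = 32 points for t = 2000 timesteps, appending each accepted point and its revealed true label to `(X_acc, y_acc)`; rejected points never reveal a label, which is what makes the optimistic pseudo-labeling necessary in the bank loan setting.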