Towards optimally abstaining from prediction with OOD test examples

Authors: Adam Kalai, Varun Kanade

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our primary contributions in this paper are mathematical; the contributions, limitations and open problems in mathematical terms have already been discussed in the body of the paper. ... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A] (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A] (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
Researcher Affiliation | Collaboration | Adam Tauman Kalai (Microsoft Research); Varun Kanade (University of Oxford)
Pseudocode | Yes | Algorithm MMA. Input: x, x̄ ∈ X^n, ȳ ∈ Y^n, h : X → Y, and approximate loss-maximizer O. Output: a ∈ [0, 1]^n. Find and output a point in the convex set K := {â ∈ [0, 1]^n : L_x(VS(x̄, ȳ), h, â) ≤ min_{a ∈ [0,1]^n} L_x(VS(x̄, ȳ), h, a) + 1/n} by running the Ellipsoid algorithm with the following separation oracle SEP(a) for K: 1. If a ∉ [0, 1]^n, i.e., some a_i ∉ [0, 1], then output the unit vector v with v_i = 1 if a_i > 1 or v_i = −1 if a_i < 0. 2. Otherwise, output the separator v := (c − ℓ(g(x_1), h(x_1)), ..., c − ℓ(g(x_n), h(x_n))), where g := O(x̄, ȳ, h, a). // g ∈ VS(x̄, ȳ) maximizes L_x(g, h, a) to within 1/(3n). (Figure 2; see the code sketch after this table.)
Open Source Code | No | The paper is theoretical and does not mention releasing any source code. Under the checklist question '3. If you ran experiments...', point (a) 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is marked '[N/A]'. This indicates no source code is provided.
Open Datasets | No | The paper is purely theoretical and does not conduct experiments on real-world datasets. Under the checklist questions related to experiments and data usage (points 3 and 4), all are marked '[N/A]' (Not Applicable), indicating no datasets were used in an empirical study.
Dataset Splits | No | The paper is purely theoretical and does not conduct experiments or specify dataset splits for training, validation, or testing. Under the checklist question '3. If you ran experiments...', point (b) 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?' is marked '[N/A]'.
Hardware Specification | No | The paper is purely theoretical and does not describe any experimental hardware. Under the checklist question '3. If you ran experiments...', point (d) 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?' is marked '[N/A]'.
Software Dependencies | No | The paper is purely theoretical and does not describe specific software dependencies with version numbers for experimental replication. Under the checklist question '3. If you ran experiments...', point (a) 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?' is marked '[N/A]'. This indicates no software dependencies for experiments are listed.
Experiment Setup | No | The paper is purely theoretical and does not describe an experimental setup with specific hyperparameters or training configurations. Under the checklist question '3. If you ran experiments...', point (b) 'Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?' is marked '[N/A]'.
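For illustration only, below is a minimal Python sketch of the separation oracle SEP(a) quoted in the Pseudocode row above. The approximate loss-maximizer O, the hypothesis h, the per-example loss ℓ, and the abstention cost c are stand-ins with simplified, hypothetical signatures, and the Ellipsoid outer loop is omitted; this is not the authors' implementation.

```python
import numpy as np

def sep_oracle(a, x, h, O, loss, c):
    """Sketch of SEP(a) from the MMA pseudocode (hedged, simplified signatures).

    a    : candidate abstention vector, length n
    x    : the n test points
    h    : the learned hypothesis (callable)
    O    : assumed callable O(a) returning a g that approximately maximizes
           L_x(g, h, a) over the version space VS(x_bar, y_bar)
    loss : assumed per-example loss loss(y_pred, y_ref)
    c    : abstention cost
    """
    n = len(a)
    # Case 1: a lies outside the box [0, 1]^n -> separate with a +/- unit vector.
    for i in range(n):
        if a[i] > 1:
            v = np.zeros(n)
            v[i] = 1.0
            return v
        if a[i] < 0:
            v = np.zeros(n)
            v[i] = -1.0
            return v
    # Case 2: a is inside the box; query the approximate maximizer and return
    # the separator (c - loss(g(x_i), h(x_i)))_i from the pseudocode.
    g = O(a)
    return np.array([c - loss(g(xi), h(xi)) for xi in x])
```

In the paper this oracle is consumed by the Ellipsoid algorithm to find a point of K; any cutting-plane solver could, in principle, use it the same way.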