Improving Conditional Coverage via Orthogonal Quantile Regression
Authors: Shai Feldman, Stephen Bates, Yaniv Romano
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use these (and other) metrics in Section 5 to study our proposal on simulated data and nine real benchmark data sets. We find that our training scheme yields improvements when used together with both a classic [6] and a more recent [15] quantile regression method. |
| Researcher Affiliation | Academia | Shai Feldman Department of Computer Science Technion, Israel shai.feldman@cs.technion.ac.il; Stephen Bates Departments of Statistics and of EECS UC Berkeley stephenbates@cs.berkeley.edu; Yaniv Romano Departments of Electrical and Computer Engineering and of Computer Science Technion, Israel yromano@technion.ac.il |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor any structured, code-like descriptions of a procedure. |
| Open Source Code | Yes | Software implementing the proposed method and reproducing our experiments can be found at https://github.com/Shai128/oqr |
| Open Datasets | Yes | Next, we compare the performance of the proposed orthogonal QR to vanilla QR on nine benchmark data sets as in [15, 30]: Facebook comment volume variants one and two (facebook_1, facebook_2), blog feedback (blog_data), physicochemical properties of protein tertiary structure (bio), forward kinematics of an 8-link robot arm (kin8nm), condition-based maintenance of naval propulsion plants (naval), and medical expenditure panel survey numbers 19-21 (meps_19, meps_20, and meps_21). See Section S3.2 of the Supplementary Material for details about these data sets. |
| Dataset Splits | Yes | We randomly split each data set into disjoint training (54%), validation (6%), and testing sets (40%). |
| Hardware Specification | No | The paper mentions training a 'deep neural network' and conducting experiments, but it does not specify any hardware details such as GPU or CPU models used for these experiments. |
| Software Dependencies | No | The paper points to a GitHub repository for its code, but the paper text itself does not list specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). |
| Experiment Setup | No | The paper states that 'Section S4 of the Supplementary Material gives the details about the network architecture, training strategy, and details about this experimental setup.' Specific experimental setup details, such as hyperparameters and training configurations, are therefore provided in the supplementary material rather than in the main text. |
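The 54%/6%/40% train/validation/test split reported above is straightforward to reproduce. The sketch below is illustrative only (it is not the authors' code from the linked repository, and the placeholder arrays stand in for any of the nine benchmark data sets); it realizes the stated proportions with two successive `train_test_split` calls.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data standing in for one of the benchmark data sets.
X = np.random.rand(1000, 5)
y = np.random.rand(1000)

# First carve out the 40% test set...
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.40, random_state=0)

# ...then split the remaining 60% into 54% train / 6% validation
# (6 / 60 = 0.10 of the remainder).
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.10, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 540 60 400
```

Splitting the test set off first keeps the test fraction exact regardless of how the remaining data is later divided between training and validation.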