Sparse High-Dimensional Isotonic Regression
Authors: David Gamarnik, Julia Gaudio
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We close with experiments on cancer classification, and show that our method significantly outperforms several standard methods. |
| Researcher Affiliation | Academia | David Gamarnik, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA 02139, gamarnik@mit.edu; Julia Gaudio, Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA 02139, jgaudio@mit.edu |
| Pseudocode | Yes | Algorithm 1: Integer Programming Isotonic Regression (IPIR); Algorithm 2: Linear Programming Support Recovery (LPSR); Algorithm 3: Sequential Linear Programming Support Recovery (S-LPSR); Algorithm 4: Two Stage Isotonic Regression (TSIR). (See the LP sketch below the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The data is drawn from the COSMIC database [9], which is widely used in quantitative research in cancer biology. [9] Simon A. Forbes, Nidhi Bindal, Sally Bamford, Charlotte Cole, Chai Yin Kok, David Beare, Mingming Jia, Rebecca Shepherd, Kenric Leung, Andrew Menzies, Jon W. Teague, Peter J. Campbell, Michael R. Stratton, and P. Andrew Futreal. COSMIC: mining complete cancer genomes in the Catalogue of Somatic Mutations in Cancer. Nucleic Acids Research, 39(1):D945–D950, 2011. |
| Dataset Splits | No | The paper specifies training and test data, but does not explicitly mention validation splits or their sizes/percentages. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | Yes | All algorithms were implemented in Java version 8, using Gurobi version 6.0.0. |
| Experiment Setup | Yes | We keep s = 3 fixed and vary d and n. The error is Gaussian with mean 0 and variance 0.1, independent across coordinates. For k-Nearest Neighbors, k ∈ {1, 3, 5, 7, 9, 11, 15}, and for SVM, C ∈ {10, 100, 500, 1000} and m ∈ {1, 2, 3, 4}. (See the baseline-grid sketch below the table.) |
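
The pseudocode row lists four optimization-based procedures (IPIR, LPSR, S-LPSR, TSIR). As a rough illustration of the kind of model such procedures solve, the sketch below fits values under coordinate-wise monotonicity constraints via an L1 linear program using SciPy's HiGHS backend. This is a generic monotone fit, not the paper's IPIR/LPSR formulation; the function name `monotone_l1_fit` and the synthetic data are hypothetical.

```python
# Minimal sketch: L1 isotonic (monotone) regression over a coordinate-wise
# partial order, posed as a linear program. Generic illustration only; NOT the
# paper's IPIR/LPSR formulation.
import numpy as np
from scipy.optimize import linprog


def monotone_l1_fit(X, y):
    """Find f minimizing sum_i |y_i - f_i| subject to
    f_i <= f_j whenever X[i] <= X[j] coordinate-wise."""
    n = len(y)
    # Decision vector z = [f_1..f_n, t_1..t_n]; minimize the sum of slacks t.
    c = np.concatenate([np.zeros(n), np.ones(n)])

    rows, b = [], []
    # |y_i - f_i| <= t_i, written as two linear inequalities.
    for i in range(n):
        r1 = np.zeros(2 * n); r1[i] = -1.0; r1[n + i] = -1.0
        rows.append(r1); b.append(-y[i])          # y_i - f_i <= t_i
        r2 = np.zeros(2 * n); r2[i] = 1.0; r2[n + i] = -1.0
        rows.append(r2); b.append(y[i])           # f_i - y_i <= t_i
    # Monotonicity: f_i <= f_j for every comparable pair X[i] <= X[j].
    for i in range(n):
        for j in range(n):
            if i != j and np.all(X[i] <= X[j]):
                r = np.zeros(2 * n); r[i] = 1.0; r[j] = -1.0
                rows.append(r); b.append(0.0)

    bounds = [(None, None)] * n + [(0, None)] * n   # f free, slacks t >= 0
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b),
                  bounds=bounds, method="highs")
    return res.x[:n]                                # fitted values


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((30, 3))                         # e.g. s = 3 active coordinates
    y = X.sum(axis=1) + rng.normal(0, np.sqrt(0.1), size=30)
    print(monotone_l1_fit(X, y)[:5])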
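
For the baseline comparison quoted in the experiment-setup row, a minimal scikit-learn sketch of the stated hyperparameter grids follows. Only the k, C, and m grids come from the quoted setup; the synthetic binary data, the train/test split, and the reading of m as the SVM polynomial-kernel degree are assumptions.

```python
# Minimal sketch of the quoted baseline grids with scikit-learn. The data,
# split, and the interpretation of m as the polynomial degree are assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 50))                # hypothetical binary features
y = (X[:, :3].sum(axis=1) >= 2).astype(int)           # hypothetical monotone labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Grids from the quoted setup: k for kNN, C and m (assumed degree) for the SVM.
knn = GridSearchCV(KNeighborsClassifier(),
                   {"n_neighbors": [1, 3, 5, 7, 9, 11, 15]}, cv=5)
svm = GridSearchCV(SVC(kernel="poly"),
                   {"C": [10, 100, 500, 1000], "degree": [1, 2, 3, 4]}, cv=5)

for name, model in [("kNN", knn), ("SVM", svm)]:
    model.fit(X_tr, y_tr)
    print(name, model.best_params_, "test accuracy:", model.score(X_te, y_te))
```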