Joint quantile regression in vector-valued RKHSs
Authors: Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on benchmark and real datasets highlight the enhancements of our approach regarding the prediction error, the crossing occurrences and the training time. |
| Researcher Affiliation | Academia | Maxime Sangnier, Olivier Fercoq, Florence d'Alché-Buc; LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013, Paris, France |
| Pseudocode | Yes | Algorithm 1 Primal-Dual Coordinate Descent. |
| Open Source Code | No | The paper mentions using CVXOPT, a third-party tool, but does not provide any link or explicit statement about releasing the source code for their own methodology or implementation. The footnotes point to data sources, not code. |
| Open Datasets | Yes | To present an honorable comparison of these four methods, we did not choose datasets for the benefit of our method but considered the ones used in [26]. These 20 datasets (whose names are indicated in Table 1) come from the UCI repository and three R packages: quantreg, alr3 and MASS. ... Data are available at www.census.gov/census2000/PUMS5.html and www.nber.org/data/vital-statistics-natality-data.html. |
| Dataset Splits | Yes | Results are given in Table 1 thanks to the mean and the standard deviation of the test losses recorded on 20 random train-test splits with ratio 0.7-0.3. |
| Hardware Specification | No | The paper mentions 'CPU time' in Table 2, but it does not specify any details about the CPU model, GPU, or any other hardware components used for the experiments. |
| Software Dependencies | Yes | [2] M.S. Anderson, J. Dahl, and L. Vandenberghe. CVXOPT: A Python package for convex optimization, version 1.1.5., 2012. |
| Experiment Setup | Yes | Quantile levels of interest are τ = (0.1, 0.3, 0.5, 0.7, 0.9). ... The parameter C is chosen by cross-validation (minimizing the pinball loss) inside a logarithmic grid (10^-5, 10^-4, ..., 10^5) for all methods and datasets. For our approach (JQR), the parameter γ is chosen in the same grid as C with extra candidates 0 and +∞. ... Parameters for the models are: (C, γ) = (10^2, 10^-2). |
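The setup above selects C by minimizing the pinball (quantile) loss over a logarithmic grid. As a minimal sketch of what that criterion computes, the snippet below implements the standard pinball loss and the grid; the toy arrays `y_true` and `y_pred` are hypothetical illustration data, not from the paper.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at level tau.

    Under-predictions (y_true > y_pred) are weighted by tau,
    over-predictions by (1 - tau).
    """
    residual = y_true - y_pred
    return np.mean(np.maximum(tau * residual, (tau - 1) * residual))

# Quantile levels used in the paper's experiments
taus = (0.1, 0.3, 0.5, 0.7, 0.9)

# Logarithmic grid (10^-5, ..., 10^5) searched for C
C_grid = np.logspace(-5, 5, 11)

# Hypothetical toy data to illustrate the loss computation
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.5, 2.5])
losses = {tau: pinball_loss(y_true, y_pred, tau) for tau in taus}
```

Cross-validation would evaluate this loss, summed over the quantile levels, for each candidate C in `C_grid` and keep the minimizer.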