Hard Shape-Constrained Kernel Machines

Authors: Pierre-Cyril Aubin-Frankowski, Zoltán Szabó

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental We prove performance guarantees on the solution, and demonstrate the efficiency of the approach in joint quantile regression with applications to economics and to the analysis of aircraft trajectories, among others. In this section we demonstrate the efficiency of the presented SOC technique to solve hard shape-constrained problems.
Researcher Affiliation Academia Pierre-Cyril Aubin-Frankowski École des Ponts Paris Tech and CAS MINES Paris Tech, PSL Paris, 75006, France pierre-cyril.aubin@mines-paristech.fr Zoltán Szabó Center of Applied Mathematics, CNRS École Polytechnique, Institut Polytechnique de Paris Route de Saclay, Palaiseau, 91128, France zoltan.szabo@polytechnique.edu
Pseudocode No The paper presents mathematical formulations but no explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes The code replicating our numerical experiments is available at https://github.com/PCAubin/Hard-Shape-Constraints-for-Kernels.
Open Datasets Yes We considered 9 UCI benchmarks.
Dataset Splits Yes Each dataset was split into training (70%) and test (30%) sets; the split and the experiment were repeated twenty times. For each split, we optimized the hyperparameters (σ, λf, λb) of SOC, searching over a grid to minimize the pinball loss through a 5-fold cross validation on the training set.
Hardware Specification Yes the experiments took from seconds to a few minutes to run on an i7-CPU 16GB-RAM laptop.
Software Dependencies No We used CVXGEN (Mattingley and Boyd, 2012) to solve (Pη); The paper mentions 'CVXGEN' but does not specify a version number.
Experiment Setup Yes In our experiments we used a Gaussian kernel with bandwidth σ, ridge regularization parameters λf and λb (or upper bounds λf on Σ_{q∈[Q]} ‖f_q‖²_k and λb on ‖b‖²). We learned jointly five quantile functions (τ_q ∈ {0.1, 0.3, 0.5, 0.7, 0.9}). For each split, we optimized the hyperparameters (σ, λf, λb) of SOC, searching over a grid to minimize the pinball loss through a 5-fold cross validation on the training set.
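The evaluation protocol quoted above (70/30 train/test split, grid search over hyperparameters via 5-fold cross validation minimizing the pinball loss across five quantile levels) can be sketched as follows. This is a minimal illustration, not the paper's method: a plain Gaussian-kernel ridge estimator shifted to the empirical residual quantile stands in for the SOC-constrained solver, and all function names and grid values are hypothetical.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold, train_test_split

QUANTILES = [0.1, 0.3, 0.5, 0.7, 0.9]  # the tau_q levels used in the paper

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss at level tau."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

def fit_predict(X_tr, y_tr, X_te, sigma, lam, tau):
    """Stand-in quantile estimator: Gaussian-kernel ridge regression
    shifted by the tau-quantile of its training residuals.
    The paper's SOC-constrained solver is NOT reproduced here."""
    sq = ((X_tr[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), y_tr)
    sq_te = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    K_te = np.exp(-sq_te / (2 * sigma ** 2))
    resid = y_tr - K @ alpha
    return K_te @ alpha + np.quantile(resid, tau)

def evaluate(X, y, grid, seed=0):
    """One repetition of the protocol: 70/30 split, then a grid search
    over (sigma, lam) by 5-fold CV minimizing the pinball loss summed
    over the five quantile levels; returns test loss at the best point."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    best_params, best_cv = None, np.inf
    for sigma, lam in product(grid["sigma"], grid["lam"]):
        cv_loss = 0.0
        kf = KFold(n_splits=5, shuffle=True, random_state=seed)
        for tr, va in kf.split(X_tr):
            for tau in QUANTILES:
                pred = fit_predict(X_tr[tr], y_tr[tr], X_tr[va],
                                   sigma, lam, tau)
                cv_loss += pinball_loss(y_tr[va], pred, tau)
        if cv_loss < best_cv:
            best_cv, best_params = cv_loss, (sigma, lam)
    sigma, lam = best_params
    test_loss = sum(
        pinball_loss(y_te, fit_predict(X_tr, y_tr, X_te, sigma, lam, tau), tau)
        for tau in QUANTILES)
    return best_params, test_loss
```

In the paper this whole procedure is repeated over twenty independent splits; the sketch shows a single repetition controlled by `seed`.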