Piecewise Linear Regression via a Difference of Convex Functions
Authors: Ali Siahkamari, Aditya Gangrade, Brian Kulis, Venkatesh Saligrama
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we apply our method to both synthetic and real datasets for regression and multi-class classification. The datasets were chosen to fit in the regime of n ≲ 10³, d ≲ 10², as described in the introduction. All results are averaged over 100 runs and are reported with the 95% confidence interval. |
| Researcher Affiliation | Academia | ¹Department of Electrical and Computer Engineering, Boston University; ²Division of Systems Engineering, Boston University. Correspondence to: Ali Siahkamari <siaa@bu.edu>. |
| Pseudocode | Yes | A parallel solver for the SRM in program (6) is given as Algorithm 1 in Appx. E, via the ADMM method (Boyd et al., 2011). (A generic ADMM sketch follows the table.) |
| Open Source Code | Yes | Our code, along with the other algorithms, is available in our GitHub repository: https://github.com/Siahkamari/Piecewise-linear-regression-via-a-difference-of-convex-functions |
| Open Datasets | Yes | Multi-class classification We used popular UCI classification datasets for testing our classification algorithm. Regression on Real Data We apply the stated methods to various moderately sized regression datasets that are available in the MATLAB statistical machine learning library. The results are presented in Fig. 3. ... See Appx. D for a description of each of the datasets studied. |
| Dataset Splits | Yes | We repeated the experiments 100 times. ... We present the mean and 95% C.I.s on a 2-fold random cross-validation set in Fig. 4. ... multi-layer perceptron (neural network) with a variable number of hidden layers chosen from 1:10 by 5-fold cross validation, Multivariate Adaptive Regression Splines (MARS), and K-nearest neighbour (K-NN), where the best value of K was chosen by 5-fold cross validation from 1:10. (A cross-validation sketch follows the table.) |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory) were mentioned for running experiments. |
| Software Dependencies | No | In both cases, we have used the MATLAB Statistics and Machine Learning library for its implementations of MLP, KNN and SVM. For MARS we used the open-source implementation in the ARESLab toolbox, implemented in MATLAB. |
| Experiment Setup | Yes | For the DC function fitting procedure, we note that the theoretical value for the regularization weight tends to oversmooth the estimators. ... we simply set 1/(2M) = 1 and choose the weight, i.e. λ in (6), by cross validation over the set 2^j · D̂_n(DC1) for j ∈ [−8 : 1]. For the regression task we use the L1 empirical loss in our algorithm instead of L2; that is, the objective in (6) is replaced by ∑_i \|y_i − ŷ_i\|. For the multi-class classification task we adopt the multiclass hinge loss to train our model, i.e. the loss ∑_{j ≠ y_i} max(f_j(x_i) − f_{y_i}(x_i) + 1, 0), where m is the number of classes and the f_j are DC functions. The multi-layer perceptron (neural network) has a variable number of hidden layers chosen from 1:10 by 5-fold cross validation. (Both losses and the λ grid search are sketched in code after the table.) |
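
The paper provides its parallel solver only as pseudocode (Algorithm 1 in Appx. E), built on the ADMM method of Boyd et al. (2011). As a point of reference, here is a minimal generic ADMM loop for a lasso-style problem; the problem choice, variable names, and parameters are our own illustration, not the paper's Algorithm 1 for program (6).

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the prox operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Generic ADMM for: min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.

    Same splitting template as Boyd et al. (2011); NOT the paper's solver.
    """
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u is the scaled dual
    AtA_rhoI = A.T @ A + rho * np.eye(n)  # reused in every x-update
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # x-update
        z = soft_threshold(x + u, lam / rho)                # z-update (prox)
        u = u + x - z                                       # dual update
    return z
```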
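The hyperparameter protocol quoted in the Dataset Splits row (K for K-NN and the hidden-layer count for the MLP, each picked from 1:10 by 5-fold cross validation) is a standard grid search. A sketch using scikit-learn rather than the MATLAB library the authors used:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

def tune_knn(X, y):
    """Pick K in 1..10 by 5-fold cross validation (scikit-learn stand-in)."""
    grid = GridSearchCV(KNeighborsRegressor(),
                        {"n_neighbors": list(range(1, 11))},
                        cv=5, scoring="neg_mean_absolute_error")
    grid.fit(X, y)
    return grid.best_params_["n_neighbors"], grid.best_estimator_
```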
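The two training losses quoted in the Experiment Setup row are direct to state in code. In this NumPy sketch `scores[i, j]` plays the role of f_j(x_i); the variable names are ours:

```python
import numpy as np

def l1_empirical_loss(y, y_hat):
    """Regression objective used in place of L2: sum_i |y_i - y_hat_i|."""
    return np.sum(np.abs(y - y_hat))

def multiclass_hinge_loss(scores, y):
    """sum_i sum_{j != y_i} max(scores[i, j] - scores[i, y_i] + 1, 0).

    scores: (n, m) array with scores[i, j] = f_j(x_i) for m classes.
    y: integer labels in 0..m-1.
    """
    n = scores.shape[0]
    margins = scores - scores[np.arange(n), y][:, None] + 1.0
    margins[np.arange(n), y] = 0.0  # drop the j == y_i term
    return np.sum(np.maximum(margins, 0.0))
```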
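The regularization-weight search quoted above, over the grid {2^j · D̂_n(DC1) : j ∈ [−8 : 1]}, amounts to cross-validating λ on a geometric grid around a data-driven anchor. A sketch with hypothetical `fit`/`predict` callables standing in for the paper's DC-fitting solver:

```python
import numpy as np
from sklearn.model_selection import KFold

def select_lambda(fit, predict, X, y, lam_hat, n_splits=5):
    """Cross-validate lambda over {2**j * lam_hat : j in [-8, ..., 1]}.

    fit(X, y, lam) -> model and predict(model, X) -> y_hat are
    hypothetical stand-ins for the paper's DC-fitting solver.
    """
    grid = [2.0 ** j * lam_hat for j in range(-8, 2)]  # j in [-8 : 1]
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    cv_err = []
    for lam in grid:
        fold_err = [
            np.mean(np.abs(y[te] - predict(fit(X[tr], y[tr], lam), X[te])))  # L1
            for tr, te in cv.split(X)
        ]
        cv_err.append(np.mean(fold_err))
    return grid[int(np.argmin(cv_err))]
```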