Maximum Margin Interval Trees

Authors: Alexandre Drouin, Toby Hocking, Francois Laviolette

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
LLM Response: "In Section 5 we show that our algorithm achieves state-of-the-art prediction accuracy in several real and simulated data sets. In Section 5.1, we show that, when tested on a variety of real-world data sets, the algorithm achieved a time complexity of O(n log n) in this case also." Supporting section titles: 5.2 "MMIT recovers a good approximation in simulations with nonlinear patterns"; 5.3 "Empirical evaluation of prediction accuracy".
Researcher Affiliation: Academia
LLM Response: Alexandre Drouin, Département d'informatique et de génie logiciel, Université Laval, Québec, Canada, alexandre.drouin.8@ulaval.ca; Toby Dylan Hocking, McGill Genome Center, McGill University, Montréal, Canada, toby.hocking@r-project.org; François Laviolette, Département d'informatique et de génie logiciel, Université Laval, Québec, Canada, francois.laviolette@ift.ulaval.ca
Pseudocode: Yes
LLM Response: "The proof of this statement is available in the supplementary material, along with a detailed pseudocode and implementation details."
Open Source Code: Yes
LLM Response: "An implementation is available at https://git.io/mmit." Implementation: https://git.io/mmit
Open Datasets: Yes
LLM Response: "We ran our algorithm (MMIT) with both squared and linear hinge loss solvers on a variety of real-world data sets of varying sizes (Rigaill et al., 2013; Lichman, 2013)." Under "Results in UCI data sets": "The next two data sets are regression problems taken from the UCI repository (Lichman, 2013)." Data: https://git.io/mmit-data
Dataset Splits: Yes
LLM Response: "Evaluation protocol: To evaluate the accuracy of the algorithms, we performed 5-fold cross-validation and computed the mean squared error (MSE) with respect to the intervals in each of the five testing sets (Figure 5). At each step of the cross-validation, another cross-validation (nested within the former) was used to select the hyperparameters of each algorithm based on the training data."
Hardware Specification: No
LLM Response: The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running the experiments.

Software Dependencies: No
LLM Response: The paper states: "The versions of the software used in this work are also provided in the supplementary material." However, specific version numbers for key software components are not given within the main body of the paper.

Experiment Setup: No
LLM Response: The paper states: "The hyperparameters selected for MMIT are available in the supplementary material." However, specific hyperparameter values and other training configurations are not given in the main text.
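
The squared and linear hinge losses mentioned in the Open Datasets row penalize predictions that fall outside an interval-valued target. A minimal sketch of such an interval hinge loss, assuming a symmetric margin convention; the function name `interval_hinge_loss` and the `margin`/`power` parameters are illustrative, and the exact formulation used by MMIT may differ:

```python
import numpy as np

def interval_hinge_loss(mu, y_low, y_up, margin=0.0, power=1):
    """Hinge loss for an interval regression target [y_low, y_up].

    power=1 corresponds to the linear hinge, power=2 to the squared hinge.
    This is a sketch under an assumed margin convention, not MMIT's exact loss.
    """
    below = np.maximum(0.0, y_low - mu + margin)  # penalty for predicting under the interval
    above = np.maximum(0.0, mu - y_up + margin)   # penalty for predicting over the interval
    return below ** power + above ** power

# A prediction inside the interval (with zero margin) incurs no loss.
print(interval_hinge_loss(2.0, 1.0, 3.0))           # 0.0
print(interval_hinge_loss(0.5, 1.0, 3.0))           # 0.5 (linear hinge)
print(interval_hinge_loss(0.5, 1.0, 3.0, power=2))  # 0.25 (squared hinge)
```

The squared hinge penalizes large violations more heavily than the linear hinge, which is why the two solvers can behave differently on noisy intervals.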
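
The evaluation protocol quoted in the Dataset Splits row (5-fold cross-validation with an inner, nested cross-validation selecting hyperparameters from training data only) can be sketched with scikit-learn. A generic decision tree regressor and parameter grid stand in for MMIT and its hyperparameters here; the synthetic data is likewise only illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Stand-in data and model: MMIT itself is not used in this sketch.
X, y = make_regression(n_samples=100, n_features=5, random_state=0)

inner = KFold(n_splits=5, shuffle=True, random_state=0)
outer = KFold(n_splits=5, shuffle=True, random_state=1)

# Inner CV selects hyperparameters using training folds only.
model = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={"max_depth": [2, 4, 8]},
    cv=inner,
    scoring="neg_mean_squared_error",
)

# Outer CV measures MSE on the five held-out testing folds.
scores = cross_val_score(model, X, y, cv=outer, scoring="neg_mean_squared_error")
print(-scores.mean())  # mean test MSE across the 5 outer folds
```

Nesting the hyperparameter search inside each outer training fold, as the paper describes, prevents test-set information from leaking into model selection.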