Comparison-Based Random Forests
Authors: Siavash Haghiri, Damien Garreau, Ulrike von Luxburg
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In a set of comprehensive experiments, we then demonstrate that the proposed random forest is efficient both for classification and regression. |
| Researcher Affiliation | Academia | Department of Computer Science, University of Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany. |
| Pseudocode | Yes | Algorithm 1 CompTree(S, n0): Supervised comparison tree construction (see the sketch below the table). |
| Open Source Code | No | The paper links to a third-party tool (TSTE) that it uses, but gives no explicit statement or link for the authors' own comparison-based random forest code. |
| Open Datasets | Yes | MNIST (LeCun et al., 1998) and Gisette are handwritten digit datasets. Isolet and UCIHAR are speech recognition and human activity recognition datasets, respectively (Lichman, 2013). |
| Dataset Splits | Yes | We perform 10-fold cross-validation over n0 ∈ {1, 4, 16, 64} and M ∈ {1, 4, 16, 64, 256}. Since the regression datasets have no separate training and test set, we assign 90% of the items to the training and the remaining 10% to the test set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify the versions of any software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | We perform 10-fold cross-validation over n0 ∈ {1, 4, 16, 64} and M ∈ {1, 4, 16, 64, 256}. (A sketch of this protocol follows below.) |
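
The Pseudocode row above refers to Algorithm 1, CompTree(S, n0). Below is a minimal Python sketch of such a supervised comparison tree, under the assumptions that the two pivots at each node are drawn from different classes and that items are routed to a child by which pivot they are closer to, recursing until a node holds at most n0 items. The `triplet` oracle is simulated here with Euclidean distance; all names are illustrative, and this is not the authors' implementation.

```python
import numpy as np

def triplet(X, x, a, b):
    """Simulated comparison oracle: is item x closer to pivot a than to b?
    In the comparison-based setting only this bit is observed, never X."""
    return np.linalg.norm(X[x] - X[a]) < np.linalg.norm(X[x] - X[b])

def comp_tree(X, y, idx, n0, rng):
    """Recursively partition the items in idx using triplet comparisons,
    stopping once a node holds at most n0 items (hypothetical sketch)."""
    if len(idx) <= n0 or len(set(y[idx])) == 1:
        return {"leaf": True, "labels": y[idx]}
    # Assumed supervised pivot rule: one pivot from each of two classes,
    # so the induced split tends to separate those classes.
    c1, c2 = rng.choice(np.unique(y[idx]), size=2, replace=False)
    a = rng.choice(idx[y[idx] == c1])
    b = rng.choice(idx[y[idx] == c2])
    mask = np.array([triplet(X, i, a, b) for i in idx])
    if mask.all() or not mask.any():  # degenerate split: make a leaf
        return {"leaf": True, "labels": y[idx]}
    return {"leaf": False, "pivots": (a, b),
            "left": comp_tree(X, y, idx[mask], n0, rng),
            "right": comp_tree(X, y, idx[~mask], n0, rng)}

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
tree = comp_tree(X, y, np.arange(200), n0=4, rng=rng)
```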
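
The Dataset Splits and Experiment Setup rows describe 10-fold cross-validation over the grid n0 ∈ {1, 4, 16, 64}, M ∈ {1, 4, 16, 64, 256}, with a 90/10 split for datasets lacking a fixed test set. The sketch below reproduces that protocol with scikit-learn's standard RandomForestClassifier as a stand-in estimator (the comparison-based forest itself is not released, per the Open Source Code row); mapping n0 to min_samples_leaf and M to n_estimators is an assumption of this sketch.

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)  # toy stand-in data

# 90/10 split, mirroring the paper's handling of datasets
# that ship without a separate test set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=0)

# Hyperparameter grid from the paper: leaf size n0 and forest size M.
grid = list(product([1, 4, 16, 64], [1, 4, 16, 64, 256]))

def cv_score(n0, M):
    # Assumed mapping: n0 -> min_samples_leaf, M -> n_estimators.
    clf = RandomForestClassifier(
        min_samples_leaf=n0, n_estimators=M, random_state=0)
    return cross_val_score(clf, X_tr, y_tr, cv=10).mean()

best_n0, best_M = max(grid, key=lambda p: cv_score(*p))
final = RandomForestClassifier(
    min_samples_leaf=best_n0, n_estimators=best_M, random_state=0)
test_acc = final.fit(X_tr, y_tr).score(X_te, y_te)
print(f"selected n0={best_n0}, M={best_M}, test accuracy={test_acc:.3f}")
```

The same selection loop works for any estimator exposing fit/score, so swapping in a comparison-based forest would not change the protocol, only the estimator.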