HDI-Forest: Highest Density Interval Regression Forest
Authors: Lin Zhu, Jiaxing Lu, Yihong Chen
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets show that HDI-Forest significantly outperforms previous approaches, reducing the average PI width by over 20% while achieving the same or better coverage probability. |
| Researcher Affiliation | Industry | Lin Zhu, Jiaxing Lu and Yihong Chen, Ctrip Travel Network Technology Co., Limited. {zhulb, lujx, yihongchen}@Ctrip.com |
| Pseudocode | Yes | Algorithm 1: Solve (23) for all 1 ≤ i ≤ n |
| Open Source Code | No | The paper does not provide an explicit statement or a link to the open-source code for HDI-Forest. |
| Open Datasets | Yes | We compare various methods on 11 datasets from the UCI repository (http://archive.ics.uci.edu/ml/index.php). Statistics of these datasets are presented in Table 1. |
| Dataset Splits | Yes | Each dataset is split into train and test sets according to an 80%-20% scheme, and we report the average performance over 10 random data splits. The hyper-parameters of all tested methods were tuned via 5-fold cross-validation on the training set. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'Scikit-learn package' for QRGBDT, links to R packages for QRF and QR, and gives a GitHub link for QD-Ens, but it does not state version numbers for any of these packages or for the software used in the authors' own implementation and experimental setup. |
| Experiment Setup | No | The paper states that 'The hyper-parameters of all tested methods were tuned via 5-fold cross-validation on the training set,' but it does not provide the specific hyperparameter values used for HDI-Forest or the baseline methods in the main text. |
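The evaluation protocol quoted above (80%-20% splits over 10 random seeds, with prediction intervals judged by coverage probability and average width) can be sketched as follows. This is a minimal illustration, not the paper's code: the function names, toy intervals, and the use of the standard PICP / MPIW metric definitions are assumptions, chosen because these are the usual criteria for prediction-interval methods like HDI-Forest.

```python
import random

def split_indices(n, train_frac=0.8, seed=0):
    """Shuffle indices 0..n-1 and split them 80%/20% into train/test lists,
    as in the paper's per-seed evaluation scheme (sketch, not the original code)."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    cut = int(round(train_frac * n))
    return idx[:cut], idx[cut:]

def pi_metrics(y, lower, upper):
    """Empirical PI coverage probability (PICP) and mean PI width (MPIW):
    the two quantities the paper trades off (same coverage, >20% narrower)."""
    covered = sum(1 for v, lo, hi in zip(y, lower, upper) if lo <= v <= hi)
    picp = covered / len(y)
    mpiw = sum(hi - lo for lo, hi in zip(lower, upper)) / len(y)
    return picp, mpiw

# Toy check on hypothetical intervals (not the paper's data).
n = 1000
train, test = split_indices(n, seed=0)   # 800 train / 200 test indices
y     = [0.0, 1.0, 2.0, 3.0]
lower = [-0.5, 0.5, 2.5, 2.0]            # third interval misses its target
upper = [ 0.5, 1.5, 3.5, 4.0]
picp, mpiw = pi_metrics(y, lower, upper) # picp = 0.75, mpiw = 1.25
```

Averaging `pi_metrics` over the 10 random splits (and tuning hyper-parameters by 5-fold CV inside each training set) reproduces the shape of the reported protocol.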