Variable Importance Using Decision Trees

Authors: Jalil Kazemitabar, Arash Amini, Adam Bloniarz, Ameet S. Talwalkar

NeurIPS 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We further demonstrate the effectiveness of these impurity-based methods via an extensive set of simulations." |
| Researcher Affiliation | Collaboration | S. Jalil Kazemitabar (UCLA, sjalilk@ucla.edu); Arash A. Amini (UCLA, aaamini@ucla.edu); Adam Bloniarz (UC Berkeley, adam@stat.berkeley.edu; now at Google); Ameet Talwalkar (CMU, talwalkar@cmu.edu) |
| Pseudocode | Yes | Algorithm 1 (DSTUMP) |
| Open Source Code | No | The paper does not provide links to open-source code or explicitly state that code is available. |
| Open Datasets | No | The paper states: "We generate the training data as X = X̃M, where X̃ ∈ ℝ^{n×p} is a random matrix with IID Unif(−1, 1) entries." The data are generated, not sourced from a public dataset with an access link or citation. |
| Dataset Splits | No | The paper describes generating its own data but does not specify training, validation, or test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models or machine types used to run the experiments. |
| Software Dependencies | No | The paper does not list software names with version numbers needed to replicate the experiments. |
| Experiment Setup | Yes | "We fix p = 200, σ = 0.1, and let βᵢ = 1/√s over its support i ∈ S, where \|S\| = s." |
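The quoted setup (X = X̃M with Unif(−1, 1) entries, βᵢ = 1/√s on a support of size s, p = 200, σ = 0.1) and a root-split importance score in the spirit of DSTUMP can be sketched as below. This is a hedged illustration, not the authors' exact Algorithm 1: the sample size n, the choice of M as the identity (uncorrelated design), the support location, and the median split point are all assumptions not fully specified in this summary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation parameters from the summary; n and s are illustrative choices.
n, p, s, sigma = 500, 200, 5, 0.1

# Design: X = X_tilde @ M with IID Unif(-1, 1) entries.
# Assumption: M = I (uncorrelated features); the paper allows general M.
X_tilde = rng.uniform(-1.0, 1.0, size=(n, p))
M = np.eye(p)
X = X_tilde @ M

# Sparse linear signal: beta_i = 1/sqrt(s) on the support.
# Assumption: the support is the first s coordinates.
beta = np.zeros(p)
beta[:s] = 1.0 / np.sqrt(s)
y = X @ beta + sigma * rng.standard_normal(n)

def dstump_scores(X, y):
    """Score each feature by the variance (impurity) reduction of a
    single root split at the feature's median -- a sketch of the
    DSTUMP idea, not the paper's exact algorithm."""
    n = len(y)
    parent_impurity = y.var() * n  # total sum of squares at the root
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])          # sort samples by feature j
        left, right = y[order[: n // 2]], y[order[n // 2:]]
        child = left.var() * len(left) + right.var() * len(right)
        scores[j] = parent_impurity - child  # impurity decrease
    return scores

scores = dstump_scores(X, y)
top = np.argsort(scores)[::-1][:s]  # s highest-scoring features
```

With an uncorrelated design and this noise level, the support features should score well above the null features, so `top` is expected to recover the support.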