VaRT: Variational Regression Trees

Authors: Sebastian Salazar

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the model's performance on 18 datasets and demonstrate its competitiveness with other state-of-the-art methods in regression tasks. |
| Researcher Affiliation | Academia | Sebastian Salazar Escobedo, Department of Computer Science, Columbia University, New York, NY 10027, sebastian.salazar@cs.columbia.edu |
| Pseudocode | No | The paper describes processes and derivations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code with the random seeds needed to replicate the results of this section is provided with the supplementary material. |
| Open Datasets | Yes | We conducted experiments on 18 distinct datasets from the UCI Machine Learning Repository to benchmark our algorithm. |
| Dataset Splits | No | The RMSE values correspond to the average RMSE on the test sets of a random 90/10 train-test split of the data over ten runs. While early stopping based on the RMSE of a training or validation set is mentioned as a possibility, the reported experiments explicitly use a train-test split without detailing a validation split. (A minimal sketch of this protocol follows the table.) |
| Hardware Specification | Yes | We conducted all experiments on an ASUS Zephyrus G14 laptop with an RTX 2060 Max-Q GPU (6 GB VRAM), a 4900HS AMD CPU, and 40 GB of RAM (although RAM usage was kept below 6 GB). |
| Software Dependencies | No | VaRT was trained using gradient descent paired with a ClippedAdam optimizer in PyTorch (Paszke et al. [2019]; Bingham et al. [2018]). No specific version numbers for PyTorch or the other software dependencies are provided. |
| Experiment Setup | Yes | A single VaRT tree was trained for each dataset at depths 3, 5, 7, and 10... The regularization parameter for all runs was set to 10⁻³, and no hyperparameter tuning was performed, to make a fair comparison of the off-the-shelf performance of each algorithm. (A hedged sketch of this training configuration follows the table.) |
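
For concreteness, the evaluation protocol described in the Dataset Splits row can be restated in code. The sketch below is a minimal reconstruction, not the authors' script: it averages test-set RMSE over ten random 90/10 splits with fixed seeds. The `average_test_rmse` helper and the scikit-learn-style `model_factory` interface are hypothetical stand-ins for the code shipped with the supplementary material.

```python
# Minimal sketch of the reported protocol: average test RMSE over ten
# random 90/10 train-test splits with fixed seeds. The model_factory
# argument (a callable returning a fresh fit/predict regressor) is a
# hypothetical stand-in for the authors' VaRT implementation.
import numpy as np
from sklearn.model_selection import train_test_split

def average_test_rmse(model_factory, X, y, n_runs=10, test_size=0.1):
    rmses = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed  # fixed seed per run
        )
        model = model_factory()
        model.fit(X_tr, y_tr)
        preds = model.predict(X_te)
        rmses.append(np.sqrt(np.mean((preds - y_te) ** 2)))
    return float(np.mean(rmses))
```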
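
The Software Dependencies row identifies the optimizer as ClippedAdam in PyTorch, with the Bingham et al. [2018] citation pointing to Pyro. The following is a hedged sketch of what such a variational training loop looks like in Pyro, not the paper's method: the toy model/guide, learning rate, clip norm, and step count are all assumptions standing in for the actual VaRT model provided with the supplementary material.

```python
# Hedged sketch of a Pyro SVI loop with ClippedAdam. Only the optimizer
# choice comes from the paper; everything else here is an assumption.
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import ClippedAdam

# Toy Bayesian linear-regression model/guide standing in for the VaRT
# model itself (the authors' code ships with the supplementary material).
def model(x, y):
    w = pyro.sample("w", dist.Normal(0.0, 1.0))
    with pyro.plate("data", len(x)):
        pyro.sample("obs", dist.Normal(w * x, 1.0), obs=y)

def guide(x, y):
    loc = pyro.param("w_loc", torch.tensor(0.0))
    scale = pyro.param("w_scale", torch.tensor(1.0),
                       constraint=dist.constraints.positive)
    pyro.sample("w", dist.Normal(loc, scale))

# ClippedAdam is Pyro's gradient-clipping variant of Adam; the lr and
# clip_norm values here are assumed, not settings from the paper.
optim = ClippedAdam({"lr": 1e-2, "clip_norm": 10.0})
svi = SVI(model, guide, optim, loss=Trace_ELBO())

x = torch.randn(100)
y = 2.0 * x + 0.1 * torch.randn(100)
for step in range(500):  # step count is arbitrary for this sketch
    svi.step(x, y)
```

In the actual experiments, the paper's reported settings (tree depths 3, 5, 7, and 10, and a regularization parameter of 10⁻³) would enter through the VaRT model itself rather than this toy stand-in.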