A Convergence Rate Analysis for LogitBoost, MART and Their Variant

Authors: Peng Sun, Tong Zhang, Jie Zhou

ICML 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on UCI datasets support our analysis. In Section 5 we present empirical results on UCI datasets to support our claims. For completeness, we perform a similar experiment here, focusing on binary classification and adding GBoost in our comparisons. In the supplement, we plot the convergence for GBoost, MART and LogitBoost on datasets #1 to #5. Figure 3 shows the typical convergence pattern.
Researcher Affiliation | Collaboration | (1) Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Automation, Tsinghua University, Beijing 100084, China; (2) Baidu Inc., Beijing, China, and Department of Statistics, Rutgers University, NJ, USA
Pseudocode | Yes | The corresponding pseudo-code is given in Algorithm 1. Algorithm 1 (LogitBoost, MART and Variant). Input: {x_i, y_i}_{i=1}^N, the training set; ν, the shrinkage factor; T, the maximum number of iterations. Output: the additive tree model F = F(x). (See the first sketch after the table.)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or links to a code repository.
Open Datasets | Yes | In this section we present empirical results on five binary datasets: #1: optdigits05, #2: pendigits49, #3: zipcode38, #4: letter01, #5: mnist10k05. These are synthesized from the corresponding UCI datasets; e.g., optdigits05 means we pick class 0 and class 5 from the multiclass dataset optdigits. (See the second sketch after the table.)
Dataset Splits | No | The paper mentions "training data" and discusses the loss decrease on the training data, but it does not provide specific details on training, validation, and test splits (e.g., percentages, sample counts, or a cross-validation setup).
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or cloud resources).
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or specific library versions).
Experiment Setup | Yes | In all of the following experiments we set ν = 0.1, J = 8 and the clamping ρ = 0.05. (See the third sketch after the table.)
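
To make the quantities named in the Pseudocode row concrete, below is a minimal Python sketch of the boosting loop that Algorithm 1 summarizes. It is an illustration under stated assumptions, not the authors' code: scikit-learn regression trees serve as the weak learners and the leaf values are fitted by a plain gradient step with shrinkage (closest to the MART branch), whereas the paper's Algorithm 1 also covers the LogitBoost-style Newton update and the proposed variant. The function names boost and predict are ours.

```python
# Minimal MART-style sketch of the loop summarized in Algorithm 1 (illustrative only).
# Assumptions: y takes values in {-1, +1}; scikit-learn regression trees act as the
# weak learners; leaf values come from a plain gradient fit rather than the Newton
# step used in the LogitBoost branch of the paper's algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, nu=0.1, T=100, J=8):
    """Return a list of (tree, shrinkage) pairs defining the additive model F(x)."""
    y01 = (y + 1) / 2                          # map {-1, +1} labels to {0, 1}
    F = np.zeros(len(y))                       # current scores F(x_i)
    trees = []
    for _ in range(T):
        p = 1.0 / (1.0 + np.exp(-F))           # P(y = 1 | x) under the logistic model
        residual = y01 - p                     # negative gradient of the logistic loss
        tree = DecisionTreeRegressor(max_leaf_nodes=J)
        tree.fit(X, residual)                  # fit a J-leaf regression tree to the residual
        F += nu * tree.predict(X)              # shrinkage factor nu
        trees.append((tree, nu))
    return trees

def predict(trees, X):
    """Evaluate F(x) and return the sign as the predicted label."""
    F = np.zeros(X.shape[0])
    for tree, nu in trees:
        F += nu * tree.predict(X)
    return np.where(F >= 0, 1, -1)
```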
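
The Open Datasets row describes how each binary problem is synthesized by picking two classes from a multiclass UCI dataset. A hypothetical sketch of that step follows; it assumes the multiclass features X and integer labels y are already loaded as NumPy arrays, since the paper does not say which tooling was used. The helper name make_binary is ours.

```python
# Hypothetical sketch of synthesizing a binary dataset such as optdigits05:
# keep only class 0 and class 5 from the multiclass optdigits data and relabel
# them as a +/-1 problem. X (features) and y (integer labels) are assumed to be
# NumPy arrays already loaded from the UCI dataset.
import numpy as np

def make_binary(X, y, neg_class, pos_class):
    mask = (y == neg_class) | (y == pos_class)   # keep only the two chosen classes
    X_bin = X[mask]
    y_bin = np.where(y[mask] == pos_class, 1, -1)
    return X_bin, y_bin

# e.g. optdigits05: classes 0 and 5 of optdigits
# X05, y05 = make_binary(X, y, neg_class=0, pos_class=5)
```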
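
For the Experiment Setup row, the reported settings map naturally onto a standard gradient-boosting implementation. The sketch below uses scikit-learn's GradientBoostingClassifier as a stand-in for MART, which is our assumption, with ν = 0.1 as the learning rate and J = 8 terminal nodes per tree. The clamping ρ = 0.05 acts on the probability estimates inside the paper's LogitBoost-style update and has no direct scikit-learn counterpart, so it is only noted in a comment; the number of iterations used below is arbitrary.

```python
# Hedged mapping of the reported settings (nu = 0.1, J = 8, rho = 0.05) onto a
# standard MART-style implementation; this is not the authors' code.
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(
    learning_rate=0.1,   # shrinkage factor nu = 0.1
    max_leaf_nodes=8,    # J = 8 terminal nodes per tree
    n_estimators=100,    # T, the number of boosting iterations (value assumed)
)
# The clamping rho = 0.05 bounds the probability estimates inside the
# LogitBoost-style update and has no direct parameter here.
# model.fit(X05, y05)
```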