SGLB: Stochastic Gradient Langevin Boosting
Authors: Aleksei Ustimenko, Liudmila Prokhorenkova
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We also empirically show that SGLB outperforms classic gradient boosting when applied to classification tasks with 0-1 loss function, which is known to be multimodal." and "Our experiments on synthetic and real datasets show that SGLB outperforms standard SGB and can optimize globally such non-convex losses as 0-1 loss..." |
| Researcher Affiliation | Collaboration | ¹Yandex, Moscow, Russia; ²Moscow Institute of Physics and Technology, Moscow, Russia; ³HSE University, Moscow, Russia. |
| Pseudocode | Yes | Algorithm 1 (SGB) and Algorithm 2 (SGLB) |
| Open Source Code | Yes | "The proposed algorithm is implemented within the CatBoost open-source gradient boosting library (option langevin=True) (CatBoost, 2020)." and "Our implementation of SGLB is available within the open-source CatBoost gradient boosting library." |
| Open Datasets | Yes | The datasets are described in Table 1 of the supplementary materials. |
| Dataset Splits | Yes | We split each dataset into train, validation, and test sets in proportion 65/15/20. |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments. |
| Software Dependencies | No | "Our implementation of SGLB is available within the open-source CatBoost gradient boosting library." (No version is specified for CatBoost or other dependencies.) |
| Experiment Setup | Yes | "We set learning rate to 0.1 and ς = 10⁻¹ for SLA. For SGLB, we set β = 10³ and γ = 10⁻³. Moreover, we set the subsampling rate of SGB to 0.5." and "For all algorithms, the maximal number of trees is set to 1000." (See the configuration sketch after this table.) |
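
The rows on open-source code, dataset splits, and experiment setup together describe a reproducible configuration. Below is a minimal sketch of that setup using the CatBoost Python API, assuming synthetic placeholder data and hypothetical variable names; only the langevin=True option, the 65/15/20 split, the learning rate of 0.1, the 0.5 subsampling rate, and the 1000-tree limit come from the quotes above, while the choice of bootstrap type and how β and γ map onto CatBoost options are assumptions, not the authors' exact configuration.

```python
# Sketch of the reported SGLB experiment setup with CatBoost.
# Placeholder synthetic data stands in for the paper's datasets.
import numpy as np
from sklearn.model_selection import train_test_split
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + rng.normal(scale=0.5, size=10_000) > 0).astype(int)

# 65/15/20 train/validation/test split, as reported in the paper.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.35, random_state=0
)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=20 / 35, random_state=0
)

# langevin=True is the option named in the paper for enabling SGLB;
# the remaining parameters mirror the reported setup (assumed mapping).
model = CatBoostClassifier(
    iterations=1000,             # maximal number of trees
    learning_rate=0.1,
    bootstrap_type="Bernoulli",  # assumption: makes subsample applicable
    subsample=0.5,               # subsampling rate used for SGB
    langevin=True,               # SGLB option from the paper
    verbose=False,
)
model.fit(X_train, y_train, eval_set=(X_val, y_val))
print("test accuracy:", model.score(X_test, y_test))
```

Since the paper does not report software versions, this sketch should be read as an illustration of the described options rather than an exact reproduction script.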