Narrow Margins: Classification, Margins and Fat Tails
Author: Francois Buet-Golfouse
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The key results of our work are the (partial) answer to the open question and conjecture in (Rosset et al., 2003), on the one hand, and the link between the non-convergence to a margin-maximising classifier and regular variation (cf. (Bingham et al., 1987)) of the loss function, on the other hand. While margin maximisation is quite specific to the linear setting, deriving analytical properties of loss functions that are also used in other settings, such as deep learning, is particularly interesting for understanding choices of loss functions and their implications. |
| Researcher Affiliation | Academia | Department of Mathematics, University College London, London, United Kingdom. Correspondence to: Francois Buet-Golfouse <ucahfbu@ucl.ac.uk>. |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper was provided. |
| Open Datasets | No | This paper is theoretical research and does not involve empirical studies with datasets for training. |
| Dataset Splits | No | This paper is theoretical research and does not provide dataset split information for training, validation, or testing. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments were found. |
| Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate experiments were found. |
| Experiment Setup | No | This paper is theoretical research and does not contain specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings). |