Lexicographic and Depth-Sensitive Margins in Homogeneous and Non-Homogeneous Deep Models
Authors: Mor Shpigel Nacson, Suriya Gunasekar, Jason Lee, Nathan Srebro, Daniel Soudry
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work we similarly investigate the connection between margin maximization and the limits of: (1) the optimization path of unconstrained, unregularized gradient descent; (2) the constrained path, where we optimize with a diverging (increasingly loose) constraint on the norm of the parameters; and (3) the closely related regularization path of solutions with decreasing penalties on the norm. We examine overparameterized realizable problems (i.e., where it is possible to perfectly classify the training data), when training using monotone decreasing classification loss functions. (The three paths are formalized in the sketch following this table.) |
| Researcher Affiliation | Academia | Technion, Israel; TTI Chicago, USA; USC, Los Angeles, USA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper focuses on theoretical analysis and does not use or provide access information for any specific publicly available training datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe computational experiments, therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any computational implementation details or software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup, such as hyperparameters or system-level training settings. |
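
For readers who want the objects behind the Research Type quote, the following is a minimal LaTeX sketch of the three paths as they are conventionally formalized in this line of work. The notation here (the loss $\mathcal{L}$, model $f(w; x)$, step size $\eta$, and the squared-norm penalty) is an assumption of ours for illustration, not quoted from the paper.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Training loss over a dataset \{(x_n, y_n)\}_{n=1}^N with model f(w; x)
% and a monotone decreasing classification loss \ell (e.g., exponential):
\[
  \mathcal{L}(w) = \sum_{n=1}^{N} \ell\bigl(y_n f(w; x_n)\bigr)
\]

% (1) Optimization path: iterates of unconstrained, unregularized
% gradient descent on \mathcal{L}, with step size \eta:
\[
  w(t+1) = w(t) - \eta \, \nabla \mathcal{L}\bigl(w(t)\bigr)
\]

% (2) Constrained path: minimizers under a norm constraint that is
% made increasingly loose (B -> infinity):
\[
  w^{*}(B) \in \operatorname*{arg\,min}_{\|w\| \le B} \mathcal{L}(w)
\]

% (3) Regularization path: minimizers with a norm penalty whose
% weight is taken to zero (lambda -> 0):
\[
  w^{*}(\lambda) \in \operatorname*{arg\,min}_{w} \; \mathcal{L}(w) + \lambda \|w\|^{2}
\]

% Central question: does each path, after normalization, converge in
% direction to a maximum-margin predictor?
\[
  \frac{w}{\|w\|} \;\longrightarrow\; \operatorname*{arg\,max}_{\|u\| \le 1} \; \min_{n} \, y_n f(u; x_n)
\]

\end{document}
```

The max-margin limit in the final display is the form that applies to homogeneous models; as the paper's title indicates, the non-homogeneous case is handled there via depth-sensitive and lexicographic margin variants rather than this single objective.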