The Effect of the Intrinsic Dimension on the Generalization of Quadratic Classifiers

Authors: Fabian Latorre, Leello Tadesse Dadi, Paul Rolland, Volkan Cevher

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments. We illustrate our theoretical results on synthetic data. We show how the isotropy of the input distribution plays a major role in the generalization properties of quadratic classifiers. As the dimension increases and the sample size remains proportional to it, we observe a constant generalization gap for the nuclear-norm constrained classifier. In contrast, for SVMs, the gap grows at the predicted √d rate. In the case of anisotropic distributions, we observe similar performance for both regularization schemes.
Researcher Affiliation | Academia | Fabian Latorre, Leello Dadi, Paul Rolland and Volkan Cevher, Laboratory for Information and Inference Systems, EPFL, Switzerland.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
Open Datasets | No | The paper describes generating synthetic data ('Generating Isotropic Data', 'Generating Anisotropic Data') rather than using an existing public dataset with concrete access information.
Dataset Splits | No | The paper mentions 'n_train samples' and 'n_test samples' but does not specify exact split percentages, absolute sample counts, or a reference to predefined splits for reproducibility.
Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]
Software Dependencies | No | The paper mentions software like JAX and scikit-learn but does not provide specific version numbers for reproducibility.
Experiment Setup | Yes | We set the radius λ = 1 for both nuclear- and Frobenius-norm constrained classifiers.
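The setup described in the table — synthetic isotropic data, a quadratic classifier f(x) = xᵀWx, and a norm constraint of radius λ = 1 (nuclear vs. Frobenius) — can be sketched as follows. This is a minimal illustration, not the authors' released code: the data generator, the rank-1 teacher matrix, the projected-subgradient training loop, and all hyperparameters (n, d, step size, iteration count) are assumptions made for this sketch.

```python
import numpy as np

def generate_data(n, d, anisotropic=False, seed=0):
    """Illustrative synthetic data generator (not the paper's exact procedure)."""
    rng = np.random.default_rng(seed)
    # isotropic: unit variances; anisotropic: decaying variances (assumed form)
    scales = np.linspace(1.0, 0.05, d) if anisotropic else np.ones(d)
    X = rng.normal(size=(n, d)) * np.sqrt(scales)
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)                    # rank-1 "teacher" W* = u u^T
    scores = (X @ u) ** 2 - scales @ u**2     # centered quadratic score
    y = np.where(scores >= 0, 1.0, -1.0)
    return X, y

def project_l1(v, radius):
    """Euclidean projection of a nonnegative vector onto the l1 ball."""
    if v.sum() <= radius:
        return v
    mu = np.sort(v)[::-1]
    cssv = np.cumsum(mu) - radius
    rho = np.nonzero(mu * np.arange(1, len(v) + 1) > cssv)[0][-1]
    theta = cssv[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_nuclear(W, radius=1.0):
    """Project a symmetric matrix onto the nuclear-norm ball via its eigenvalues."""
    vals, vecs = np.linalg.eigh(W)
    if np.abs(vals).sum() <= radius:
        return W
    new_vals = np.sign(vals) * project_l1(np.abs(vals), radius)
    return (vecs * new_vals) @ vecs.T

def project_frobenius(W, radius=1.0):
    """Rescale onto the Frobenius-norm ball (the SVM-like constraint)."""
    norm = np.linalg.norm(W)
    return W if norm <= radius else W * (radius / norm)

def train(X, y, project, steps=300, lr=0.5):
    """Projected subgradient descent on the mean hinge loss of f(x) = x^T W x."""
    n, d = X.shape
    W = np.zeros((d, d))
    for _ in range(steps):
        margins = y * np.einsum('ni,ij,nj->n', X, W, X)
        active = margins < 1.0
        # subgradient of the mean hinge loss w.r.t. W
        grad = -np.einsum('n,ni,nj->ij', y[active], X[active], X[active]) / n
        W = project(W - lr * grad)
    return W

# assumed sample sizes and dimension, for illustration only
X_tr, y_tr = generate_data(400, 20, seed=0)
X_te, y_te = generate_data(4000, 20, seed=1)
W_nuc = train(X_tr, y_tr, project_nuclear)
acc = lambda X, y, W: np.mean(np.sign(np.einsum('ni,ij,nj->n', X, W, X)) == y)
gap = acc(X_tr, y_tr, W_nuc) - acc(X_te, y_te, W_nuc)
```

Swapping `project_nuclear` for `project_frobenius` and sweeping d with n proportional to it would reproduce the qualitative comparison the table describes: the train/test gap for the nuclear-norm constrained classifier staying flat on isotropic data while the Frobenius (SVM-like) gap grows.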