Demystifying the Optimal Performance of Multi-Class Classification

Authors: Minoh Jeong, Martina Cardone, Alex Dytso

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we validate the effectiveness of our theoretical results via experiments both on synthetic data under various noise settings and on real data.
Researcher Affiliation | Collaboration | Minoh Jeong, Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, jeong316@umn.edu; Martina Cardone, Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, mcardone@umn.edu; Alex Dytso, Qualcomm Flarion Technology, Inc., Bridgewater, NJ 08807, odytso2@gmail.com
Pseudocode | No | The paper describes methods using mathematical definitions and theorems, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper cites third-party codebases for models used (e.g., "ViT-PyTorch, 2020. https://github.com/jeonsworld/ViT-pytorch" and "PyTorch_CIFAR10, 2021. https://github.com/huyvnphan/PyTorch_CIFAR10"), but does not provide concrete access to the source code for the methodology described in this paper.
Open Datasets | Yes | We empirically validate our results using various datasets, namely: 1) a synthetic dataset with different noises, including one-hot labels; 2) two benchmark datasets CIFAR-10H [4] and Fashion-MNIST-H [46]; and 3) MovieLens [38], a real-world dataset for movie recommendations.
Dataset Splits | No | The paper mentions using "synthetic data" and the benchmark datasets CIFAR-10H and Fashion-MNIST-H, which are variations of existing datasets, but does not explicitly provide specific training/validation/test dataset splits (e.g., percentages or sample counts) for its experiments.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions 'PyTorch' in relation to third-party implementations used for comparison models, but does not provide specific version numbers for software dependencies of its own methodology (e.g., 'PyTorch 1.9').
Experiment Setup | Yes | MoB_K(ψ_C) uses K = n (with this choice, Theorem 2 ensures the asymptotic normality of MoB_K), the Euclidean distance for d, and r = 1/5. For different n, the parameters of MoB_K(ψ_C) are chosen as K = n, d is the Euclidean distance, and r = 1/5. We iterate the experiment 50 times for each n. We consider a 4-class classification problem with equiprobable classes C ∈ Cµ := {(µ, µ), (−µ, µ), (−µ, −µ), (µ, −µ)}, where µ > 0 is a parameter that controls the classification hardness. We generate the feature X ∈ R² according to a 2-dimensional Gaussian distribution with mean c (i.e., a realization of C ∈ Cµ) and covariance matrix I₂.
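For concreteness, the synthetic setup quoted above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the helper name generate_synthetic_data and the example parameter values are assumptions made only for this sketch.

```python
import numpy as np

def generate_synthetic_data(n, mu, seed=None):
    """Sketch of the 4-class synthetic setup described in the experiment setup:
    equiprobable classes located at the corners (mu, mu), (-mu, mu), (-mu, -mu),
    (mu, -mu), with the feature X drawn from a 2-D Gaussian centered at the
    sampled corner c and covariance matrix I_2."""
    rng = np.random.default_rng(seed)
    corners = np.array([[mu, mu], [-mu, mu], [-mu, -mu], [mu, -mu]])
    labels = rng.integers(0, 4, size=n)                        # equiprobable classes
    features = corners[labels] + rng.standard_normal((n, 2))   # X ~ N(c, I_2)
    return features, labels

# Example (assumed values): larger mu spreads the class means apart,
# making the classification problem easier.
X, y = generate_synthetic_data(n=1000, mu=1.0, seed=0)
```

Sweeping this generator over a grid of n (with 50 repetitions per n, as stated above) and over µ would mirror the reported synthetic-data protocol.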