Neural Collapse for Cross-entropy Class-Imbalanced Learning with Unconstrained ReLU Features Model

Authors: Hien Dang, Tho Tran Huu, Tan Minh Nguyen, Nhat Ho

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate our results through experiments on practical architectures and datasets.
Researcher Affiliation | Collaboration | (1) Department of Statistics and Data Sciences, University of Texas at Austin, USA; (2) FPT Software AI Center, Vietnam; (3) Department of Mathematics, National University of Singapore, Singapore.
Pseudocode | No | The paper describes the methodology in mathematical and textual form, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | We train these models on the imbalanced subsets of 4 datasets: MNIST, Fashion MNIST, CIFAR10, and CIFAR100.
Dataset Splits | No | For this experiment, a subset of the CIFAR10 dataset with {1000, 1000, 2000, 2000, 3000, 3000, 4000, 4000, 5000, 5000} random samples per class is used as training data. The paper specifies training-set sizes but does not describe how the data were split into training, validation, and test sets. (A sketch of this subset construction follows the table.)
Hardware Specification | No | The paper mentions using specific model architectures (MLP, VGG11, ResNet18) but does not provide any details regarding the hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the use of the Adam optimizer and ReLU activation, but it does not list any specific software or library versions (e.g., Python, PyTorch, TensorFlow, CUDA) that would be needed for reproducibility.
Experiment Setup | Yes | We train each backbone model with the Adam optimizer with batch size 256; the weight decay is λ_W = 1e-4. The feature decay λ_H is set to 1e-5 for MLP and VGG11, and to 1e-4 for ResNet18. (A hedged optimizer-setup sketch follows the table.)
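
For readers who want to reproduce the imbalance profile quoted in the Dataset Splits row, below is a minimal sketch (not the authors' released code, which the paper does not link) of how such an imbalanced CIFAR10 training subset could be built with torchvision. The per-class counts follow the list quoted above; the seed and transform are illustrative assumptions.

```python
# Sketch: construct the imbalanced CIFAR10 training subset quoted above.
# Assumptions: torchvision is available; seed and transform are illustrative.
import numpy as np
import torchvision
from torchvision import transforms
from torch.utils.data import Subset

# Per-class sample counts quoted in the Dataset Splits row.
SAMPLES_PER_CLASS = [1000, 1000, 2000, 2000, 3000, 3000, 4000, 4000, 5000, 5000]

def make_imbalanced_cifar10(root="./data", seed=0):
    train_set = torchvision.datasets.CIFAR10(
        root=root, train=True, download=True, transform=transforms.ToTensor()
    )
    rng = np.random.default_rng(seed)
    labels = np.array(train_set.targets)
    keep = []
    for cls, n in enumerate(SAMPLES_PER_CLASS):
        cls_idx = np.flatnonzero(labels == cls)  # all indices of class `cls`
        keep.extend(int(i) for i in rng.choice(cls_idx, size=n, replace=False))
    return Subset(train_set, keep)  # 30,000 training samples in total
```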
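
Likewise, here is a minimal sketch of the training configuration in the Experiment Setup row. It assumes (since the excerpt does not say how the decays are applied) that λ_W acts as Adam weight decay on the final linear classifier and λ_H as an explicit L2 penalty on the penultimate-layer features, in the spirit of unconstrained-features-style training; the learning rate and the toy backbone are placeholders, not values from the paper.

```python
# Sketch: Adam with separate decay for classifier weights (lambda_W) and
# penultimate-layer features (lambda_H), per the Experiment Setup row.
# Assumptions: lr and the toy backbone are placeholders; how lambda_H is
# applied in the authors' code is not specified in the paper excerpt.
import torch
import torch.nn as nn

lambda_W, lambda_H = 1e-4, 1e-5  # values quoted above for MLP / VGG11

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
classifier = nn.Linear(512, 10)

optimizer = torch.optim.Adam(
    [
        {"params": backbone.parameters(), "weight_decay": 0.0},
        {"params": classifier.parameters(), "weight_decay": lambda_W},
    ],
    lr=1e-3,  # placeholder; the learning rate is not stated in the excerpt
)
criterion = nn.CrossEntropyLoss()

def training_step(x, y):
    """One step on a batch (e.g. from a DataLoader with batch_size=256)."""
    features = backbone(x)  # penultimate-layer features H
    logits = classifier(features)
    loss = criterion(logits, y) + lambda_H * features.pow(2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```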