Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Authors: Ganesh Ramachandra Kini, Orestis Paraskevas, Samet Oymak, Christos Thrampoulidis
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Importantly, our experiments on state-of-the-art datasets are fully consistent with our theoretical insights and confirm the superior performance of our algorithms. |
| Researcher Affiliation | Academia | Ganesh Ramachandra Kini, University of California, Santa Barbara (kini@ucsb.edu); Orestis Paraskevas, University of California, Santa Barbara (orestis@ucsb.edu); Samet Oymak, University of California, Riverside (oymak@ece.ucr.edu); Christos Thrampoulidis, University of British Columbia (cthrampo@ece.ubc.ca) |
| Pseudocode | No | The paper does not contain any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | Yes | [1] Code for paper: Label-imbalanced and group-sensitive classification under overparameterization. https://github.com/orparask/VS-Loss. |
| Open Datasets | Yes | Table 1 evaluates LA/CDT/VS-losses on imbalanced instances of CIFAR-10/100... We consider the Waterbirds dataset [45]. (An illustrative long-tailed split construction is sketched after the table.) |
| Dataset Splits | Yes | For consistency with [17, 8, 32, 54] we keep a balanced test set and in addition to evaluating our models on it, we treat it as our validation set and use it to tune our hyperparameters. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set an imbalance ratio Nmax/Nmin = 100... For consistency, we follow the training setting in [8]... We use a grid to pick the best τ / γ / (τ,γ)-pair for the LA / CDT / VS losses... In Fig. 4(c1,c3) we trained for 200 epochs, while in Fig. 4(c2,c4) we trained for 300 epochs... For γ = 0.15... We did not fine-tune γ as the heuristic choice already shows the benefit of Group-VS-loss. (A hedged sketch of the VS-loss follows the table.) |
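The Open Datasets and Dataset Splits rows refer to long-tailed versions of CIFAR-10/100 built with an exponential imbalance profile (Nmax/Nmin = 100) and a balanced test set that doubles as the validation set. Below is a minimal sketch of how such a split is typically constructed; the helper names and the class ordering are illustrative assumptions, not the authors' released code:

```python
import numpy as np
from torchvision import datasets

def exponential_class_counts(n_max=5000, num_classes=10, imbalance_ratio=100):
    """Per-class counts decaying exponentially from n_max down to
    n_max / imbalance_ratio, the standard long-tailed CIFAR profile."""
    return [int(n_max * (1.0 / imbalance_ratio) ** (c / (num_classes - 1)))
            for c in range(num_classes)]

def subsample_indices(labels, counts, seed=0):
    """Keep counts[c] randomly chosen examples of each class c."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for c, n in enumerate(counts):
        idx = np.flatnonzero(labels == c)
        keep.extend(rng.choice(idx, size=n, replace=False))
    return np.array(keep)

train = datasets.CIFAR10(root="./data", train=True, download=True)
counts = exponential_class_counts()                # [5000, ..., 50]
indices = subsample_indices(train.targets, counts)
# The CIFAR test set stays balanced; per the Dataset Splits row, it also
# serves as the validation set for hyperparameter tuning.
```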
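The Experiment Setup row tunes the additive (τ) and multiplicative (γ) logit adjustments of the LA, CDT, and VS losses. The following is a minimal sketch of the VS-loss as defined in the paper, with Δ_y = (N_y/N_max)^γ and ι_y = τ log π_y, omitting the paper's optional per-class weights ω_y for brevity; the class name and default values are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

class VSLoss(torch.nn.Module):
    """Vector-Scaling (VS) loss: cross-entropy on logits rescaled by
    Delta_y = (N_y / N_max)^gamma (multiplicative, CDT-style) and shifted
    by iota_y = tau * log(pi_y) (additive, LA-style).
    gamma = 0 recovers the LA loss; tau = 0 recovers the CDT loss."""

    def __init__(self, class_counts, tau=1.0, gamma=0.15):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        self.register_buffer("delta", (counts / counts.max()) ** gamma)
        self.register_buffer("iota", tau * torch.log(counts / counts.sum()))

    def forward(self, logits, targets):
        return F.cross_entropy(logits * self.delta + self.iota, targets)

# Usage with the exponentially imbalanced counts from the sketch above:
counts = [int(5000 * 0.01 ** (c / 9)) for c in range(10)]
criterion = VSLoss(counts, tau=1.0, gamma=0.15)    # illustrative values
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = criterion(logits, targets)
```

The grid search over τ, γ, and (τ, γ) pairs mentioned in the Experiment Setup row would then sweep these two constructor arguments, selecting the best values on the balanced validation set.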