Class-Attribute Priors: Adapting Optimization to Heterogeneity and Fairness Objective

Authors: Xuechen Zhang, Mingchen Li, Jiasi Chen, Christos Thrampoulidis, Samet Oymak

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present our experiments in the following way. Firstly, we demonstrate the performance of CAP on both loss function design via bilevel optimization and post-hoc logit adjustment in Sec. 4.1. Sec. 4.2 demonstrates that CAP provides noticeable improvements for fairness objectives beyond balanced accuracy. Then Sec. 4.3 discusses the advantage of utilizing attributes and how CAP leverages them in noisy, long-tailed datasets through perturbation experiments. (A hedged sketch of post-hoc logit adjustment follows this table.)
Researcher Affiliation | Academia | Xuechen Zhang¹, Mingchen Li², Jiasi Chen², Christos Thrampoulidis³, Samet Oymak² (¹University of California, Riverside; ²University of Michigan, Ann Arbor; ³University of British Columbia)
Pseudocode | No | The paper describes its methods conceptually and in prose within sections such as '3 Our Approach: Class-attribute Priors (CAP)' and '3.3 Class-specific Learning Strategies: Bilevel Optimization and Post-hoc Optimization', but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | No | The paper does not contain any statements about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | Dataset. In line with previous research (Menon et al. 2020; Ye et al. 2020; Li et al. 2021), we conduct the experiments on CIFAR-LT and ImageNet-LT datasets.
Dataset Splits | Yes | Following the implementation of (Li et al. 2021), we split the training data into 80% training and 20% validation to optimize L_train and L_val.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. It only names the neural network architectures (ResNet-32 and ResNet-50).
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python version, or library versions for PyTorch, TensorFlow, or scikit-learn). It mentions neural network architectures like ResNet-32 and ResNet-50, but not the software environment.
Experiment Setup | Yes | During the search phase for bilevel CAP, we split the training set into 80% training and 20% validation to obtain the optimal loss function design... Additionally, all CIFAR-LT experiments use ResNet-32 (He et al. 2016), and ImageNet-LT experiments use ResNet-50. (A long-tailed subsampling and split sketch also follows this table.)
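For readers gauging reproducibility, the Sec. 4.1 row above names post-hoc logit adjustment without showing the rule. Below is a minimal NumPy sketch of prior-based logit adjustment in the spirit of Menon et al. 2020, which the paper cites; the function name `posthoc_adjusted_predict`, the temperature `tau`, and the toy numbers are illustrative assumptions, and CAP's attribute-derived per-class parameters are not modeled here.

```python
import numpy as np

def posthoc_adjusted_predict(logits, class_priors, tau=1.0):
    """Shift each class logit by -tau * log(prior), then take the argmax.

    logits: (batch, num_classes) raw scores from a trained model.
    class_priors: (num_classes,) empirical class frequencies (summing to 1).
    tau is a temperature knob; its value here is an assumption, not a
    setting reported in the paper.
    """
    adjusted = np.asarray(logits) - tau * np.log(np.asarray(class_priors))
    return adjusted.argmax(axis=-1)

# Toy example: a 3-class long-tailed prior; rare classes get a boost at decision time.
logits = np.array([[2.0, 1.9, 1.5]])
priors = np.array([0.90, 0.08, 0.02])
print(posthoc_adjusted_predict(logits, priors))  # prints [2]: the tail class wins
```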
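Likewise, the dataset rows reference CIFAR-LT and an 80/20 train/validation split without construction details. The sketch below shows one common long-tail recipe: subsample each class by an exponential count profile, then carve out a per-class validation fraction. The imbalance ratio of 100, the exponential profile, and the helper names are assumptions rather than the paper's documented settings (the paper defers the split to Li et al. 2021).

```python
import numpy as np

def longtail_class_counts(num_classes=10, n_max=5000, imbalance=100.0):
    # Exponential decay: class c keeps n_max * imbalance**(-c / (num_classes - 1)) samples,
    # so the head class keeps n_max and the tail class keeps n_max / imbalance.
    return [int(n_max * imbalance ** (-c / (num_classes - 1))) for c in range(num_classes)]

def split_longtail_indices(labels, counts, val_frac=0.2, seed=0):
    """Subsample each class to its long-tail count, then hold out val_frac per class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx, val_idx = [], []
    for c, n in enumerate(counts):
        idx = rng.permutation(np.flatnonzero(labels == c))[:n]
        n_val = int(round(val_frac * n))
        val_idx.extend(idx[:n_val].tolist())
        train_idx.extend(idx[n_val:].tolist())
    return train_idx, val_idx
```

With the defaults above, a balanced 10-class training set with 5000 samples per class yields per-class counts decaying from 5000 down to 50 before the 80/20 split is applied within each class.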