Fairness-aware Contrastive Learning with Partially Annotated Sensitive Attributes
Authors: Fengda Zhang, Kun Kuang, Long Chen, Yuxuan Liu, Chao Wu, Jun Xiao
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results illustrate the effectiveness of our method in terms of fairness and utility, even with very limited sensitive attributes and serious data bias. |
| Researcher Affiliation | Collaboration | Fengda Zhang (1), Kun Kuang (1,2), Long Chen (3), Yuxuan Liu (1), Chao Wu (1), Jun Xiao (1); (1) Zhejiang University, (2) Key Laboratory for Corneal Diseases Research of Zhejiang Province, (3) The Hong Kong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1: Semi-supervised Algorithm for Learning Classifier and Generator (a hedged sketch of such a loop appears after this table). |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We validate our method on the following datasets: 1) CelebA (Liu et al., 2018) is a dataset with over 200k facial images... 2) UTK-Face (Zhang et al., 2017) contains over 20k facial images... 3) Dogs and Cats (dog, 2013). |
| Dataset Splits | No | The paper discusses training and testing sets, but does not explicitly describe a separate validation set split (e.g., percentages or counts for validation data). |
| Hardware Specification | No | The paper mentions running experiments but does not provide specific details on hardware components such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using specific models and architectures like '5-layer CNN' and 'ResNet-18', but does not specify version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used. |
| Experiment Setup | Yes | We resize the images of CelebA and UTK-Face to 128×128, and use a 5-layer CNN (Krizhevsky et al., 2017) as the encoder of the generative model. Besides, the decoder also has 5 layers... We use the ResNet-18 (He et al., 2016) as encoder model and an MLP as projection head, and train them via the weighted fairness-aware contrastive loss for 100 epochs. Afterwards, we train a linear classifier on top of the frozen representation given by encoder F(·) for 10 epochs on the training dataset. (A hedged sketch of this pipeline appears after this table.) |
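Since the paper releases no code, the following is a minimal sketch of one plausible reading of Algorithm 1 (Semi-supervised Algorithm for Learning Classifier and Generator): a sensitive-attribute classifier and an encoder-decoder generator are trained on the small annotated subset, and the classifier then pseudo-labels the unannotated subset. PyTorch, the layer widths of the 5-layer encoder/decoder, the 0.9 confidence threshold, and all variable names are assumptions, not the authors' implementation.

```python
# Hypothetical reconstruction of Algorithm 1's structure (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# 5-layer CNN encoder and 5-layer decoder, as the setup row describes (dims assumed).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 8 * 8, 128),           # 128x128 inputs -> 128-d code
)
decoder = nn.Sequential(
    nn.Linear(128, 64 * 8 * 8), nn.Unflatten(1, (64, 8, 8)),
    nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),  # back to 3x128x128
)
classifier = nn.Linear(128, 2)  # predicts the (binary) sensitive attribute

# Toy data: a small annotated subset and a larger unannotated one.
labeled = TensorDataset(torch.randn(32, 3, 128, 128), torch.randint(0, 2, (32,)))
unlabeled = TensorDataset(torch.randn(64, 3, 128, 128))

params = list(encoder.parameters()) + list(decoder.parameters()) + list(classifier.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

for epoch in range(2):  # tiny demo loop
    # 1) Fit the generator (reconstruction) and classifier on the annotated subset.
    for x, a in DataLoader(labeled, batch_size=16):
        z = encoder(x)
        loss = F.mse_loss(decoder(z), x) + F.cross_entropy(classifier(z), a)
        opt.zero_grad(); loss.backward(); opt.step()
    # 2) Pseudo-label the unannotated subset with the current classifier.
    with torch.no_grad():
        for (x,) in DataLoader(unlabeled, batch_size=16):
            probs = F.softmax(classifier(encoder(x)), dim=1)
            conf, a_hat = probs.max(dim=1)
            keep = conf > 0.9  # confidence threshold is an assumption
```

The pseudo-labels `a_hat` would presumably feed the fairness-aware contrastive stage below; how the paper actually alternates or weights these two steps is specified in its Algorithm 1, not recoverable from this table.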
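For the setup row itself, here is a minimal sketch of the described pipeline, assuming PyTorch/torchvision (the paper names no framework). A plain SimCLR-style NT-Xent loss with a per-sample weight slot `w` stands in for the paper's weighted fairness-aware contrastive loss, whose actual weighting scheme is the method's contribution and is not reproduced here; projector widths and batch sizes are likewise assumptions.

```python
# Sketch of: ResNet-18 encoder + MLP projection head, weighted contrastive
# pretraining (paper: 100 epochs), then a linear probe on frozen features
# (paper: 10 epochs). Hyperparameters below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

backbone = resnet18()
backbone.fc = nn.Identity()                        # F(.): 512-d representation
projector = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))

def weighted_ntxent(z1, z2, w=None, tau=0.5):
    """NT-Xent over two views; w is a slot for the paper's per-sample weights."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float('-inf'))              # drop self-similarity
    n = z1.size(0)
    targets = (torch.arange(2 * n) + n) % (2 * n)  # positive = the other view
    loss = F.cross_entropy(sim, targets, reduction='none')
    if w is not None:
        loss = loss * w.repeat(2)                  # fairness-aware weighting slot
    return loss.mean()

opt = torch.optim.Adam(list(backbone.parameters()) + list(projector.parameters()), lr=3e-4)
x1, x2 = torch.randn(8, 3, 128, 128), torch.randn(8, 3, 128, 128)  # two augmented views
w = torch.ones(8)                                  # uniform weights as a stand-in
for step in range(1):                              # paper: 100 epochs over the dataset
    loss = weighted_ntxent(projector(backbone(x1)), projector(backbone(x2)), w)
    opt.zero_grad(); loss.backward(); opt.step()

# Linear probe on the frozen representation.
for p in backbone.parameters():
    p.requires_grad = False
probe = nn.Linear(512, 2)
logits = probe(backbone(x1))                       # train probe with cross-entropy
```

How the per-sample weights are computed (plausibly from the sensitive attributes predicted by the semi-supervised stage above) is defined in the paper's method section, not in this sketch.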