Unbiased Supervised Contrastive Learning
Authors: Carlo Alberto Barbano, Benoit Dufumier, Enzo Tartaglione, Marco Grangetto, Pietro Gori
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the proposed losses on standard vision datasets including CIFAR10, CIFAR100, and ImageNet, and we assess the debiasing capability of FairKL with ϵ-SupInfoNCE, reaching state-of-the-art performance on a number of biased datasets, including real instances of biases in the wild. |
| Researcher Affiliation | Academia | Carlo Alberto Barbano (University of Turin; LTCI, Télécom Paris, IP Paris); Benoit Dufumier (LTCI, Télécom Paris, IP Paris); Enzo Tartaglione (LTCI, Télécom Paris, IP Paris); Marco Grangetto (University of Turin); Pietro Gori (LTCI, Télécom Paris, IP Paris) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code can be found at https://github.com/EIDOSLAB/unbiased-contrastive-learning. |
| Open Datasets | Yes | We validate the proposed losses on standard vision datasets including CIFAR10, CIFAR100, and ImageNet... employing Biased MNIST (Bahng et al., 2020), Corrupted-CIFAR10 (Hendrycks & Dietterich, 2019), bFFHQ (Lee et al., 2021), 9-Class ImageNet (Ilyas et al., 2019) and ImageNet-A (Hendrycks et al., 2021). |
| Dataset Splits | No | The paper mentions 'unbiased test set' for Biased MNIST and evaluation on 'test set' for other datasets, but does not explicitly detail training/validation/test splits or the specific use of a validation set. |
| Hardware Specification | Yes | All of our experiments were run using PyTorch 1.10.0. We used a cluster with 4 NVIDIA V100 GPUs and a cluster of 8 NVIDIA A40 GPUs. |
| Software Dependencies | Yes | All of our experiments were run using PyTorch 1.10.0. |
| Experiment Setup | Yes | We use the original setup from SupCon (Khosla et al., 2020), employing a ResNet-50, a large batch size (1024), a learning rate of 0.5, a temperature of 0.1, and multiview augmentation, for CIFAR-10 and CIFAR-100. We use SGD as optimizer with a momentum of 0.9, and train for 1000 epochs. The learning rate is decayed with a cosine policy with warmup from 0.01, with 10 warmup epochs. |
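
For orientation, the setup quoted in the Experiment Setup row maps roughly onto the following PyTorch skeleton. This is a minimal sketch, not the authors' training script: the projection head, the multiview augmentation pipeline, and the loss call are elided, the 128-dimensional output size is an assumption, and per-epoch warmup granularity is assumed rather than stated.

```python
import math
import torch
from torchvision.models import resnet50

# Quoted setup: ResNet-50, batch size 1024, SGD with lr 0.5 and momentum 0.9,
# temperature 0.1, 1000 epochs, cosine decay with warmup from 0.01 over 10 epochs.
encoder = resnet50(num_classes=128)  # 128-d output is an assumed projection size
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.5, momentum=0.9)

EPOCHS, WARMUP_EPOCHS, BASE_LR, WARMUP_FROM = 1000, 10, 0.5, 0.01

def lr_at(epoch: int) -> float:
    """Linear warmup from 0.01 to 0.5, then cosine decay (per-epoch granularity assumed)."""
    if epoch < WARMUP_EPOCHS:
        return WARMUP_FROM + (BASE_LR - WARMUP_FROM) * epoch / WARMUP_EPOCHS
    progress = (epoch - WARMUP_EPOCHS) / (EPOCHS - WARMUP_EPOCHS)
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * progress))

for epoch in range(EPOCHS):
    for group in optimizer.param_groups:
        group["lr"] = lr_at(epoch)
    # ... iterate over multiview-augmented batches of size 1024 and apply the
    # contrastive loss with temperature 0.1 (see the official repository) ...
```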
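
The losses named in the Research Type row are only cited, not reproduced, in this table. As a rough illustration, below is a minimal sketch of a supervised contrastive loss with an additive margin ε on the negative similarities, in the spirit of ϵ-SupInfoNCE; the function name, the default ε, and the placement of ε relative to the temperature are assumptions, FairKL is not sketched, and the linked EIDOSLAB repository should be treated as the reference implementation.

```python
import torch

def eps_sup_infonce(features: torch.Tensor, labels: torch.Tensor,
                    temperature: float = 0.1, eps: float = 0.25) -> torch.Tensor:
    """Sketch: supervised contrastive loss with an additive margin eps on negatives.

    features: (N, D) L2-normalized embeddings; labels: (N,) integer class labels.
    Hyperparameter values here are illustrative, not the paper's.
    """
    n = features.size(0)
    sim = features @ features.t() / temperature                     # pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    neg_mask = ~pos_mask & ~self_mask

    # Sum over negatives of exp(s_neg + eps): the margin asks positives to beat
    # negatives by at least eps before the loss saturates.
    neg_sum = (torch.exp(sim + eps) * neg_mask).sum(dim=1, keepdim=True)

    # -log( exp(s_pos) / (exp(s_pos) + sum_neg exp(s_neg + eps)) ), averaged over positives.
    log_prob = sim - torch.log(torch.exp(sim) + neg_sum)
    per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return per_anchor.mean()
```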