Directional Bias Amplification
Authors: Angelina Wang, Olga Russakovsky
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To verify that the above shortcomings manifest in practical settings, we revisit the analysis of Zhao et al. (2017) on the COCO (Lin et al., 2014) image dataset with two disjoint protected groups A_woman and A_man, and 66 binary target tasks, T_t, corresponding to the presence of 66 objects in the images. |
| Researcher Affiliation | Academia | Angelina Wang and Olga Russakovsky Princeton University |
| Pseudocode | No | The paper describes its metric and concepts mathematically and textually, but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is located at https://github.com/princetonvisualai/directional-bias-amp. |
| Open Datasets | Yes | To verify that the above shortcomings manifest in practical settings, we revisit the analysis of Zhao et al. (2017) on the COCO (Lin et al., 2014) image dataset... We look at the facial image domain of Celeb A (Liu et al., 2015)... |
| Dataset Splits | Yes | When a threshold is needed in our experiments, we pick it to be well-calibrated on the validation set. In other words, we estimate the expected proportion p of positive labels from the training set and choose a threshold such that on N validation examples, the Np highest-scoring are predicted positive. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using specific model architectures (e.g., VGG16, ResNet18, AlexNet) and loss functions (Binary Cross Entropy Loss), but does not list specific software libraries or their version numbers. |
| Experiment Setup | No | The main text states 'Training details are in Appendix A.2.' but does not provide specific hyperparameter values or training configurations directly within the main text. |
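The well-calibrated thresholding described under Dataset Splits can be sketched in a few lines. The snippet below is a minimal illustration, not code from the authors' repository; the array names (`train_labels`, `val_scores`) and the NumPy-based implementation are assumptions.

```python
import numpy as np

def calibrated_threshold(train_labels, val_scores):
    """Pick a threshold so the proportion of positives estimated on the
    training set is reproduced on the validation set (per binary task).

    train_labels : binary ground-truth labels for one task (training set).
    val_scores   : model scores for that task on the N validation examples.
    """
    p = train_labels.mean()                  # expected proportion of positives
    n_pos = int(round(p * len(val_scores)))  # Np validation examples predicted positive
    if n_pos == 0:
        return np.inf                        # no example is predicted positive
    # Threshold at the Np-th highest validation score
    return np.sort(val_scores)[::-1][n_pos - 1]

# Usage (hypothetical arrays):
# threshold = calibrated_threshold(train_labels, val_scores)
# predictions = val_scores >= threshold
```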