Exacerbating Algorithmic Bias through Fairness Attacks
Authors: Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan
AAAI 2021, pp. 8930-8938 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments that indicate the effectiveness of our proposed attacks. Through experimentation on three different datasets with different fairness measures and definitions, we show the effectiveness of our attacks in achieving the desired goal of affecting fairness. |
| Researcher Affiliation | Academia | University of Southern California; Information Sciences Institute. {ninarehm, mnaveed}@usc.edu, {fredmors, galstyan}@isi.edu |
| Pseudocode | Yes | Algorithm 1: Influence Attack on Fairness; Algorithm 2: Anchoring Attack (a hedged sketch of the anchoring mechanism appears after the table) |
| Open Source Code | Yes | https://github.com/Ninarehm/attack |
| Open Datasets | Yes | German Credit Dataset. This dataset comes from the UCI machine learning repository (Dua and Graff 2017). COMPAS Dataset. ProPublica's COMPAS dataset contains information about defendants from Broward County. Drug Consumption Dataset. This dataset comes from the UCI machine learning repository (Dua and Graff 2017). |
| Dataset Splits | No | The data was split into an 80-20 train and test split; no explicit validation set or split is described (a reproduction sketch appears after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | In our experiments we set λ = 1. In our experiments we set τ = 0. Hinge loss was used to control for accuracy for all the methods in our experiments as in (Koh, Steinhardt, and Liang 2018). (An illustrative objective sketch appears after the table.) |
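The Pseudocode row lists Algorithm 2, the anchoring attack. As a rough illustration of the mechanism described in the paper (placing poisoned points near per-group "anchor" targets but with flipped labels), the sketch below is a hedged reconstruction, not the authors' released code; the function name, binary-label convention, and uniform perturbation are our assumptions, with `tau = 0` matching the reported setting.

```python
import numpy as np

def anchoring_attack(X, y, groups, n_poison, tau=0.0, seed=0):
    """Illustrative sketch of an anchoring-style poisoning attack.

    For each demographic group, pick a random anchor point and emit
    poisoned copies within a tau-ball around it carrying the OPPOSITE
    label, so the trained boundary is skewed against that group.
    Labels are assumed binary in {0, 1}; tau = 0 places poisoned
    points exactly on the anchor, as in the paper's setting.
    """
    rng = np.random.default_rng(seed)
    Xp, yp = [], []
    group_ids = np.unique(groups)
    for g in group_ids:
        anchor = rng.choice(np.where(groups == g)[0])  # random anchoring
        for _ in range(n_poison // len(group_ids)):
            # perturb within distance tau of the anchor point
            x_new = X[anchor] + rng.uniform(-tau, tau, size=X.shape[1])
            Xp.append(x_new)
            yp.append(1 - y[anchor])  # flipped label relative to anchor
    return np.vstack(Xp), np.array(yp)
```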
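The Dataset Splits row reports an 80-20 train/test split with no validation set. That split could be reproduced with a standard scikit-learn call; the `random_state` value is our assumption, since the paper does not report a seed.

```python
from sklearn.model_selection import train_test_split

# 80-20 train/test split as reported; no validation split is described,
# so none is created here. random_state=42 is an assumed seed.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```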
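The Experiment Setup row reports λ = 1 and a hinge loss for accuracy. A minimal sketch of such a combined objective appears below, assuming a linear model and using a smooth demographic-parity surrogate (difference in mean group scores) as the fairness term; the paper itself evaluates several fairness measures, so this term is illustrative only.

```python
import numpy as np

def attacker_loss(theta, X, y, groups, lam=1.0):
    """Hinge (accuracy) loss plus a lam-weighted fairness term,
    reflecting the reported lam = 1 setting. Labels y are assumed
    in {-1, +1}; the fairness term is an assumed smooth surrogate."""
    margins = y * (X @ theta)
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    scores = X @ theta
    # gap in mean scores between the two demographic groups
    gap = abs(scores[groups == 0].mean() - scores[groups == 1].mean())
    return hinge + lam * gap
```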