Online Platforms and the Fair Exposure Problem under Homophily

Authors: Jakob Schoeffer, Alexander Ritchie, Keziah Naggita, Faidra Monachou, Jessica Finocchiaro, Marc Juarez

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We supplement our theoretical results with empirical results to gain additional insights by estimating our model parameters from real-world datasets collected from Twitter and Facebook (Garimella et al. 2017; Bakshy, Messing, and Adamic 2015). Moreover, we measure the price of fairness, i.e., the difference in the platform's utility between the fairness-aware and the fairness-agnostic settings. Using parameters estimated from Bakshy, Messing, and Adamic (2015), we observe an optimal fairness-aware solution that heavily favors one group.
Researcher Affiliation | Academia | 1 Karlsruhe Institute of Technology (KIT); 2 University of Michigan; 3 Toyota Technological Institute at Chicago; 4 Harvard University; 5 Center for Research on Computation and Society (CRCS); 6 University of Edinburgh
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | See https://arxiv.org/abs/2202.09727 for the full paper including the appendix and https://github.com/jfinocchiaro/fair-exposure for all code.
Open Datasets | Yes | We supplement our theoretical results with empirical results to gain additional insights by estimating our model parameters from real-world datasets collected from Twitter and Facebook (Garimella et al. 2017; Bakshy, Messing, and Adamic 2015).
Dataset Splits | No | The paper uses real-world datasets and estimates parameters but does not specify training, validation, or test splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper mentions using "maximum likelihood estimation to fit parameter values" but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, optimizers), model initialization, or training schedules.
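The paper does not detail how its maximum likelihood estimation was performed. As a hedged illustration only, fitting a single engagement-probability parameter from binary interaction data might look like the following sketch; the Bernoulli model, function name, and toy data are hypothetical and not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_bernoulli_mle(observations: np.ndarray) -> float:
    """Fit a single Bernoulli parameter p by maximizing the log-likelihood.

    observations: array of 0/1 outcomes (e.g., whether a user engaged).
    """
    def neg_log_likelihood(p: float) -> float:
        # Clip p away from 0 and 1 to avoid log(0) at the search bounds.
        eps = 1e-9
        p = np.clip(p, eps, 1 - eps)
        return -np.sum(observations * np.log(p)
                       + (1 - observations) * np.log(1 - p))

    # Minimize the negative log-likelihood over the valid range of p.
    result = minimize_scalar(neg_log_likelihood,
                             bounds=(0.0, 1.0), method="bounded")
    return result.x

# Toy data: 7 engagements out of 10 exposures.
# For a Bernoulli model, the MLE coincides with the sample mean (0.7).
data = np.array([1, 1, 0, 1, 1, 0, 1, 1, 0, 1])
p_hat = fit_bernoulli_mle(data)
```

In practice the paper's model parameters would be fit against the Twitter and Facebook datasets it cites, with a likelihood specific to its exposure model rather than this simple Bernoulli example.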