Fairness-Aware Neural Rényi Minimization for Continuous Features

Authors: Vincent Grari, Sylvain Lamprier, Marcin Detyniecki

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically assess and compare our approach and demonstrate significant improvements on previously presented work in the field." (Section 5, Empirical Results)
Researcher Affiliation | Collaboration | 1. Sorbonne Université, LIP6/CNRS, Paris, France; 2. AXA, REV Research, Paris, France; 3. Polish Academy of Science, IBS PAN, Warsaw, Poland
Pseudocode | Yes | Algorithm 1, "HGR Estimation by Neural Network"; Algorithm 2, "Fair HGR NN for Demographic Parity" (illustrative sketches of both appear after the table)
Open Source Code | No | The paper links to https://github.com/criteo-research/continuous-fairness, the code of related work [Mary et al., 2019], but provides no link to the authors' own source code for the methodology described in this paper.
Open Datasets | Yes | The US Census demographic dataset is an extraction of the 2015 American Community Survey; the Motor Insurance dataset originates from a pricing game organized by the French Institute of Actuaries in 2015; the Crime dataset is obtained from the UCI Machine Learning Repository [Dua and Graff, 2017].
Dataset Splits | No | The paper specifies training and test splits but no separate validation split with explicit percentages or counts. It mentions selecting hyperparameters by grid search with five-fold cross validation, which implies an internal validation procedure but not a distinct, reproducible dataset split.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as Tanh activation functions, Dropout, Xavier initialization, and MSE loss, but gives no version numbers for any software or libraries, such as Python, PyTorch, or TensorFlow.
Experiment Setup | No | The paper gives hyperparameter ranges (between 3 and 5 layers, between 8 and 32 units per layer) and the λ values used in a synthetic scenario, but it does not list the concrete values used for the main real-world experiments, such as learning rates, batch sizes, or the exact architectures of the models h, f, and g; a sketch of the implied grid search appears after the table.
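
For readers weighing the pseudocode row above, here is a minimal sketch in the spirit of Algorithm 1: two small networks f and g are trained to maximize the correlation between their standardized outputs, which serves as a neural estimate of the HGR maximal correlation between two variables. The architecture sizes, learning rate, and step count below are illustrative assumptions, not the paper's configuration; the Tanh activations and Xavier initialization reflect components the paper names (its Dropout layers are omitted for brevity).

```python
import torch
import torch.nn as nn

class Witness(nn.Module):
    """Small MLP mapping a scalar variable to a scalar transformation."""
    def __init__(self, hidden=16):  # width is an illustrative guess
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.xavier_uniform_(m.weight)  # Xavier init, as named in the paper

    def forward(self, x):
        return self.net(x)

def hgr_objective(u, v, f, g):
    """Correlation between standardized f(u) and g(v); u, v are 1-D tensors."""
    fu = f(u.unsqueeze(1)).squeeze(1)
    gv = g(v.unsqueeze(1)).squeeze(1)
    fu = (fu - fu.mean()) / (fu.std() + 1e-8)
    gv = (gv - gv.mean()) / (gv.std() + 1e-8)
    return (fu * gv).mean()

def estimate_hgr(u, v, steps=500, lr=1e-3):
    """Train f and g to maximize the correlation; return the final estimate."""
    f, g = Witness(), Witness()
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-hgr_objective(u, v, f, g)).backward()  # gradient ascent on the correlation
        opt.step()
    return hgr_objective(u, v, f, g).item()
```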
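Algorithm 2 couples this estimator with the predictive model in a min-max game: the predictor is penalized by the estimated HGR between its output and the sensitive attribute. The compressed alternating-update loop below is one plausible reading, reusing `Witness` and `hgr_objective` from the previous sketch; the predictor architecture, learning rates, λ, and the one-step alternation schedule are assumptions for illustration (regression with MSE, matching the loss the paper names).

```python
def train_fair_regressor(X, y, s, lam=1.0, epochs=200):
    """X: (N, d) features; y: (N,) targets; s: (N,) continuous sensitive attribute."""
    h = nn.Sequential(nn.Linear(X.shape[1], 32), nn.Tanh(), nn.Linear(32, 1))
    f, g = Witness(), Witness()
    opt_h = torch.optim.Adam(h.parameters(), lr=1e-3)
    opt_fg = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        # Adversary step: sharpen the HGR estimate for the current predictor.
        opt_fg.zero_grad()
        (-hgr_objective(h(X).squeeze(1).detach(), s, f, g)).backward()
        opt_fg.step()
        # Predictor step: fit y while penalizing dependence between h(X) and s.
        opt_h.zero_grad()
        pred = h(X).squeeze(1)
        (mse(pred, y) + lam * hgr_objective(pred, s, f, g)).backward()
        opt_h.step()
    return h
```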
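Finally, the Dataset Splits and Experiment Setup rows both point at the same missing detail: which architectures in the reported ranges were selected by five-fold cross validation. The sketch below illustrates one way to reproduce that selection; the use of scikit-learn's MLPRegressor and the specific grid points are assumptions, since the paper gives only the ranges.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# Grid points chosen inside the reported ranges (3-5 layers, 8-32 units per layer).
param_grid = {
    "hidden_layer_sizes": [(w,) * d for d in (3, 4, 5) for w in (8, 16, 32)],
}
search = GridSearchCV(
    MLPRegressor(activation="tanh", max_iter=2000),
    param_grid,
    cv=5,  # five-fold cross validation, as mentioned in the paper
    scoring="neg_mean_squared_error",
)
# search.fit(X_train, y_train)  # X_train / y_train: user-supplied training arrays
```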