Censoring Representations with an Adversary
Authors: Harrison Edwards, Amos Storkey
ICLR 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images... We demonstrate the ability to provide discriminant free representations for standard test problems, and compare with previous state of the art methods for fairness, showing statistically significant improvement across most cases... We evaluate ALFR on two datasets, Diabetes and Adult. |
| Researcher Affiliation | Academia | Harrison Edwards & Amos Storkey Department of Informatics University of Edinburgh Edinburgh, UK, EH8 9AB H.L.Edwards@sms.ed.ac.uk, A.Storkey@ed.ac.uk |
| Pseudocode | Yes | Algorithm 1 Strictly alternating gradient steps. |
| Open Source Code | No | The paper does not include an unambiguous sentence stating that the authors are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | We used two datasets from the UCI repository Lichman (2013) to demonstrate the efficacy of ALFR. The Adult dataset consists of census data and the task is to predict whether a person makes over 50K dollars per year. The sensitive attribute we chose was Gender... The Diabetes dataset consists of hospital data from the US and the task is to predict whether a patient will be readmitted to hospital. |
| Dataset Splits | Yes | The Adult dataset... We used 35 thousand instances for the training set and approximately 5 thousand instances each for the validation and test sets. ... The Diabetes dataset... We used 80 thousand instances for the training set and approximately 10 thousand instances each for the validation and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software like Theano and Lasagne but does not provide specific version numbers for these or other key software components. |
| Experiment Setup | Yes | The autoencoder in ALFR had U({1, ..., 3}) encoding/decoding layers with U({1, ..., 100}) hidden units, with all hidden layers having the same number of units. Each encoding/decoding unit used the ReLU (Nair & Hinton (2010)) activation. The critic also had U({1, ..., 3}) hidden layers with ReLU activations. The predictor network was simply a logistic regressor on top of the central hidden layer of the autoencoder. The LFR model had U({5, ..., 50}) clusters. In both models the reconstruction error weighting parameter α was fixed at 0.05. For both models we sampled β ~ U[0, 50] and γ ~ U[0, 10]. |
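The "Algorithm 1: strictly alternating gradient steps" scheme noted in the Pseudocode row can be sketched on a toy min-max problem: the critic takes one ascent step on its objective, then the model takes one descent step against the updated critic, never simultaneously. The objective, learning rate, and step count below are illustrative choices for demonstration, not values from the paper.

```python
def alternating_steps(grad_x, grad_y, x0, y0, lr=0.05, steps=200):
    """Strictly alternating gradient steps for min_x max_y f(x, y):
    per iteration, the maximizing player y (the critic) updates first,
    then the minimizing player x updates against the new critic."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + lr * grad_y(x, y)  # critic ascends its objective
        x = x - lr * grad_x(x, y)  # model descends, facing the updated critic
    return x, y

# Toy convex-concave objective f(x, y) = x*y + 0.5*x**2 - 0.5*y**2,
# whose unique saddle point is (0, 0).
grad_x = lambda x, y: y + x  # df/dx
grad_y = lambda x, y: x - y  # df/dy
x, y = alternating_steps(grad_x, grad_y, x0=1.0, y0=1.0)
```

In ALFR the two players are the critic (predicting the sensitive attribute from the representation) and the encoder/predictor; the toy scalar players here stand in for those networks.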
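The Experiment Setup row describes hyperparameters drawn from uniform distributions, i.e. a random-search configuration draw. A minimal sketch of such a draw, using the ranges quoted above; the function name and dictionary keys are illustrative, not from the paper.

```python
import random

def sample_alfr_config(rng=random):
    """One random-search draw over the hyperparameter ranges reported
    in the paper (names of the keys are hypothetical)."""
    return {
        "enc_dec_layers": rng.randint(1, 3),  # U({1, ..., 3}) encoding/decoding layers
        "hidden_units": rng.randint(1, 100),  # U({1, ..., 100}) units, shared by all hidden layers
        "critic_layers": rng.randint(1, 3),   # U({1, ..., 3}) critic hidden layers
        "lfr_clusters": rng.randint(5, 50),   # U({5, ..., 50}) clusters (LFR baseline)
        "alpha": 0.05,                        # reconstruction weight, fixed in both models
        "beta": rng.uniform(0, 50),           # beta ~ U[0, 50]
        "gamma": rng.uniform(0, 10),          # gamma ~ U[0, 10]
    }
```

Note that `random.randint` is inclusive on both ends, matching the discrete uniform distributions U({a, ..., b}).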