FairLISA: Fair User Modeling with Limited Sensitive Attributes Information

Authors: Zheng Zhang, Qi Liu, Hao Jiang, Fei Wang, Yan Zhuang, Le Wu, Weibo Gao, Enhong Chen

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on representative user modeling tasks, including recommender systems and cognitive diagnosis. The results demonstrate that our FairLISA can effectively improve fairness while retaining high accuracy in scenarios with different ratios of missing sensitive attributes.
Researcher Affiliation | Academia | 1: Anhui Province Key Laboratory of Big Data Analysis and Application, University of Science and Technology of China; 2: State Key Laboratory of Cognitive Intelligence; 3: Hefei University of Technology
Pseudocode | Yes | The whole optimization algorithm for FairLISA is shown in Appendix B. (Appendix B contains "Algorithm 1: The FairLISA framework".)
Open Source Code | Yes | The code is released at https://github.com/zhengz99/FairLISA.
Open Datasets | Yes | Specifically, we use PISA2015 for cognitive diagnosis and MovieLens-1M for a recommender system. Detailed descriptions can be found in Appendix C.2. Both of these datasets are publicly available and do not contain any personally identifiable information.
Dataset Splits | Yes | In our experiments, we split each dataset into training, validation, and testing sets, with a ratio of 7:1:2. (One plausible splitting procedure is sketched after the table.)
Hardware Specification | Yes | We implement all models with PyTorch and conduct all experiments on four 2.0GHz Intel Xeon E5-2620 CPUs and a Tesla K20m GPU.
Software Dependencies | No | The paper mentions "PyTorch" but does not provide a specific version number for this or any other software dependency.
Experiment Setup | Yes | For all datasets and models, we set the learning rate to 0.001 and the dropout rate to 0.2. ... We set adversarial coefficients λ1 = 1, λ2 = 2, λ3 = 1 for cognitive diagnosis, and λ1 = 1, λ2 = 20, λ3 = 10 for recommendation tasks. For the adversarial architecture, the filter module is a two-layer perceptron and we use ReLU as the activation function. The discriminators and attackers are three-layer perceptrons with LeakyReLU as the activation function. For them, we set the dropout rate to 0.1 and the negative slope of LeakyReLU to 0.2. (A PyTorch sketch of these modules follows the table.)
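The 7:1:2 split ratio is reported, but the splitting procedure itself is not. Below is a minimal sketch of one plausible implementation in Python; the helper name `split_721`, the flat random shuffle, and the seed are our assumptions, not details from the paper:

```python
import numpy as np

def split_721(records, seed=0):
    """Randomly split a list of interaction records into
    train/validation/test subsets with a 7:1:2 ratio
    (hypothetical helper, not from the paper's released code)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(records))
    n_train = int(0.7 * len(records))
    n_valid = int(0.1 * len(records))
    train = [records[i] for i in idx[:n_train]]
    valid = [records[i] for i in idx[n_train:n_train + n_valid]]
    test = [records[i] for i in idx[n_train + n_valid:]]
    return train, valid, test
```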
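The Experiment Setup row pins down the adversarial components well enough to sketch them in PyTorch: a two-layer filter with ReLU, and three-layer discriminators/attackers with LeakyReLU (negative slope 0.2) and dropout 0.1. This is a non-authoritative rendering: the embedding dimension, hidden width, and binary sensitive attribute are our assumptions, and we assume the attackers reuse the discriminator architecture:

```python
import torch.nn as nn

class Filter(nn.Module):
    """Two-layer perceptron with ReLU that filters sensitive
    information out of user embeddings (widths are assumed)."""
    def __init__(self, dim=128, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Three-layer perceptron with LeakyReLU (negative slope 0.2)
    and dropout 0.1 that predicts a sensitive attribute from a
    filtered embedding; attackers are assumed to share this design."""
    def __init__(self, dim=128, hidden=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.1),
            nn.Linear(hidden, hidden),
            nn.LeakyReLU(0.2),
            nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)
```

Per the reported setup, these modules would be trained with a learning rate of 0.001 (e.g. `torch.optim.Adam(module.parameters(), lr=1e-3)`), with the coefficients λ1, λ2, λ3 weighting the competing loss terms of the adversarial objective.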