Fairness-aware News Recommendation with Decomposed Adversarial Learning
Authors: Chuhan Wu, Fangzhao Wu, Xiting Wang, Yongfeng Huang, Xing Xie
AAAI 2021, pp. 4462-4469 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark dataset show that our approach can effectively improve fairness in news recommendation with minor performance loss. |
| Researcher Affiliation | Collaboration | 1Department of Electronic Engineering & BNRist, Tsinghua University, Beijing 100084, China 2Microsoft Research Asia, Beijing 100080, China |
| Pseudocode | No | The paper provides a diagram of its architecture and describes the model components mathematically, but it does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The dataset used in our experiments is provided by (Wu et al. 2019d), which contains the news impression logs of users and their gender labels (if available). |
| Dataset Splits | Yes | The logs in the last week are used for test, and the rest are used for model training. In addition, we randomly sample 10% of training logs for validation. We use 80% of users for training the attribute prediction model, 10% for validation and the rest 10% for test. (A hedged split sketch follows this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions using Adam as the model optimizer and Glove embeddings, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | Adam (Kingma and Ba 2015) is used as the model optimizer, and the learning rate is 0.001. The dropout (Srivastava et al. 2014) ratio is 0.2. The loss coefficients in Eq. (7) are all set to 0.5. These hyperparameters are tuned on the validation set. (A hedged configuration sketch follows this table.) |
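
The dataset-split row above describes a time-based test split plus two random splits. The following is a minimal sketch of that scheme, not the authors' code: the log structure (`timestamp` key), the variable names, and the fixed random seed are all illustrative assumptions.

```python
import random

random.seed(42)  # hypothetical seed, only to make the sketch deterministic


def split_logs(logs, last_week_start):
    """Logs from the last week -> test; 10% of the remainder -> validation; rest -> train."""
    test = [log for log in logs if log["timestamp"] >= last_week_start]
    earlier = [log for log in logs if log["timestamp"] < last_week_start]
    random.shuffle(earlier)
    n_val = int(0.1 * len(earlier))  # randomly sample 10% of training logs for validation
    return earlier[n_val:], earlier[:n_val], test  # train, validation, test


def split_users(user_ids):
    """80% of users for training the attribute prediction model, 10% validation, 10% test."""
    user_ids = list(user_ids)
    random.shuffle(user_ids)
    n = len(user_ids)
    return (
        user_ids[: int(0.8 * n)],
        user_ids[int(0.8 * n) : int(0.9 * n)],
        user_ids[int(0.9 * n) :],
    )
```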
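The experiment-setup row reports only hyperparameters, and the paper does not name its framework (see the Software Dependencies row). The sketch below assumes PyTorch; `model`, `rec_loss`, `adv_loss`, and `orth_loss` are hypothetical placeholders, and only the Adam optimizer, learning rate 0.001, dropout 0.2, and loss coefficients of 0.5 come from the paper.

```python
import torch

# Stand-in model; the real architecture is the paper's news recommendation model.
model = torch.nn.Sequential(
    torch.nn.Linear(300, 256),
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.2),  # dropout ratio 0.2, as reported
    torch.nn.Linear(256, 1),
)

# Adam optimizer with learning rate 0.001, as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# The paper sets all loss coefficients in its Eq. (7) to 0.5; the individual
# loss terms below are placeholders for the recommendation and adversarial objectives.
lambda_1, lambda_2 = 0.5, 0.5
# total_loss = rec_loss + lambda_1 * adv_loss + lambda_2 * orth_loss
```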