Fair Representation Learning for Recommendation: A Mutual Information Perspective

Authors: Chen Zhao, Le Wu, Pengyang Shao, Kun Zhang, Richang Hong, Meng Wang

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments over two real-world datasets demonstrate the effectiveness of our proposed FairMI in reducing unfairness and improving recommendation accuracy simultaneously." |
| Researcher Affiliation | Academia | 1 School of Computer Science and Information Engineering, Hefei University of Technology; 2 Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (e.g., a clearly labeled "Algorithm" section or code-like formatted procedures). |
| Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | Yes | "We conduct experiments on two datasets: MovieLens-1M (Harper and Konstan 2015) and Lastfm-360K (Celma Herrada et al. 2009)." |
| Dataset Splits | Yes | "On MovieLens-1M, we split the historical records into training set and test set with the ratio of 8:2, and 10% of the test set is used as validation." |
| Hardware Specification | Yes | "The experiments are implemented with Pytorch-1.7.0 on 1 NVIDIA TITAN-RTX GPU." |
| Software Dependencies | Yes | "The experiments are implemented with Pytorch-1.7.0 on 1 NVIDIA TITAN-RTX GPU." |
| Experiment Setup | Yes | "We set the embedding size as D = 64, the mini-batch size is set to 2048 for MovieLens-1M and 4096 for Lastfm-360K, and we choose the Adam optimizer with an initial learning rate of 0.001." |
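Since the paper releases no code, the reported splits and hyperparameters can only be re-implemented by a reader. Below is a minimal, hedged sketch of that setup: the function name `build_splits`, the random seeding, and the dataset sizes are hypothetical illustrations; only the numeric values (8:2 split, 10% of test as validation, D = 64, batch sizes 2048/4096, Adam with lr 0.001) come from the report.

```python
import numpy as np
import torch

# Hyperparameters quoted in the reproducibility table above.
EMBED_DIM = 64        # "embedding size as D = 64"
BATCH_ML1M = 2048     # mini-batch size for MovieLens-1M
BATCH_LASTFM = 4096   # mini-batch size for Lastfm-360K
LR = 0.001            # Adam initial learning rate

def build_splits(records, seed=0):
    """8:2 train/test split; 10% of the test portion held out as validation.

    `records` is any sequence of interaction records; returns index arrays.
    The seed and shuffling scheme are assumptions, not from the paper.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(records))
    cut = int(0.8 * len(records))
    train, rest = idx[:cut], idx[cut:]
    val_cut = int(0.1 * len(rest))
    val, test = rest[:val_cut], rest[val_cut:]
    return train, val, test

# Embedding tables and optimizer configured as reported.
# User/item counts are illustrative MovieLens-1M sizes, not from the paper.
n_users, n_items = 6040, 3706
user_emb = torch.nn.Embedding(n_users, EMBED_DIM)
item_emb = torch.nn.Embedding(n_items, EMBED_DIM)
optimizer = torch.optim.Adam(
    list(user_emb.parameters()) + list(item_emb.parameters()), lr=LR
)
```

With 1,000 records this yields 800 training, 20 validation, and 180 test indices; any re-implementation would still need the paper's loss terms, which this sketch omits.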