Discriminative-Invariant Representation Learning for Unbiased Recommendation

Authors: Hang Pan, Jiawei Chen, Fuli Feng, Wentao Shi, Junkang Wu, Xiangnan He

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on three real-world datasets, validating the rationality and effectiveness of the proposed framework.
Researcher Affiliation | Academia | 1. University of Science and Technology of China; 2. Zhejiang University; 3. Institute of Dataspace
Pseudocode | Yes | Algorithm 1 DIRL. Input: history feedback D_B and f(D_B); the user set U and the item set I; trade-off parameters β, α1, and α2; learning rate lr; weight decay λ. Parameter: the recommendation model's parameter θ; the distribution classifier's parameter ϕ. Output: model predictions of users' feedback on items. 1: Randomly initialize θ and ϕ. 2: while not convergence do... (A hedged structural sketch of this training loop appears after the table.)
Open Source Code | Yes | Code and supplementary materials are available at: https://github.com/HungPaan/DIRL.
Open Datasets | Yes | We use three publicly available datasets: Yahoo!R3, Coat, and KuaiRand-Pure, which contain both biased data for training and unbiased data for testing.
Dataset Splits | No | The paper mentions using biased data for training and unbiased data for testing but does not specify validation splits, exact percentages, or sample counts for any splits in the main text.
Hardware Specification | No | The paper discusses the experimental process but does not provide any specific details about the hardware used (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions using MF as the recommendation model and the Adam optimizer, but it does not specify version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper states that "Supplementary materials describe the experiment details including evaluation metrics, baselines, and hyperparameter settings," indicating that these details are not provided in the main text.
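
The Pseudocode row above quotes only the header and first steps of Algorithm 1, and the hyperparameter details are deferred to the supplementary materials. The following is a minimal structural sketch of such a training loop, assuming PyTorch, an MF recommendation model, and the Adam optimizer (the backbone and optimizer named in the Software Dependencies row). The class names MFScorer and DistClassifier, the MLP classifier architecture, and the placeholder loss terms weighted by β, α1, and α2 are illustrative assumptions, not the authors' actual DIRL objectives.

import torch
import torch.nn as nn

class MFScorer(nn.Module):
    """Matrix-factorization recommender (parameters theta), matching the paper's stated MF backbone."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Dot-product score for each (user, item) pair in the batch.
        return (self.user_emb(users) * self.item_emb(items)).sum(-1)

class DistClassifier(nn.Module):
    """Hypothetical distribution classifier (parameters phi) over concatenated user/item embeddings."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, u_vec, i_vec):
        return self.net(torch.cat([u_vec, i_vec], dim=-1)).squeeze(-1)

def train_dirl_sketch(loader, n_users, n_items, beta=1.0, alpha1=0.1, alpha2=0.1,
                      lr=1e-3, weight_decay=1e-5, epochs=10, dim=32):
    model = MFScorer(n_users, n_items, dim)    # theta
    clf = DistClassifier(dim)                  # phi
    # Joint Adam update over theta and phi; Algorithm 1 may alternate the two updates instead.
    opt = torch.optim.Adam(list(model.parameters()) + list(clf.parameters()),
                           lr=lr, weight_decay=weight_decay)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):                    # stand-in for "while not convergence"
        for users, items, labels in loader:    # batches drawn from the biased feedback D_B
            scores = model(users, items)
            rec_loss = bce(scores, labels.float())          # base recommendation loss
            # Placeholder auxiliary terms weighted by beta, alpha1, alpha2; the real
            # discriminative and invariant objectives are defined in the paper and its code.
            cls_logit = clf(model.user_emb(users), model.item_emb(items))
            aux_disc = bce(cls_logit, labels.float())
            aux_inv = cls_logit.pow(2).mean()
            loss = rec_loss + beta * aux_disc + alpha1 * aux_inv + alpha2 * scores.pow(2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

This sketch mirrors only the inputs and loop structure quoted in the Pseudocode row; for the actual discriminative and invariant loss terms and the tuned values of β, α1, and α2, refer to the released code and supplementary materials.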