How Bad is Top-$K$ Recommendation under Competing Content Creators?

Authors: Fan Yao, Chuanhao Li, Denis Nekipelov, Hongning Wang, Haifeng Xu

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive synthetic and real-world data based simulations also support these theoretical findings. To confirm our theoretical findings and also to empirically measure the social welfare induced by creators' competition, we conduct simulations on game instances $G(\{S_i\}_{i=1}^n, X, \sigma, \beta, K)$ constructed from two synthetic datasets and the MovieLens-1m dataset.
Researcher Affiliation | Academia | Department of Computer Science, University of Virginia, USA; Department of Economics, University of Virginia, USA; Department of Computer Science, University of Chicago, USA.
Pseudocode | Yes | Algorithm 1 (Simulated Annealing for Computing the Globally Optimal Welfare), Algorithm 2 (Best-response Search for the Globally Optimal Welfare), Algorithm 3 (Exp3 for player $i$). Hedged sketches of the annealing loop and the Exp3 update follow this table.
Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a link to a code repository for its methodology.
Open Datasets | Yes | MovieLens-1m dataset (Harper & Konstan, 2015).
Dataset Splits | No | The paper describes the MovieLens-1m dataset and its use for training embeddings, including "5-fold cross-validation" for validating representation quality. However, it does not specify explicit training, validation, and test splits for the primary game simulation experiments.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions algorithms and frameworks such as "deep matrix factorization" and "Exp3", and tools for solving LPs (CPLEX), but does not specify software dependencies with version numbers (e.g., "Python 3.x", "PyTorch 1.x").
Experiment Setup | Yes | For SA, we set $T = 5000$ and the temperature schedule $\tau_t = 0.1/\sqrt{t}$; for BRS, we set $T = \max\{30, 2n\}$ and take the best output from 5 independent runs. Unless specified, we always use a fixed value $(\eta, \epsilon) = (0.1, 0.1)$ in our experiments. We fix $(\beta, K, T) = (0.1, 5, 1000)$ and report the averaged social welfare over $T$ rounds. (These reported values are reused in the sketches below.)
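
For context, Algorithm 1 is only named in the table above and its pseudocode is not reproduced in this report. The following is a minimal simulated-annealing sketch for maximizing social welfare over joint strategy profiles, assuming the reported settings $T = 5000$ and $\tau_t = 0.1/\sqrt{t}$; `welfare_fn`, `propose_fn`, and `init_profile` are hypothetical placeholders, not identifiers from the paper.

```python
import numpy as np

def simulated_annealing(welfare_fn, propose_fn, init_profile, T=5000, rng=None):
    """Sketch of a simulated-annealing search for a high-welfare strategy profile.

    Uses the temperature schedule tau_t = 0.1 / sqrt(t) reported in the paper;
    welfare_fn, propose_fn, and init_profile are hypothetical placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    current = best = init_profile
    current_val = best_val = welfare_fn(init_profile)
    for t in range(1, T + 1):
        tau = 0.1 / np.sqrt(t)                 # reported temperature schedule
        candidate = propose_fn(current, rng)   # local perturbation of the profile
        cand_val = welfare_fn(candidate)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_val >= current_val or rng.random() < np.exp((cand_val - current_val) / tau):
            current, current_val = candidate, cand_val
            if current_val > best_val:
                best, best_val = current, current_val
    return best, best_val
```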
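Similarly, Algorithm 3 (Exp3 for player $i$) is only referenced by name. Below is a generic Exp3-style update, assuming $\eta$ acts as the learning rate and $\epsilon$ as the uniform-exploration weight in the reported $(\eta, \epsilon) = (0.1, 0.1)$; `reward_fn` is a hypothetical stand-in for the player's realized payoff, and the paper's exact estimator and normalization may differ.

```python
import numpy as np

def exp3_player(n_actions, reward_fn, T=1000, eta=0.1, eps=0.1, rng=None):
    """Generic Exp3-style learner for a single player.

    Maintains exponential weights over the player's candidate strategies and mixes
    in eps-uniform exploration; reward_fn is a hypothetical callback returning the
    player's realized payoff in [0, 1] for the chosen action.
    """
    rng = np.random.default_rng() if rng is None else rng
    weights = np.ones(n_actions)
    for _ in range(T):
        probs = (1 - eps) * weights / weights.sum() + eps / n_actions
        action = rng.choice(n_actions, p=probs)
        reward = reward_fn(action)
        reward_hat = reward / probs[action]    # importance-weighted reward estimate
        weights[action] *= np.exp(eta * reward_hat / n_actions)
        weights /= weights.max()               # rescale to avoid numerical overflow
    return weights / weights.sum()             # final mixed strategy
```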