Deconvolving Feedback Loops in Recommender Systems

Authors: Ayan Sinha, David F. Gleich, Karthik Ramani

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We use this metric on synthetic and real-world datasets to (1) identify the extent to which the recommender system affects the final rating matrix, (2) rank frequently recommended items, and (3) distinguish whether a user's rated item was recommended or an intrinsic preference. We tested our approach for deconvolving feedback loops on synthetic RS, and designed a metric to identify the ratings most affected by the RS. We then use the same automated technique to study real-world ratings data, and find that the metric is able to identify items influenced by a RS."
Researcher Affiliation | Academia | Ayan Sinha (Purdue University, sinhayan@mit.edu); David F. Gleich (Purdue University, dgleich@purdue.edu); Karthik Ramani (Purdue University, ramani@purdue.edu)
Pseudocode | Yes | Algorithm 1: Deconvolving Feedback Loops
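Algorithm 1 itself is not reproduced in this summary, but its core step can be sketched: take a truncated SVD of the observed rating matrix and invert an assumed feedback map on each singular value. The specific closed form below (σ_true = σ_obs / (1 + α·σ_obs), in the style of network deconvolution) and the function name `deconvolve_feedback` are assumptions for illustration, not necessarily the paper's exact formula.

```python
import numpy as np

def deconvolve_feedback(R_obs, alpha=1.0, r=10):
    """Sketch of SVD-based deconvolution of a rating matrix.

    Assumes observed and true matrices share singular vectors, and
    inverts an assumed feedback map on the singular values
    (network-deconvolution-style; the paper's exact closed form may
    differ).
    """
    U, s, Vt = np.linalg.svd(R_obs, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    s_true = s / (1.0 + alpha * s)  # assumed inverse feedback map
    return U @ np.diag(s_true) @ Vt

rng = np.random.default_rng(0)
R_obs = rng.normal(3, 1, size=(50, 40))
R_true = deconvolve_feedback(R_obs, alpha=1.0, r=10)
print(R_true.shape)
```

Because the map shrinks each singular value, the deconvolved matrix always has smaller spectral norm than the observed one, which matches the intuition that feedback amplifies dominant directions in the rating matrix.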
Open Source Code | No | The paper does not provide any specific links to source code for the methodology or state that code is available.
Open Datasets | Yes | "Table 1 lists all the datasets we use to validate our approach for deconvolving a RS (from [21, 4, 13])."
Dataset Splits | No | The paper describes how synthetic data was generated and the overall evaluation process (e.g., ROC curves), but it does not provide specific details on how the real-world datasets were split into training, validation, and test sets for the experiments.
Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks) used in the experiments.
Experiment Setup | Yes | "In our experiment, we draw a_u ~ N(3, 1), b_u ~ N(0.5, 0.5), t_u ~ N(0.1, 1), and η_{u,i} ~ εN(0, 1)... We fix the number of iterative updates to be 10, r to be 10, and the resulting rating matrix is Robs. We use α = 1 in all experiments because it models the case when the recommender effects are strong and thus produces the highest discriminative effect between the observed and true ratings (see Figure 2f)."
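The quoted setup can be instantiated as a rough synthetic-data sketch. Only the parameter draws, the 10 iterative updates, r = 10, and α = 1 come from the quote; everything else (the dimensions `n_users`/`n_items`, the noise scale `eps`, the true-rating model, the blend-style feedback update, and the 1-5 clipping) is a hypothetical placeholder, since the paper's generative equations are not reproduced in this summary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, r, n_iters, alpha = 100, 80, 10, 10, 1.0  # r, iters, alpha quoted

# Quoted parameter draws from the experiment setup.
a = rng.normal(3.0, 1.0, n_users)    # a_u ~ N(3, 1)
b = rng.normal(0.5, 0.5, n_users)    # b_u ~ N(0.5, 0.5)
t = rng.normal(0.1, 1.0, n_users)    # t_u ~ N(0.1, 1); role not quoted, unused here
eps = 0.1                            # hypothetical noise scale epsilon
eta = eps * rng.normal(0.0, 1.0, (n_users, n_items))  # eta_{u,i} ~ eps*N(0, 1)

# Hypothetical true-rating model: per-user bias plus noise.
R = a[:, None] + eta

# Iterated rank-r "recommender" feedback: blend in a truncated-SVD
# reconstruction at each step (placeholder for the paper's RS model).
for _ in range(n_iters):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    R_hat = U[:, :r] * s[:r] @ Vt[:r, :]
    R = R + alpha * b[:, None] * (R_hat - R)

R_obs = np.clip(R, 1, 5)  # assumed 1-5 rating scale
print(R_obs.shape)
```

The point of the sketch is only to show how the quoted draws and the 10-step iterative update fit together; any serious reproduction would need the paper's actual generative equations for R_true and the recommender operator.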