DiPS: Differentiable Policy for Sketching in Recommender Systems

Authors: Aritra Ghosh, Saayan Mitra, Andrew Lan (pp. 6703-6712)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the effectiveness of DiPS on real-world datasets under various practical settings and show that it requires up to 50% fewer sketch items to reach the same predictive quality than existing sketching policies.
Researcher Affiliation | Collaboration | Aritra Ghosh (1), Saayan Mitra (2), Andrew Lan (1); (1) University of Massachusetts Amherst, (2) Adobe Research
Pseudocode | Yes | Algorithm 1: Training of DiPS
Open Source Code | Yes | Our implementation will be publicly available at https://github.com/arghosh/DiPS.
Open Datasets | Yes | We use five publicly available benchmark datasets: the MovieLens 1M and 10M datasets (Harper and Konstan 2015) and the Netflix Prize dataset for explicit RSs, and the Amazon Book and Foursquare datasets for implicit RSs.
Dataset Splits | Yes | We randomly split 60-20-20% of the users in the datasets into training-validation-testing sets.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions that 'Model details and parameter settings can be found in the supplementary material' but does not list specific software dependencies with version numbers in the main text.
Experiment Setup | No | The paper states, 'Model details and parameter settings can be found in the supplementary material,' indicating that specific experimental setup details such as hyperparameters are not provided in the main text.
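The 60-20-20 per-user split reported above could be reproduced along the following lines. This is a minimal illustrative sketch, not the authors' released code: the function name `split_users` and the fixed seed are assumptions introduced here for clarity.

```python
import random

def split_users(user_ids, seed=0):
    """Randomly split user IDs into 60% train, 20% validation,
    20% test, mirroring the per-user split described in the paper.
    The seed is an assumption for reproducibility of this sketch."""
    rng = random.Random(seed)
    ids = list(user_ids)
    rng.shuffle(ids)
    n = len(ids)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_users(range(100))
print(len(train), len(val), len(test))  # 60 20 20
```

Note that the split is over users, not individual interactions, so all of a held-out user's interactions land in the same partition.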