Learning to Suggest Breaks: Sustainable Optimization of Long-Term User Engagement
Authors: Eden Saig, Nir Rosenfeld
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide an empirical evaluation of our approach on semi-synthetic data. |
| Researcher Affiliation | Academia | Department of Computer Science, Technion - Israel Institute of Technology, Haifa, Israel. Correspondence to: Eden Saig <edens@cs.technion.ac.il>, Nir Rosenfeld <nirr@cs.technion.ac.il>. |
| Pseudocode | Yes | Algorithm 1 ("Sample from SLV(p; u)") and Algorithm 2 ("Adaptive policy optimization using sparse rating signals") are provided in the appendix. |
| Open Source Code | Yes | Code is available at: https://github.com/edensaig/suggest-breaks. |
| Open Datasets | Yes | The MovieLens 1M dataset (Harper & Konstan, 2015) includes 1,000,209 ratings provided by 6,040 users for 3,706 items... The dataset is publicly available at: https://grouplens.org/datasets/movielens/1m/. |
| Dataset Splits | No | The remaining 70% of data points were used for training and testing. For these, we first randomly sampled 1,000 users to form the test set. Then, the remaining users were partitioned into the main train set S, which included 70% (3,528 for ML1M, 28,652 for Goodreads) of these users, and the experimental treatment sets D(j), each including 10% (504 for ML1M, 4,093 for Goodreads) of users, for N = 3. The paper defines training, test, and experimental treatment sets but does not explicitly define a separate validation split (a split sketch follows the table). |
| Hardware Specification | Yes | Hardware: All experiments were run on a single laptop, with 16GB of RAM, M1 Pro processor, and with no GPU support. |
| Software Dependencies | No | The paper mentions specific software packages (the Surprise package, scikit-learn, and scipy.optimize) but does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | Softmax temperature was set to 0.5. We set α = 0.065, and chose γ = 0.02, δ = 0.001 (which together determine scale) so that typical values for the engagement rate (1/T)\|S_u\| are on the order of 10 for the chosen T = 100 (a softmax sketch follows the table). |
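
The Dataset Splits row describes a user-level split: 1,000 held-out test users, then a main train set S of 70% and N = 3 treatment sets of 10% each drawn from the remaining users. The sketch below illustrates that split under assumed file path, column names, and random seed; it is not the authors' code (their repository, linked above, contains the actual procedure), and the preceding split of the raw rating data is not shown.

```python
# Hedged sketch of the user-level split described in the "Dataset Splits" row.
# File path, column names, and the random seed are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# MovieLens 1M ratings file; the "::" separator requires the python engine.
ratings = pd.read_csv(
    "ml-1m/ratings.dat",
    sep="::",
    engine="python",
    names=["user_id", "item_id", "rating", "timestamp"],
)

users = ratings["user_id"].unique()
rng.shuffle(users)

# 1,000 randomly sampled users form the test set.
test_users = users[:1000]
remaining = users[1000:]

# Of the remaining users: 70% main train set S, then N = 3 treatment sets of 10% each.
n_train = int(0.7 * len(remaining))
n_treat = int(0.1 * len(remaining))
train_users = remaining[:n_train]
treatment_sets = [
    remaining[n_train + j * n_treat : n_train + (j + 1) * n_treat]
    for j in range(3)
]

print(len(test_users), len(train_users), [len(d) for d in treatment_sets])
```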
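
The Experiment Setup row quotes a softmax temperature of 0.5. As a minimal illustration of temperature-scaled softmax weighting, the snippet below applies it to a hypothetical score vector; the function name and score values are assumptions, not taken from the paper.

```python
# Minimal sketch of a temperature-scaled softmax (temperature 0.5, as in the quoted setup).
# The example scores are hypothetical.
import numpy as np

def softmax(scores, temperature=0.5):
    """Map raw scores to a probability distribution; lower temperature sharpens it."""
    z = np.asarray(scores, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

print(softmax([2.0, 1.0, 0.5]))  # sharper than the temperature-1 distribution
```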