Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Generalized Delayed Feedback Model with Post-Click Information in Recommender Systems

Authors: Jiaqi Yang, De-Chuan Zhan

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our analysis on public datasets, and experimental performance confirms the effectiveness of our method. |
| Researcher Affiliation | Academia | Jia-Qi Yang, De-Chuan Zhan. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210023, China. EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Streaming training of GDFM; Algorithm 2: Estimating wj |
| Open Source Code | Yes | Code available at https://github.com/ThyrixYang/gdfm_nips22 |
| Open Datasets | Yes | Criteo Conversion Logs is collected from an online display advertising service... (https://labs.criteo.com/2013/12/conversion-logs-dataset/) and Taobao User Behavior is a subset of user behaviors on Taobao... (https://tianchi.aliyun.com/dataset/dataDetail?dataId=649&userId=1&lang=en-us) |
| Dataset Splits | Yes | The datasets are split into pretraining and streaming datasets. |
| Hardware Specification | No | The paper's checklist indicates that details regarding the total amount of compute and type of resources used are in the supplementary material ('[Yes] Details in supplementary'), meaning they are not explicitly described within the main paper's text. |
| Software Dependencies | No | The paper mentions general aspects of implementation ('We use the same architecture for all the methods to ensure a fair comparison. All the methods are carefully tuned. We use α = 2, β = 1, λ = 0.01, lr = 10^-3 for GDFM.') but does not specify particular software libraries or frameworks with their version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x). |
| Experiment Setup | Yes | All the methods are carefully tuned. We use α = 2, β = 1, λ = 0.01, lr = 10^-3 for GDFM. The network structure and procedure to calculate the proxy feedback loss Eq. (4) used by GDFM is depicted in Figure 1(b). |