Scalable Demand-Aware Recommendation

Authors: Jinfeng Yi, Cho-Jui Hsieh, Kush R. Varshney, Lijun Zhang, Yao Li

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We first conduct experiments with simulated data to verify that the proposed demand-aware recommendation algorithm is computationally efficient and robust to noise. In the real-world experiments, we evaluate the proposed demand-aware recommendation algorithm by comparing it with six state-of-the-art recommendation methods: (a) M3F, maximum-margin matrix factorization [24]; (b) PMF, probabilistic matrix factorization [25]; (c) WR-MF, weighted regularized matrix factorization [14]; (d) CP-APR, Candecomp-Parafac alternating Poisson regression [7]; (e) Rubik, a knowledge-guided tensor factorization and completion method [30]; and (f) BPTF, Bayesian probabilistic tensor factorization [31]. (A sketch of one such baseline appears after this table.)
Researcher Affiliation | Collaboration | Jinfeng Yi (1), Cho-Jui Hsieh (2), Kush R. Varshney (1), Lijun Zhang (3), Yao Li (2). (1) IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; (2) University of California, Davis, CA, USA; (3) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China.
Pseudocode | No | The paper describes the optimization algorithm in text but does not include formally structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | Our testbeds are two real-world datasets: Tmall (http://ijcai-15.org/index.php/repeat-buyers-prediction-competition) and Amazon Review (http://jmcauley.ucsd.edu/data/amazon/).
Dataset Splits | No | For each user, we randomly sample 90% of her purchase records as the training data and use the remaining 10% as the test data. The paper specifies training and test splits but does not explicitly mention a separate validation split. (A sketch of this per-user split appears after this table.)
Hardware Specification | Yes | Table 1 summarizes the CPU time of solving problem (4) on an Intel Xeon 2.40 GHz server with 32 GB main memory.
Software Dependencies | No | The paper refers to various algorithms and computational methods but does not name specific software with version numbers (e.g., Python, PyTorch, or TensorFlow versions) used for the implementation.
Experiment Setup | No | The paper does not explicitly provide concrete hyperparameter values or detailed training configurations such as learning rates, batch sizes, or optimizer settings.
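Of the six baselines listed under Research Type, PMF [25] is the simplest to illustrate. Below is a minimal sketch of a PMF-style matrix factorization baseline, assuming a NumPy environment; the function name `pmf_sgd`, the toy rating matrix, and all hyperparameter values are illustrative assumptions, not taken from the paper or from the baselines' reference implementations.

```python
import numpy as np

def pmf_sgd(R, mask, rank=10, lr=0.01, reg=0.1, epochs=100, seed=0):
    """Factor R ~ U @ V.T with SGD over observed entries only
    (mask is truthy where an entry is observed), with L2
    regularization on the latent factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, rank))
    V = 0.1 * rng.standard_normal((n_items, rank))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            # Compute both gradients from the pre-update factors,
            # then step; this keeps the update a faithful SGD step
            # on the squared error of each observed entry.
            grad_u = err * V[j] - reg * U[i]
            grad_v = err * U[i] - reg * V[j]
            U[i] += lr * grad_u
            V[j] += lr * grad_v
    return U, V

# Toy usage: a 5x4 rating matrix; zeros denote unobserved entries.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)
U, V = pmf_sgd(R, mask=(R > 0), rank=2)
print(np.round(U @ V.T, 1))  # reconstructed ratings
```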
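The per-user 90/10 split quoted under Dataset Splits can be reproduced directly. The following sketch assumes purchase records arrive as (user_id, item_id, timestamp) tuples; that layout, the function name `per_user_split`, and the fixed seed are assumptions for illustration, since the paper does not specify them.

```python
import random
from collections import defaultdict

def per_user_split(records, train_frac=0.9, seed=42):
    """For each user, randomly assign `train_frac` of her purchase
    records to the training set and the remainder to the test set.
    No separate validation split is produced, matching the paper's
    description of its protocol."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for rec in records:  # rec = (user_id, item_id, timestamp)
        by_user[rec[0]].append(rec)
    train, test = [], []
    for user, recs in by_user.items():
        rng.shuffle(recs)
        cut = int(round(train_frac * len(recs)))
        train.extend(recs[:cut])
        # Users with very few records may contribute nothing to test.
        test.extend(recs[cut:])
    return train, test

# Toy usage with the hypothetical record layout above.
records = [("u1", "i1", 1), ("u1", "i2", 2), ("u1", "i3", 3),
           ("u2", "i1", 1), ("u2", "i4", 2)]
train, test = per_user_split(records)
print(len(train), len(test))
```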