LLMRG: Improving Recommendations through Large Language Model Reasoning Graphs
Authors: Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate our proposed method, we conduct experiments on three benchmark datasets: the Amazon-Beauty, Amazon-Clothing, and MovieLens-1M (ML-1M) datasets (McAuley et al. 2015; Harper and Konstan 2015). |
| Researcher Affiliation | Collaboration | Ant Group; University of Virginia |
| Pseudocode | No | The paper includes architectural diagrams and descriptions of its modules but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | To evaluate our proposed method, we conduct experiments on three benchmark datasets: the Amazon-Beauty, Amazon-Clothing, and MovieLens-1M (ML-1M) datasets (McAuley et al. 2015; Harper and Konstan 2015). |
| Dataset Splits | No | The paper describes a leave-one-out evaluation strategy ("we utilize a leave-one-out strategy where we repeatedly hold out one item from each user's sequence of interactions") but does not state explicit train/validation/test split sizes; see the split sketch after the table. |
| Hardware Specification | No | The paper discusses the use of LLMs such as GPT-3.5 and GPT-4 but does not specify any hardware (e.g., GPU models, CPU types, memory) used for running the experiments or training the LLMRG model. |
| Software Dependencies | No | The paper mentions using GPT-3.5 or GPT-4 and base models such as SR-GNN, but it does not provide version numbers for any software dependencies, such as programming languages, libraries, or frameworks. |
| Experiment Setup | No | The 'Settings' section details the datasets and evaluation metrics (HR@n, NDCG@n) but does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs), optimizer settings, or other system-level training configurations. A sketch of the HR@n/NDCG@n metrics follows the table. |
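
The leave-one-out protocol quoted in the Dataset Splits row is standard in sequential recommendation, but the paper does not spell out its exact details. Below is a minimal Python sketch of one common reading of it: the most recent item per user is held out for testing and the second-to-last for validation. The function name, the validation step, and the minimum-length cutoff are assumptions for illustration, not the paper's confirmed setup.

```python
def leave_one_out_split(user_sequences):
    """Split each user's chronologically ordered item sequence.

    user_sequences: dict mapping user_id -> list of item_ids,
                    ordered by interaction time.
    Returns (train, valid, test); valid/test hold one item per user.
    """
    train, valid, test = {}, {}, {}
    for user, seq in user_sequences.items():
        if len(seq) < 3:          # too short to carve out valid + test items
            train[user] = seq
            continue
        train[user] = seq[:-2]    # everything up to the last two interactions
        valid[user] = seq[-2]     # second-to-last item for validation (assumed)
        test[user] = seq[-1]      # most recent item held out for testing
    return train, valid, test
```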
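The Experiment Setup row names HR@n and NDCG@n as the evaluation metrics. For readers unfamiliar with them, here is a minimal sketch of the textbook definitions for the single-ground-truth case that leave-one-out evaluation produces; this is standard usage, not code from the paper.

```python
import math

def hit_rate_at_n(ranked_items, target, n):
    """HR@n: 1 if the held-out item appears in the top-n list, else 0."""
    return 1.0 if target in ranked_items[:n] else 0.0

def ndcg_at_n(ranked_items, target, n):
    """NDCG@n for a single ground-truth item: 1/log2(rank+1) if it is
    ranked within the top n, else 0 (the ideal DCG is 1 in this case)."""
    for rank, item in enumerate(ranked_items[:n], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0
```

Both metrics are then averaged over all users, e.g. `sum(hit_rate_at_n(r, t, 10) for r, t in results) / len(results)` for HR@10.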