Factual and Informative Review Generation for Explainable Recommendation
Authors: Zhouhang Xie, Sameer Singh, Julian McAuley, Bodhisattwa Prasad Majumder
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on Yelp, Trip Advisor, and Amazon Movie Reviews dataset show our model could generate explanations that more reliably entail existing reviews, are more diverse, and are rated more informative by human evaluators. |
| Researcher Affiliation | Academia | 1 University of California, San Diego 2 University of California, Irvine {zhx022, jmcauley, bmajumde}@ucsd.edu, sameer@uci.edu |
| Pseudocode | No | The paper includes architectural diagrams (Figure 1, Figure 2) but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/zhouhanxie/PRAG |
| Open Datasets | Yes | We conduct our experiments on three publicly available dataset and splits from various domains (Li, Zhang, and Chen 2020): Yelp (restaurant reviews), Trip Advisor (hotel) and Amazon Movies and TV (He and McAuley 2016). (...) Footnote URLs: https://www.yelp.com/dataset/challenge, https://www.tripadvisor.com |
| Dataset Splits | Yes | We conduct our experiments on three publicly available dataset and splits from various domains (Li, Zhang, and Chen 2020): Yelp (restaurant reviews), Trip Advisor (hotel) and Amazon Movies and TV (He and McAuley 2016). Note that the data splits guarantee that products in the test set always appear in the training set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like MPNET, Huggingface Transformers, GPT-2, Unified QA, and T5 but does not specify their version numbers, which is crucial for reproducibility. |
| Experiment Setup | No | The paper lacks specific details regarding experimental setup, such as hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer settings, or other system-level training configurations in the main text. |
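The Dataset Splits row quotes the paper's guarantee that every product in the test set also appears in the training set. A minimal sketch of how a reviewer could verify that property on the released splits (the interaction format and ids here are hypothetical, not taken from the authors' files):

```python
# Sketch: check that every item id in the test split also occurs in the
# training split, matching the paper's stated split guarantee.
# The (user, item) tuple format and the toy ids below are assumptions
# for illustration only.

def items_covered(train_interactions, test_interactions):
    """Return True if every item id in test also occurs in train."""
    train_items = {item for _, item in train_interactions}
    return all(item in train_items for _, item in test_interactions)

# toy user-item interaction lists (hypothetical ids)
train = [("u1", "i1"), ("u2", "i2"), ("u1", "i3")]
good_test = [("u3", "i1"), ("u2", "i3")]
bad_test = [("u1", "i9")]  # i9 never seen during training

print(items_covered(train, good_test))  # True
print(items_covered(train, bad_test))   # False
```

A real check would load the (Li, Zhang, and Chen 2020) split files and run the same set-containment test per dataset.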
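The Software Dependencies row flags that the paper names libraries (Huggingface Transformers, GPT-2, T5, MPNET, Unified QA) without version numbers. A small sketch of how reproducers could record exact versions from their own environment, using only the standard library (`importlib.metadata`, Python 3.8+); the package names passed in are examples, not versions reported by the paper:

```python
# Sketch: emit pip-style "name==version" pins for installed packages,
# so a reproduction attempt can log the exact dependency versions it used.
from importlib import metadata

def pinned_requirements(packages):
    """Return pip-style pins for each package, or a comment if missing."""
    pins = []
    for name in packages:
        try:
            pins.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            pins.append(f"# {name}: not installed")
    return pins

# example package list (assumed, since the paper reports no versions)
for line in pinned_requirements(["transformers", "torch"]):
    print(line)
```

Writing these pins into a `requirements.txt` alongside the code would close the gap this row identifies.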