Asymmetrical Hierarchical Networks with Attentive Interactions for Interpretable Review-Based Recommendation

Authors: Xin Dong, Jingchao Ni, Wei Cheng, Zhengzhang Chen, Bo Zong, Dongjin Song, Yanchi Liu, Haifeng Chen, Gerard de Melo

Pages: 7667-7674

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on a variety of real datasets demonstrate the effectiveness of our method. We conduct experiments on 10 real datasets. The results demonstrate that AHN consistently outperforms the state-of-the-art methods by a large margin, while also providing good interpretations of the predictions.
Researcher Affiliation | Collaboration | Xin Dong (1), Jingchao Ni (2), Wei Cheng (2), Zhengzhang Chen (2), Bo Zong (2), Dongjin Song (2), Yanchi Liu (2), Haifeng Chen (2), Gerard de Melo (1); (1) Rutgers University, (2) NEC Laboratories America
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes the model using mathematical equations and textual explanations.
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | We conducted experiments on 10 different datasets: 9 Amazon product review datasets covering 9 different domains, and the large-scale Yelp challenge dataset. For the Yelp dataset, we follow (Seo et al. 2017) and focus on restaurants in the AZ metropolitan area (https://www.yelp.com/dataset/challenge).
Dataset Splits | Yes | For each dataset, we randomly split the user-item pairs into an 80% training set, a 10% validation set, and a 10% testing set.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions several software components (e.g., GloVe, Adam, Bi-LSTM) by referencing their corresponding research papers, but it does not specify version numbers for these components, which reproducibility would require.
Experiment Setup | Yes | The dimensionality of the hidden states of the Bi-LSTM is set to 150. The dimensionality of the user and item ID embeddings is set to 300. The dimensionality of Ms (Mr) in Eq. (6) (Eq. (11)) is 300. We apply dropout (Srivastava et al. 2014) with rate 0.5 after the fully connected layer to alleviate overfitting. The loss function is optimized by Adam (Kingma and Ba 2014) with a learning rate of 0.0002 and a maximum of 10 epochs. For all methods, the dimensionality of the word embeddings is set to 300.
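The split protocol and hyperparameters reported above can be collected in a short sketch. Since the authors released no code, every name here is illustrative and this is only a minimal reconstruction of the stated settings, not the paper's implementation:

```python
import random

# Hyperparameters as reported in the paper (values only; names are ours).
HYPERPARAMS = {
    "bilstm_hidden_dim": 150,   # hidden state size of the Bi-LSTM
    "id_embedding_dim": 300,    # user and item ID embeddings
    "word_embedding_dim": 300,  # word embeddings, for all methods
    "interaction_dim": 300,     # Ms / Mr in Eq. (6) / Eq. (11)
    "dropout_rate": 0.5,        # applied after the fully connected layer
    "optimizer": "Adam",
    "learning_rate": 0.0002,
    "max_epochs": 10,
}

def split_80_10_10(pairs, seed=0):
    """Randomly split user-item pairs into 80% train / 10% val / 10% test."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    n_train = int(0.8 * len(pairs))
    n_val = int(0.1 * len(pairs))
    train = pairs[:n_train]
    val = pairs[n_train:n_train + n_val]
    test = pairs[n_train + n_val:]
    return train, val, test
```

For example, splitting 100 user-item pairs this way yields 80 training, 10 validation, and 10 testing pairs, matching the reported protocol.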