Aspect-Level Sentiment-Controllable Review Generation with Mutual Learning Framework

Authors: Huimin Chen, Yankai Lin, Fanchao Qi, Jinyi Hu, Peng Li, Jie Zhou, Maosong Sun (pp. 12639-12647)

AAAI 2021

Reproducibility
Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show our model can achieve aspect-sentiment control accuracy up to 88% without losing generation quality."
Researcher Affiliation | Collaboration | 1 School of Journalism and Communication, Tsinghua University, Beijing, China; 2 Department of Computer Science and Technology, Tsinghua University, Beijing, China; Institute for Artificial Intelligence, Tsinghua University, Beijing, China; State Key Lab on Intelligent Technology and Systems, Tsinghua University, Beijing, China; 3 Pattern Recognition Center, WeChat AI, Tencent Inc., China
Pseudocode | No | The paper describes its methods using text and mathematical equations, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | "We conduct experiments of the ASRG task on two real-world datasets: Yelp Restaurant dataset (https://www.yelp.com/dataset) and RateBeer dataset (McAuley, Leskovec, and Jurafsky 2012)."
Dataset Splits | Yes | "We use 500 reviews in each dataset for supervised training, and 250 reviews for validation and test, respectively."
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper mentions 'Adam (Kingma and Ba 2014) is used for optimization' and 'NLTK' (with a URL) but does not provide version numbers for these or any other software dependencies needed for replication.
Experiment Setup | Yes | "The dimension of word embeddings is 512, and the embedding dimensions of user, product, and overall sentiment are all set to 256. In the review generator, the sizes of outline and aspect-sentiment representations are 512. The hidden states in the sentence decoder and sentiment classifier are also 512-dimensional. We tune the hyper-parameters on the validation set, and set α and β in the generator to 0.3 and 0.5, respectively. λ in the classifier is set to 0.05. Adam (Kingma and Ba 2014) is used for optimization, and the batch sizes of both the generator and classifier are 256. We also use dropout (drop rate = 0.25) to avoid over-fitting. Training of the generator is stopped when the perplexity on the validation set no longer decreases, with max epoch number of 20."
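For replication attempts, the hyper-parameters quoted above can be gathered into a single configuration sketch. Note that the authors did not release code, so the class and field names below are illustrative assumptions, not identifiers from the paper:

```python
from dataclasses import dataclass

@dataclass
class ASRGConfig:
    """Hyper-parameters as reported in the paper's experiment setup.

    All names are hypothetical; only the values come from the paper.
    """
    word_emb_dim: int = 512      # word embedding dimension
    attr_emb_dim: int = 256      # user, product, and overall-sentiment embeddings
    outline_dim: int = 512       # outline and aspect-sentiment representations
    hidden_dim: int = 512        # sentence decoder and sentiment classifier states
    alpha: float = 0.3           # generator weight α, tuned on the validation set
    beta: float = 0.5            # generator weight β, tuned on the validation set
    lam: float = 0.05            # classifier weight λ
    batch_size: int = 256        # used for both generator and classifier
    dropout: float = 0.25        # drop rate to avoid over-fitting
    max_epochs: int = 20         # early stop on validation perplexity

cfg = ASRGConfig()
print(cfg.alpha, cfg.beta, cfg.lam)
```

A dataclass keeps the reported values in one auditable place, which is useful given that the paper's own code is unavailable.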