Sentiment-Controllable Chinese Poetry Generation
Authors: Huimin Chen, Xiaoyuan Yi, Maosong Sun, Wenhao Li, Cheng Yang, Zhipeng Guo
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show our model can control the sentiment of not only a whole poem but also each line, and improve the poetry diversity against the state-of-the-art models without losing quality. |
| Researcher Affiliation | Academia | Department of Computer Science and Technology, Tsinghua University; Institute for Artificial Intelligence, Tsinghua University; Beijing National Research Center for Information Science and Technology, Tsinghua University; State Key Lab on Intelligent Technology and Systems, Tsinghua University |
| Pseudocode | No | The paper describes its methods using mathematical formulations and textual explanations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code and the Fine-grained Sentiment Poetry Corpus will be available at https://github.com/THUNLP-AIPoet. |
| Open Datasets | Yes | Due to the lack of off-the-shelf sentimental poetry corpus, we first build a fine-grained manually-labelled sentimental Chinese corpus. ... Our source code and the Fine-grained Sentiment Poetry Corpus will be available at https://github.com/THUNLP-AIPoet. |
| Dataset Splits | Yes | For unlabelled data, we randomly select 4,500 poems for validation and testing respectively and the rest for training. ... For labelled data, we use 500 poems for validation and testing respectively. (A hedged sketch of this split follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or other computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam [Kingma and Ba, 2015] with mini-batches (batch size 64) is used for optimization.' but does not specify versions for any programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | The dimensions of word embedding, sentiment embedding and latent variable are 256, 32, 128 respectively. The hidden state size is 512 for the encoder, decoder and content sequence; 64 for the sentiment sequence. Adam [Kingma and Ba, 2015] with mini-batches (batch size 64) is used for optimization. We also use dropout (keep ratio=0.75) to avoid overfitting. For testing, all models generate poems with beam search (beam size = 20). (These hyperparameters are collected into a hedged configuration sketch below.) |
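
The random split described in the Dataset Splits row is straightforward to reproduce. Below is a minimal sketch, assuming the unlabelled corpus is held as a Python list of poems; the function name and seed are hypothetical, not taken from the authors' repository:

```python
import random

def split_corpus(poems, n_valid=4500, n_test=4500, seed=42):
    """Randomly partition poems into train/validation/test sets.

    Mirrors the paper's description for the unlabelled data:
    4,500 poems each for validation and testing, the rest for training.
    """
    rng = random.Random(seed)       # fixed seed is an assumption; the paper gives none
    shuffled = poems[:]             # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    valid = shuffled[:n_valid]
    test = shuffled[n_valid:n_valid + n_test]
    train = shuffled[n_valid + n_test:]
    return train, valid, test
```

The same function reproduces the labelled split by calling it with `n_valid=500, n_test=500`.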
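
The Experiment Setup row reports every hyperparameter explicitly, which is what makes this row reproducible. The sketch below collects them into one place as a PyTorch-style configuration. This is an illustrative stub, not the authors' model: the paper does not name its framework, the GRU layers, VAE-style latent head, and class layout are assumptions, and a keep ratio of 0.75 corresponds to a dropout probability of 0.25 in PyTorch's convention:

```python
from torch import nn, optim

# Hyperparameters exactly as reported in the paper.
WORD_EMB_DIM = 256        # word embedding dimension
SENTIMENT_EMB_DIM = 32    # sentiment embedding dimension
LATENT_DIM = 128          # latent variable dimension
ENC_DEC_HIDDEN = 512      # encoder, decoder, and content sequence hidden size
SENTIMENT_HIDDEN = 64     # sentiment sequence hidden size
BATCH_SIZE = 64
DROPOUT_P = 1.0 - 0.75    # paper reports keep ratio 0.75
BEAM_SIZE = 20            # beam search width at test time

class PoetryModelStub(nn.Module):
    """Hypothetical module layout; the real architecture is in the
    authors' repository (https://github.com/THUNLP-AIPoet)."""

    def __init__(self, vocab_size, n_sentiments):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, WORD_EMB_DIM)
        self.sent_emb = nn.Embedding(n_sentiments, SENTIMENT_EMB_DIM)
        self.encoder = nn.GRU(WORD_EMB_DIM, ENC_DEC_HIDDEN, batch_first=True)
        self.sent_rnn = nn.GRU(SENTIMENT_EMB_DIM, SENTIMENT_HIDDEN, batch_first=True)
        # Mean and log-variance heads, assuming a VAE-style latent variable.
        self.to_latent = nn.Linear(ENC_DEC_HIDDEN, LATENT_DIM * 2)
        self.dropout = nn.Dropout(DROPOUT_P)

# Placeholder sizes; set vocab_size and n_sentiments from the actual corpus.
model = PoetryModelStub(vocab_size=10000, n_sentiments=5)
optimizer = optim.Adam(model.parameters())  # Adam [Kingma and Ba, 2015]; no learning rate is reported
```

Since the paper specifies no library versions (see the Software Dependencies row), exact numerical reproduction may still vary across framework releases even with these settings.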