Complementary Auxiliary Classifiers for Label-Conditional Text Generation
Authors: Yuan Li, Chunyuan Li, Yizhe Zhang, Xiujun Li, Guoqing Zheng, Lawrence Carin, Jianfeng Gao (pp. 8303-8310)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To establish a comprehensive benchmark fostering future research, we consider a suite of four datasets, and systematically reproduce three representative methods. CARA shows consistent improvement over the previous methods on the task of label-conditional text generation, and achieves state-of-the-art on the task of attribute transfer. Quantitative and qualitative experimental results demonstrate that the proposed techniques consistently show improved performance. |
| Researcher Affiliation | Collaboration | Yuan Li,1 Chunyuan Li,2 Yizhe Zhang,2 Xiujun Li,2 Guoqing Zheng,2 Lawrence Carin,1 Jianfeng Gao2 1Duke University, 2Microsoft Research, Redmond |
| Pseudocode | No | The paper provides mathematical formulations and a system diagram (Figure 2), but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and experiment setup are available on GitHub: https://github.com/s1155026040/CARA |
| Open Datasets | Yes | We consider a suite of four datasets... summarized in Table 1. Personality captioning (Shuster et al. 2019), Style captioning (Gan et al. 2017), Yahoo dataset (Zhang, Zhao, and LeCun 2015), Yelp dataset with binary sentiment labels (Shen et al. 2017). |
| Dataset Splits | Yes | Table 1 ("Dataset / Attribute / Train / Valid / Test") provides specific sample counts for each split, e.g., for Personality Captioning - Happy: Train 864, Valid 30, Test 39. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using BERT as an encoder, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or other library versions). |
| Experiment Setup | No | The paper states "We provide training details in the Appendix." This indicates that specific hyperparameters and detailed experimental setup are not included in the main text provided. |