On Variational Learning of Controllable Representations for Text without Supervision

Authors: Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, our method outperforms unsupervised baselines and strong supervised approaches on text style transfer, and is capable of performing more flexible fine-grained control over text generation than existing methods."
Researcher Affiliation | Collaboration | "1Borealis AI 2McGill University 3Canada CIFAR Chair, Mila. Correspondence to: Peng Xu <peng.z.xu@borealisai.com>."
Pseudocode | No | The paper describes the proposed method in prose but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | "The code to reproduce our results can be found in https://github.com/BorealisAI/CP-VAE"
Open Datasets | Yes | "To perform unsupervised sentiment manipulation, we use the Yelp restaurant reviews dataset and the same data split following Li et al. (2018)."
Dataset Splits | Yes | "To perform unsupervised sentiment manipulation, we use the Yelp restaurant reviews dataset and the same data split following Li et al. (2018)."
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software components such as GPT-2 and GloVe embeddings as models and resources used, but does not specify the programming languages, libraries, or solvers, with version numbers, that would be required for reproducibility.
Experiment Setup | Yes | "Detailed configurations including the hyperparameters, model architecture, training regimes, and decoding strategy are found in Appendix C."
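
Since the paper itself contains no pseudocode (see the Pseudocode row above), the sketch below illustrates one plausible reading of the paper's core idea: constraining part of the latent code's posterior mean to a probability simplex spanned by learned basis vectors. This is a hypothetical illustration, not the authors' method; the class name, dimensions, and layer choices are all assumptions, and the authors' actual implementation lives at https://github.com/BorealisAI/CP-VAE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexConstrainedPosterior(nn.Module):
    """Hypothetical sketch (not the authors' released code): the posterior
    mean of a latent code is forced to be a convex combination of K learned
    basis vectors, so it always lies on a probability simplex."""

    def __init__(self, hidden_dim: int, latent_dim: int, num_bases: int = 10):
        super().__init__()
        self.bases = nn.Parameter(torch.randn(num_bases, latent_dim))  # learned basis vectors
        self.to_logits = nn.Linear(hidden_dim, num_bases)   # mixture weights over the bases
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # diagonal Gaussian log-variance

    def forward(self, h: torch.Tensor):
        weights = F.softmax(self.to_logits(h), dim=-1)  # nonnegative, sums to 1: on the simplex
        mean = weights @ self.bases                     # convex combination of basis vectors
        logvar = self.to_logvar(h)
        z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)  # reparameterization trick
        return z, mean, logvar
```

Under this reading, fine-grained control over generation would amount to steering the simplex weights toward particular vertices at decoding time.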
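
Relatedly, because the Software Dependencies row notes that no libraries or versions are pinned, anyone reproducing the results has to make concrete assumptions like the ones below. The package choices (Hugging Face transformers for GPT-2, gensim for GloVe) and the GloVe dimension are our guesses, not something the paper specifies.

```python
# Assumed, unpinned dependencies: the paper names GPT-2 and GloVe,
# but not the libraries or versions used to load them.
from transformers import GPT2LMHeadModel, GPT2Tokenizer  # assumption: Hugging Face transformers
import gensim.downloader  # assumption: gensim's downloader for pretrained GloVe vectors

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # load GPT-2 tokenizer and weights
model = GPT2LMHeadModel.from_pretrained("gpt2")
glove = gensim.downloader.load("glove-wiki-gigaword-100")  # 100-dim GloVe; dimension is a guess
```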