Revision in Continuous Space: Unsupervised Text Style Transfer without Adversarial Learning

Authors: Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, Jiancheng Lv

AAAI 2020, pp. 8376-8383 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental studies on three popular text style transfer tasks show that the proposed method significantly outperforms five state-of-the-art methods.
Researcher Affiliation | Academia | College of Computer Science, Sichuan University; Québec Artificial Intelligence Institute (Mila), Polytechnique Montréal
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/dayihengliu/Fine-Grained-Style-Transfer.
Open Datasets | Yes | We use two datasets, Yelp restaurant reviews and Amazon product reviews (He and McAuley 2016)... These datasets can be downloaded at http://bit.ly/2LHMUsl. We use the same dataset as in (Prabhumoye et al. 2018), which contains reviews from Yelp... This dataset can be downloaded at http://tts.speech.cs.cmu.edu/style_models/gender_classifier.tar.
Dataset Splits | No | The paper states 'Following their experimental settings, we use the same pre-processing steps and similar experimental configurations', referring to prior works, but it does not explicitly provide the specific training, validation, or test dataset splits (e.g., percentages or counts) within its own text.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., CPU, GPU models, or cloud instance types).
Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers (e.g., programming languages, libraries, or frameworks).
Experiment Setup | No | The paper mentions 'similar experimental configurations' borrowed from prior works but does not explicitly provide concrete hyperparameter values or detailed system-level training settings within the provided text.