Style Transfer from Non-Parallel Text by Cross-Alignment

Authors: Tianxiao Shen, Tao Lei, Regina Barzilay, Tommi Jaakkola

NeurIPS 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate the effectiveness of this cross-alignment method on three tasks: sentiment modification, decipherment of word substitution ciphers, and recovery of word order." |
| Researcher Affiliation | Collaboration | Tianxiao Shen (MIT CSAIL), Tao Lei (ASAPP Inc.), Regina Barzilay (MIT CSAIL), Tommi Jaakkola (MIT CSAIL); {tianxiao, regina, tommi}@csail.mit.edu, tao@asapp.com |
| Pseudocode | Yes | "Algorithm 1 Cross-aligned auto-encoder training." |
| Open Source Code | Yes | "Our code and data are available at https://github.com/shentianxiao/language-style-transfer." |
| Open Datasets | Yes | "We run experiments on Yelp restaurant reviews, utilizing readily available user ratings associated with each review." |
| Dataset Splits | No | The paper reports corpus sizes (250K negative and 350K positive sentences) and development/test sets of 100K parallel sentences, but it does not specify explicit training/validation/test splits, as percentages or exact counts, for the main training dataset. |
| Hardware Specification | No | The paper gives no details about the hardware used for the experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions RNNs with GRU cells, VAEs, the Professor-Forcing algorithm, and the Text CNN model, but it specifies no version numbers for any software libraries, frameworks, or dependencies. |
| Experiment Setup | Yes | "The hyper-parameters are set as λ = 1, γ = 0.001 and learning rate is 0.0001 for all experiments in this paper." |
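The reported training configuration can be captured in a minimal sketch. The variable names below are illustrative assumptions, not identifiers taken from the authors' repository:

```python
# Hyper-parameters as reported in the paper. The names are
# illustrative assumptions; the authors' code at
# https://github.com/shentianxiao/language-style-transfer
# may use different identifiers.
HYPERPARAMS = {
    "lambda_adv": 1.0,      # lambda: weight on the adversarial loss terms
    "gamma": 0.001,         # gamma: constant reported alongside lambda
    "learning_rate": 1e-4,  # used for all experiments in the paper
}

def describe(params: dict) -> str:
    """Format a configuration dict as a single log-friendly line."""
    return ", ".join(f"{k}={v}" for k, v in sorted(params.items()))
```

For example, `describe(HYPERPARAMS)` yields a single sorted line suitable for an experiment log.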