Content preserving text generation with attribute controls

Authors: Lajanugen Logeswaran, Honglak Lee, Samy Bengio

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through quantitative, qualitative and human evaluations we demonstrate that our model is capable of generating fluent sentences that better reflect the conditioning information compared to prior methods.
Researcher Affiliation | Collaboration | University of Michigan; Google Brain
Pseudocode | No | The paper describes the model narratively and with equations but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither includes an unambiguous statement from the authors about releasing source code for the described methodology nor provides a direct link to a code repository.
Open Datasets | Yes | We use the restaurant reviews dataset from [22]. The dataset is a filtered version of the Yelp reviews dataset. Similar to [18], we use the IMDB movie review corpus from [33].
Dataset Splits | Yes | The aligned data was split as 17k pairs for training and 2k, 1k pairs respectively for development and test.
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU/CPU models, processor types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper mentions general components like GRU RNNs and GloVe embeddings but does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, or frameworks) used in the experiments.
Experiment Setup | Yes | For all tasks we use a GRU (Gated Recurrent Unit [28]) RNN with hidden state size 500 as the encoder Genc. Attribute embeddings of size 200 and a decoder GRU with hidden state size 700 were used (these parameters are identical to [22]). The interpolation probability γ ∈ {0, 0.1, 0.2, ..., 1.0} and the weight of the adversarial loss λ ∈ {0.5, 1.0, 1.5} are chosen based on the validation metrics above.
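The quoted hyperparameters can be sketched as a minimal PyTorch module. This is not the authors' implementation; only the dimensions (encoder GRU hidden size 500, attribute embeddings of size 200, decoder GRU hidden size 700) come from the paper, while the class name, vocabulary size, word-embedding size, and the way the decoder's initial state is formed are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContentAttributeSketch(nn.Module):
    """Hedged sketch of the reported model dimensions.

    Only enc_hidden=500, attr_emb=200, dec_hidden=700 are taken from the
    paper; everything else (vocab size, word-embedding size, init scheme)
    is a placeholder assumption.
    """
    def __init__(self, vocab_size=10000, word_emb=300,
                 enc_hidden=500, attr_emb=200, dec_hidden=700,
                 num_attr_values=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_emb)
        # Encoder Genc: GRU with hidden state size 500
        self.encoder = nn.GRU(word_emb, enc_hidden, batch_first=True)
        # Attribute embeddings of size 200
        self.attr_embed = nn.Embedding(num_attr_values, attr_emb)
        # Decoder GRU with hidden state size 700; here its initial state
        # is projected from [content code; attribute embedding]
        self.init_proj = nn.Linear(enc_hidden + attr_emb, dec_hidden)
        self.decoder = nn.GRU(word_emb, dec_hidden, batch_first=True)
        self.out = nn.Linear(dec_hidden, vocab_size)

    def forward(self, src_tokens, attr_ids, tgt_tokens):
        _, h = self.encoder(self.embed(src_tokens))      # content code
        a = self.attr_embed(attr_ids)                    # attribute vector
        h0 = torch.tanh(self.init_proj(torch.cat([h[-1], a], dim=-1)))
        dec_out, _ = self.decoder(self.embed(tgt_tokens), h0.unsqueeze(0))
        return self.out(dec_out)                         # per-step logits

model = ContentAttributeSketch()
logits = model(torch.randint(0, 10000, (4, 12)),   # source sentences
               torch.tensor([0, 1, 0, 1]),         # attribute labels
               torch.randint(0, 10000, (4, 12)))   # decoder inputs
print(tuple(logits.shape))
```

Running the snippet prints the logits shape `(batch, seq_len, vocab_size)`, confirming the pieces wire together at the quoted sizes.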