Deconvolutional Paragraph Representation Learning

Authors: Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, Lawrence Carin

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show empirically that, compared to RNNs, our framework is better at reconstructing and correcting long paragraphs. Quantitative evaluation on semi-supervised text classification and summarization tasks demonstrates the potential for better utilization of long unlabeled text data." (Section 4, Experiments)
Researcher Affiliation | Academia | Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao, Lawrence Carin; Department of Electrical & Computer Engineering, Duke University
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide concrete access (a link or an explicit statement) to source code for the described methodology.
Open Datasets | Yes | "We use BLEU-4 and ROUGE-1, 2 in our evaluation, in alignment with [52]. The comparison is performed on the Hotel Reviews dataset, following the experimental setup from [52], i.e., we only keep reviews with sentence length ranging from 50 to 250 words, resulting in 348,544 training data samples and 39,023 testing data samples. ... We consider three large-scale document classification datasets: DBPedia, Yahoo! Answers and Yelp Review Polarity [57]." (A hedged sketch of this length filter appears after the table.)
Dataset Splits | Yes | "...we only keep reviews with sentence length ranging from 50 to 250 words, resulting in 348,544 training data samples and 39,023 testing data samples. ... The partition of training, validation and test sets for all datasets follows the settings from [57]."
Hardware Specification | Yes | "Compared to standard LSTM-based RNN sequence autoencoders with roughly the same number of parameters, computations in our case are considerably faster (see experiments) using a single NVIDIA TITAN X GPU."
Software Dependencies | No | The paper mentions "cuDNN primitives [35]" but does not specify a version number for this or any other software dependency.
Experiment Setup | Yes | "Filter size, stride and word embedding dimension are set to h = 5, r_l = 2 for l = 1, ..., 3, and k = 300, respectively. ... We set α_min = 0.01 in the experiments. ... The dropout rate is set to 50%. ... For character-level correction, we set the dimension of h to 900." (A hedged architecture sketch using these values appears after the table.)
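
The quoted Hotel Reviews setup (keep reviews of 50 to 250 words, then split into training and test sets) can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: the toy corpus and the 50% split fraction are stand-ins, and scikit-learn's `train_test_split` substitutes for the exact partition of [52], which the paper follows but does not restate.

```python
from sklearn.model_selection import train_test_split

def within_length(text: str, lo: int = 50, hi: int = 250) -> bool:
    """True if the review contains between `lo` and `hi` words, inclusive."""
    return lo <= len(text.split()) <= hi

# Toy stand-in for the Hotel Reviews corpus; on the real data this filter
# yields 348,544 training and 39,023 testing samples per the paper.
reviews = [("word " * n).strip() for n in (10, 60, 120, 300)]
filtered = [r for r in reviews if within_length(r)]  # keeps the 60- and 120-word reviews
train, test = train_test_split(filtered, test_size=0.5, random_state=0)
```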
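The hyperparameters quoted in the Experiment Setup row (filter size h = 5, stride r_l = 2 over three layers, embedding dimension k = 300, 50% dropout) imply a convolutional encoder mirrored by a deconvolutional (transposed-convolution) decoder. The PyTorch sketch below wires those values together; the vocabulary size and per-layer channel widths are assumptions, and it omits the paper's cosine-similarity output layer and the α_min schedule, so it should be read as a shape check rather than a reproduction.

```python
import torch
import torch.nn as nn

VOCAB = 10_000                 # assumption: vocabulary size is not fixed by the quote
K, FILTER, STRIDE = 300, 5, 2  # k = 300, h = 5, r_l = 2 (from the paper)
CHANNELS = [K, 300, 600, 500]  # per-layer channel widths: assumptions, not from the paper

class ConvDeconvAutoencoder(nn.Module):
    """Three stride-2 Conv1d layers, mirrored by three ConvTranspose1d layers."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, K)
        self.dropout = nn.Dropout(0.5)  # 50% dropout rate (from the paper)
        enc = []
        for c_in, c_out in zip(CHANNELS[:-1], CHANNELS[1:]):
            enc += [nn.Conv1d(c_in, c_out, FILTER, stride=STRIDE), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        rev = CHANNELS[::-1]
        dec = []
        for i, (c_in, c_out) in enumerate(zip(rev[:-1], rev[1:])):
            dec.append(nn.ConvTranspose1d(c_in, c_out, FILTER, stride=STRIDE))
            if i < len(rev) - 2:  # no ReLU after the final deconv layer
                dec.append(nn.ReLU())
        self.decoder = nn.Sequential(*dec)

    def forward(self, tokens):                  # tokens: (batch, seq_len) int64
        x = self.embed(tokens).transpose(1, 2)  # -> (batch, K, seq_len)
        z = self.encoder(self.dropout(x))       # latent paragraph representation
        x_hat = self.decoder(z)                 # reconstructed embedding sequence
        return z, x_hat

# Shape check on a toy batch of two 60-token paragraphs. Without output
# padding, the decoded length (53) only approximately mirrors the input (60).
z, x_hat = ConvDeconvAutoencoder()(torch.randint(0, VOCAB, (2, 60)))
```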