Variational Memory Encoder-Decoder

Authors: Hung Le, Truyen Tran, Thin Nguyen, Svetha Venkatesh

NeurIPS 2018

Reproducibility variables, results, and LLM responses:

Research Type: Experimental. "We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metric-based and qualitative evaluations."
Researcher Affiliation: Academia. "Hung Le, Truyen Tran, Thin Nguyen and Svetha Venkatesh, Applied AI Institute, Deakin University, Geelong, Australia. {lethai,truyen.tran,thin.nguyen,svetha.venkatesh}@deakin.edu.au"
Pseudocode: Yes. "Algorithm 1 VMED Generation" (a hedged illustrative sketch of one such generation step appears after these entries).
Open Source Code: Yes. "Source code is available at https://github.com/thaihungle/VMED"
Open Datasets: Yes. "We perform experiments on two collections: The first collection includes open-domain movie transcript datasets containing casual conversations: Cornell Movies (http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html) and Open Subtitle (http://opus.nlpl.eu/OpenSubtitles.php). ..."
Dataset Splits: Yes. "For each dataset, we use 10,000 conversations for validating and 10,000 for testing." (A split sketch consistent with this description also appears after these entries.)
Hardware Specification: No. The paper does not specify any particular hardware (e.g., GPU/CPU models, memory, or computing platform) used to conduct the experiments.
Software Dependencies: No. The paper mentions general components such as RNNs, LSTMs, and GloVe word embeddings, but it does not give exact version numbers for these or any other software dependencies.
Experiment Setup: No. The paper mentions trying the model with a "different number of modes (K = 1, 2, 3, 4)", which is a hyperparameter, but it states that "Details of dataset descriptions and model implementations are included in Supplementary material", implying that other crucial setup details such as learning rates, batch sizes, and optimizer settings are absent from the main text.
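
The entries above name "Algorithm 1 VMED Generation" and a mode count K tried at 1, 2, 3, 4. As a reading aid, here is a minimal sketch of one such decoding step, assuming that K memory read vectors parameterize a K-mode Mixture-of-Gaussians prior over a latent variable that conditions the decoder. This is an illustration, not the authors' Algorithm 1: the MoGPriorStep name, the uniform mode choice, and all dimensions are assumptions.

```python
# Minimal sketch (NOT the paper's Algorithm 1): one VMED-style decoding step
# in which K memory read vectors parameterize a K-mode Mixture-of-Gaussians
# prior over the latent z_t. Module name, dimensions, and the uniform mode
# choice are illustrative assumptions.
import torch
import torch.nn as nn

class MoGPriorStep(nn.Module):
    def __init__(self, read_dim: int, z_dim: int, num_modes: int):
        super().__init__()
        self.num_modes = num_modes
        # Shared mean/log-variance heads applied to each mode's read vector.
        self.mu = nn.Linear(read_dim, z_dim)
        self.logvar = nn.Linear(read_dim, z_dim)

    def forward(self, reads: torch.Tensor) -> torch.Tensor:
        # reads: (batch, K, read_dim), one read vector per mixture mode.
        batch = reads.size(0)
        mu = self.mu(reads)                       # (batch, K, z_dim)
        std = (0.5 * self.logvar(reads)).exp()    # (batch, K, z_dim)
        # Pick one mode per example (uniformly, as a simplification),
        # then sample within that mode via the reparameterization trick.
        k = torch.randint(self.num_modes, (batch,))
        idx = k.view(batch, 1, 1).expand(-1, 1, mu.size(-1))
        mu_k = mu.gather(1, idx).squeeze(1)       # (batch, z_dim)
        std_k = std.gather(1, idx).squeeze(1)
        return mu_k + std_k * torch.randn_like(std_k)

# Usage: K = 3 modes; z_t would condition the decoder at this timestep.
prior = MoGPriorStep(read_dim=64, z_dim=32, num_modes=3)
z_t = prior(torch.randn(8, 3, 64))  # shape (8, 32)
```

In the full model the read vectors would come from an external memory updated during decoding, and the sampled z_t would feed a recurrent decoder at each step; the K = 1, 2, 3, 4 setting the paper mentions corresponds to num_modes here.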
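
The dataset-split entry reports 10,000 held-out conversations each for validation and testing. A shuffle-then-slice split consistent with that description might look like the sketch below; the paper defers implementation details to the supplementary material, so the seeding and the split_conversations name are assumptions.

```python
# Hypothetical split utility matching the reported held-out sizes:
# 10,000 conversations for validation, 10,000 for testing, rest for training.
# The paper does not state how conversations were sampled; a seeded shuffle
# is assumed purely for illustration.
import random

def split_conversations(conversations, n_valid=10_000, n_test=10_000, seed=0):
    convs = list(conversations)
    random.Random(seed).shuffle(convs)
    valid = convs[:n_valid]
    test = convs[n_valid:n_valid + n_test]
    train = convs[n_valid + n_test:]
    return train, valid, test
```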