Visual Dialogue State Tracking for Question Generation

Authors: Wei Pang, Xiaojie Wang (AAAI 2020, pp. 11831-11838)


Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the GuessWhat?! dataset show that our model significantly outperforms existing methods and achieves new state-of-the-art performance.
Researcher Affiliation | Academia | Wei Pang, Xiaojie Wang, Center for Intelligence Science and Technology, School of Computer Science, Beijing University of Posts and Telecommunications, {pangweitf, xjwang}@bupt.edu.cn
Pseudocode | No | The paper does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | Our code and other materials will be published in the near future.
Open Datasets | Yes | We evaluate our model on the GuessWhat?! dataset introduced in (de Vries et al. 2017).
Dataset Splits | Yes | We use the standard partition of the dataset into training (70%), validation (15%) and test (15%) sets, as in (de Vries et al. 2017; Strub et al. 2017).
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions software components such as Faster R-CNN, LSTM, the Adam optimizer, REINFORCE, the VGG network, ResNet-152, and the swish activation, but does not provide version numbers for these or for other libraries and frameworks. (The swish activation is sketched after this table.)
Experiment Setup | Yes | We train the Guesser and Oracle models for 30 epochs and pre-train the QGen model for 50 epochs, using the Adam optimizer (Kingma and Ba 2015) with a learning rate of 1e-4 and a batch size of 64. ... post-train the QGen model with REINFORCE (Williams 1992; Sutton et al. 2000) for 500 epochs, using stochastic gradient descent (SGD) with a learning rate of 1e-3 and a batch size of 64. (A training-configuration sketch follows the table.)
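
The swish activation listed under Software Dependencies is the standard elementwise function swish(x) = x * sigmoid(beta * x). The following is a minimal PyTorch-style sketch for reference only; the paper does not release code, and the beta parameter (and its default of 1.0) is an assumption here.

```python
import torch


def swish(x: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Swish activation: x * sigmoid(beta * x).

    beta = 1.0 recovers the common SiLU/swish form; the paper does not
    state which variant it uses, so this default is an assumption.
    """
    return x * torch.sigmoid(beta * x)
```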
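
The hyperparameters quoted under Experiment Setup can be gathered into a single training configuration. The sketch below shows how the two optimization phases (supervised training/pre-training with Adam, REINFORCE post-training with SGD) might be set up in PyTorch. Only the optimizer choices, learning rates, batch sizes, and epoch counts come from the paper; the model placeholders, the zero/one game-success reward, and the absence of a baseline are assumptions, since the authors' code is not yet available.

```python
import torch
from torch import nn, optim

# Values quoted in the paper; everything else below is assumed.
SUPERVISED = dict(lr=1e-4, batch_size=64,
                  epochs_guesser_oracle=30, epochs_qgen_pretrain=50)
REINFORCE = dict(lr=1e-3, batch_size=64, epochs=500)


def make_supervised_optimizer(model: nn.Module) -> optim.Optimizer:
    # Adam with lr 1e-4 for Guesser/Oracle training and QGen pre-training.
    return optim.Adam(model.parameters(), lr=SUPERVISED["lr"])


def make_reinforce_optimizer(qgen: nn.Module) -> optim.Optimizer:
    # Plain SGD with lr 1e-3 for REINFORCE post-training of the QGen model.
    return optim.SGD(qgen.parameters(), lr=REINFORCE["lr"])


def reinforce_loss(log_probs: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    """Vanilla REINFORCE objective for one generated dialogue.

    log_probs: (T,) log-probabilities of the generated question tokens.
    reward: scalar, e.g. 1.0 if the Guesser identifies the correct object,
    else 0.0. The reward definition and the lack of a variance-reducing
    baseline are assumptions; the paper only states that REINFORCE is used.
    """
    return -(log_probs.sum() * reward)
```

In a full pipeline these helpers would be instantiated once per phase, with the epoch and batch-size constants above mirroring the quoted setup.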