Multimodal Storytelling via Generative Adversarial Imitation Learning
Authors: Zhiqian Chen, Xuchao Zhang, Arnold P. Boedihardjo, Jing Dai, Chang-Tien Lu
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed method is evaluated on a newly proposed storytelling dataset. To guide the model toward discovering desirable stories, manually labeled storylines are compiled for GAN training. A generator trained on one event dataset was tested on another event corpus; this experiment shows whether the generator can derive transferable storylines. Note that the different event datasets share no entities. |
| Researcher Affiliation | Collaboration | ¹Computer Science Department, Virginia Tech, Falls Church, Virginia; ²U.S. Army Corps of Engineers; ³Google Inc. |
| Pseudocode | Yes | Algorithm 1: Multimodal Imitation Storytelling |
| Open Source Code | No | The paper provides a link (https://gist.github.com/aquastar/03dadfd751f5862ea0b44bb66996b490), which is described as a "newly-proposed storytelling dataset". There is no explicit statement or link indicating that the source code for the proposed MIL-GAN methodology is publicly available. |
| Open Datasets | Yes | The proposed method is evaluated on a newly proposed storytelling dataset (https://gist.github.com/aquastar/03dadfd751f5862ea0b44bb66996b490). To guide the model toward discovering desirable stories, manually labeled storylines are compiled for GAN training. |
| Dataset Splits | No | The paper mentions a "Training set" and "Test set" with details on their content. However, it does not specify any validation set, nor does it provide explicit percentages or counts for train/validation/test splits, or details about cross-validation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU/CPU models, memory, or other computational resources. |
| Software Dependencies | No | The paper mentions software components like "Word2Vec", "VGG19", "LSTM", and "Text CNN" but does not provide specific version numbers for any of these, nor for any other libraries or programming languages used. |
| Experiment Setup | Yes | The balance parameters λ_{i=1,2,3} are all initialized to 1. After fine-tuning, good performance often appears when more weight is assigned to the text part. One good setting is [0.6, 0.3, 0.1] for [Ve, Vi, ViVe], respectively. |
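The balance-parameter setup above can be illustrated with a minimal sketch. This is not the authors' code: the loss components and the function name `combined_loss` are assumptions introduced here purely to show how the λ weights [0.6, 0.3, 0.1] for the text (Ve), image (Vi), and joint (ViVe) parts would combine.

```python
def combined_loss(loss_text, loss_image, loss_joint,
                  lambdas=(0.6, 0.3, 0.1)):
    """Weighted sum of hypothetical per-modality losses.

    The paper initializes all lambdas to 1 and reports that
    [0.6, 0.3, 0.1] for [Ve, Vi, ViVe] works well after tuning,
    i.e. more weight on the text component.
    """
    l_ve, l_vi, l_vive = lambdas
    return l_ve * loss_text + l_vi * loss_image + l_vive * loss_joint

# With all lambdas at their initial value 1, the combined loss
# reduces to the plain sum of the three components.
initial = combined_loss(0.5, 0.2, 0.1, lambdas=(1, 1, 1))

# With the tuned weights, the text loss dominates the total.
tuned = combined_loss(0.5, 0.2, 0.1)
```

With the tuned weights, the text term contributes 0.6 × loss_text, so improvements on the text modality move the objective the most, matching the paper's observation that weighting the text part more yields better performance.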