IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation
Authors: Kangle Deng, Tianyi Fei, Xin Huang, Yuxin Peng
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on 3 datasets and compare with state-of-the-art methods. |
| Researcher Affiliation | Academia | Kangle Deng, Tianyi Fei, Xin Huang, and Yuxin Peng, Institute of Computer Science and Technology, Peking University, Beijing, China. pengyuxin@pku.edu.cn |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not mention releasing source code or provide any links to a code repository. |
| Open Datasets | No | The paper states, 'Since [Mittal et al., 2016] didn't make open their dataset or source codes to construct it, we construct our own KTH-4 according to the method mentioned in [Mittal et al., 2016].' While the authors build on existing datasets, there is no explicit statement or link confirming the public availability of the *modified* datasets used in the experiments. |
| Dataset Splits | No | The paper mentions experimenting on '3 datasets' but does not specify the training, validation, or test splits (e.g., percentages or sample counts) used for any of them. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU model, CPU type, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | So it is natural to perform a two-stage training process: first pre-training the text encoder and then training the whole model with the encoder fixed. |
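The two-stage procedure quoted above (pre-train the text encoder, then train the full model with the encoder fixed) can be sketched as follows. This is a hedged illustration, not the authors' code: the module shapes and names (`text_encoder`, `generator`) are placeholders, and the pre-training loop is elided.

```python
# Illustrative sketch of two-stage training: pre-train a text encoder,
# then freeze it while optimizing the rest of the model.
# Assumptions: PyTorch; toy module sizes stand in for the real architecture.
import torch
import torch.nn as nn

text_encoder = nn.GRU(input_size=50, hidden_size=64, batch_first=True)
generator = nn.Linear(64, 128)  # stand-in for the video generator

# Stage 1: pre-train the text encoder on its own objective.
# ... pre-training loop omitted ...

# Stage 2: freeze the encoder so only downstream modules are updated.
for p in text_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in generator.parameters() if p.requires_grad], lr=2e-4
)

emb = torch.randn(4, 10, 50)   # batch of 4 sentences, 10 tokens, dim 50
_, h = text_encoder(emb)       # fixed text features, shape (1, 4, 64)
out = generator(h.squeeze(0))  # trainable head, shape (4, 128)
print(out.shape)               # torch.Size([4, 128])
```

Freezing via `requires_grad = False` (and filtering the optimizer's parameter list) is the standard PyTorch idiom for "training with the encoder fixed."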