Unsupervised Stylish Image Description Generation via Domain Layer Norm
Authors: Cheng-Kuan Chen, Zhufeng Pan, Ming-Yu Liu, Min Sun
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental validation and user study on various stylish image description generation tasks are conducted to show the competitive advantages of the proposed model. |
| Researcher Affiliation | Collaboration | 1Department of Electrical Engineering, National Tsing Hua University 2NVIDIA |
| Pseudocode | No | The paper describes the model architecture and training process using diagrams and mathematical equations, but it does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions 'The implementation details are in the supplementary' but neither states that source code is released nor provides a link to a repository. |
| Open Datasets | Yes | We use paragraphs released in (Krause et al. 2017) (VG-Para) as our source domain dataset. ... We use humor and romance novel collections in Book Corpus (Zhu et al. 2015). |
| Dataset Splits | Yes | We use pre-split data which contain 14575, 2489 and 2487 for training, validation and testing. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper describes the use of CNNs, RNNs, Skip-Thought Vectors, and LN-LSTM (layer-normalized LSTM), but does not provide version numbers for any software dependencies such as programming languages, libraries, or frameworks. |
| Experiment Setup | No | The paper states 'The implementation details are in the supplementary,' but the main text does not include specific hyperparameters (e.g., learning rate, batch size) or other detailed experimental setup configurations. |
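
Since the paper provides no pseudocode (see the Pseudocode row above), the snippet below is a minimal illustrative sketch of the idea named in the title: a layer normalization whose scale and shift parameters are chosen per style domain while the rest of the network is shared. The class name `DomainLayerNorm`, the PyTorch framing, and the exact parameterization are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DomainLayerNorm(nn.Module):
    """Layer norm with one (gamma, beta) pair per style domain.

    Illustrative sketch only: the exact formulation in the paper may differ;
    here only the affine parameters vary across domains (e.g. factual,
    humorous, romantic) while normalization statistics are computed as usual.
    """

    def __init__(self, hidden_size: int, num_domains: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_domains, hidden_size))
        self.beta = nn.Parameter(torch.zeros(num_domains, hidden_size))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        # Standard layer-norm statistics over the feature dimension.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        # Domain-specific affine transform selects the output style.
        return self.gamma[domain] * x_hat + self.beta[domain]


if __name__ == "__main__":
    dln = DomainLayerNorm(hidden_size=512, num_domains=3)
    h = torch.randn(4, 512)        # hypothetical LSTM hidden states
    factual = dln(h, domain=0)     # source-domain (factual) normalization
    humorous = dln(h, domain=1)    # target-domain (humorous) normalization
```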