Turbo Learning for CaptionBot and DrawingBot
Authors: Qiuyuan Huang, Pengchuan Zhang, Dapeng Wu, Lei Zhang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the COCO dataset demonstrate that the proposed turbo learning can significantly improve the performance of both CaptionBot and DrawingBot by a large margin. |
| Researcher Affiliation | Collaboration | Qiuyuan Huang, Microsoft Research, Redmond, WA, USA ... Dapeng Wu, University of Florida, Gainesville, FL, USA |
| Pseudocode | No | The paper describes the training procedure in text and uses figures to illustrate architectures, but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | To evaluate the performance of our proposed approach, we use the COCO dataset [36]. ... [36] COCO, Coco dataset for image captioning, http://mscoco.org/dataset/#download, 2017. |
| Dataset Splits | Yes | We use the same pre-defined splits as in [8, 1]: 113,287 images for training, 5,000 images for validation, and 5,000 images for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper states 'The model is implemented in TensorFlow [34]' but does not provide a specific version number for TensorFlow or any other software libraries. |
| Experiment Setup | Yes | We empirically set β1 = β2 = 0.5. ... We set β1 = 0.85. |
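
The Experiment Setup row above quotes only the loss weights β1 = β2 = 0.5 from the paper. As a purely illustrative aid, the sketch below shows one way such weights could combine the CaptionBot (image-to-text) loss and the DrawingBot (text-to-image) loss into a single training objective; the function name `turbo_loss`, the TensorFlow 2 API usage, and the exact form of the combination are assumptions for illustration, not the authors' released implementation.

```python
import tensorflow as tf

# Loss weights as quoted in the paper's experiment setup (beta1 = beta2 = 0.5).
BETA1 = 0.5
BETA2 = 0.5

def turbo_loss(caption_loss: tf.Tensor, drawing_loss: tf.Tensor) -> tf.Tensor:
    """Hypothetical weighted combination of the captioning loss and the
    text-to-image loss; the paper's actual objective may include additional
    feedback (turbo) terms not shown here."""
    return BETA1 * caption_loss + BETA2 * drawing_loss

# Usage sketch with dummy scalar losses standing in for the two models' losses.
joint = turbo_loss(tf.constant(1.2), tf.constant(0.8))
print(float(joint))  # 1.0 with the weights above
```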