Contrastive Learning for Image Captioning
Authors: Bo Dai, Dahua Lin
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested our method on two challenging datasets, where it improves the baseline model by significant margins. |
| Researcher Affiliation | Academia | Bo Dai, Dahua Lin; Department of Information Engineering, The Chinese University of Hong Kong; db014@ie.cuhk.edu.hk, dhlin@ie.cuhk.edu.hk |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | No explicit statement about providing open-source code or a link to a code repository for the methodology was found. |
| Open Datasets | Yes | We use two large scale datasets to test our contrastive learning method. The first dataset is MSCOCO [13]... A more challenging dataset, InstaPIC-1.1M [18], is used as the second dataset... |
| Dataset Splits | Yes | The first dataset is MSCOCO [13], which contains 122,585 images for training and validation. Following splits in [15], we reserved 2,000 images for validation. ... In practice, we reserved 2,000 images from the training set for validation. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or specific computing environments with specifications) used for running experiments were mentioned in the paper. |
| Software Dependencies | No | The paper mentions using "Adam optimizer" but does not specify any software versions for programming languages, libraries, or other dependencies. |
| Experiment Setup | Yes | In all our experiments, we fixed the learning rate to be 1e-6 for all components, and used Adam optimizer. |
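The only optimizer settings the paper reports are Adam with a fixed learning rate of 1e-6. As a hedged illustration of what that configuration implies, here is a minimal pure-Python Adam update step (Kingma & Ba, 2015) using that learning rate; the `beta1`, `beta2`, and `eps` values are the common defaults and are assumptions, not taken from the paper:

```python
# Minimal Adam optimizer step, pure Python.
# lr=1e-6 matches the paper's stated setup; beta1, beta2, and eps
# below are the usual defaults and are assumptions, not from the paper.

def adam_step(params, grads, m, v, t,
              lr=1e-6, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over parallel lists of parameters and gradients.

    m, v: running first- and second-moment estimates (updated in place).
    t: 1-based step count, used for bias correction.
    """
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g        # first-moment estimate
        v[i] = beta2 * v[i] + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m[i] / (1 - beta1 ** t)              # bias-corrected moments
        v_hat = v[i] / (1 - beta2 ** t)
        new_params.append(p - lr * m_hat / (v_hat ** 0.5 + eps))
    return new_params, m, v

# Usage: one step on a single parameter with gradient 1.0.
params, m, v = [0.5], [0.0], [0.0]
params, m, v = adam_step(params, [1.0], m, v, t=1)
```

With a gradient of 1.0 the bias-corrected moments are both 1.0 at step 1, so the parameter moves by almost exactly the learning rate, illustrating how small 1e-6 steps are.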