News Content Completion with Location-Aware Image Selection
Authors: Zhengkun Zhang, Jun Wang, Adam Jatowt, Zhe Sun, Shao-Ping Lu, Zhenglu Yang
AAAI 2021, pp. 14498–14505 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the consistent superiority of the proposed framework in image selection. ... Experiments Datasets and Implement Details We conduct experiments on two real-world datasets: (1) NYTimes800k ... (2) MSMO ... |
| Researcher Affiliation | Academia | Zhengkun Zhang1, Jun Wang2, Adam Jatowt3, Zhe Sun4*, Shao-Ping Lu1, Zhenglu Yang1* 1TKLNDST, CS, Nankai University, China, 2Ludong University, China, 3Kyoto University, Japan, 4Computational Engineering Applications Unit, RIKEN, Japan |
| Pseudocode | No | The paper describes the proposed method in narrative text and mathematical equations, but it does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on two real-world datasets: (1) NYTimes800k (Tran, Mathews, and Xie 2020) consists of 444,914 articles collected from the New York Times public API (https://developer.nytimes.com/apis). ... (2) MSMO (Zhu et al. 2018) contains 307,993 articles collected from the Daily Mail website (http://www.dailymail.co.uk). |
| Dataset Splits | Yes | We use the same data split setting as in (Tran, Mathews, and Xie 2020), that is, the training and validation splits contain 433,561 and 2,978 articles, respectively. We report results of the test set with 8,375 articles. ... The number of documents in our training and validation splits are 287,467 and 10,265, respectively. Similar to (Zhu et al. 2018), we compute the results on the test set with 10,261 articles. |
| Hardware Specification | No | The paper mentions training the model but does not provide specific details about the hardware used (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper mentions using VGG19, RoBERTa model, Transformer, and Adam optimizer, but does not provide specific version numbers for these software components or any underlying libraries/frameworks. |
| Experiment Setup | Yes | The size and the number of the transformer encoder are 512 and 1, respectively, with a dropout of 0.1. ... We train our model using Adam optimizer with a mini-batch size of 16 for 50 epochs on each dataset. The initial learning rate is 0.0001, decayed by 2 every 10 epochs. |
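
For readers attempting a reproduction, the reported hyperparameters map onto a standard PyTorch training configuration roughly as follows. This is a minimal sketch, not the authors' code (which is not released): the encoder stands in for their unreleased model, and the number of attention heads (`nhead=8`) is an assumption, as the paper does not report it. The learning-rate schedule interprets "decayed by 2 every 10 epochs" as multiplying the rate by 0.5 every 10 epochs.

```python
import torch
import torch.nn as nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

# Values quoted from the paper's experiment setup.
D_MODEL = 512      # size of the transformer encoder
NUM_LAYERS = 1     # number of encoder layers
DROPOUT = 0.1
BATCH_SIZE = 16
EPOCHS = 50
INIT_LR = 1e-4

# Placeholder encoder; nhead=8 is an assumption (not reported in the paper).
encoder_layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8, dropout=DROPOUT)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=NUM_LAYERS)

optimizer = Adam(encoder.parameters(), lr=INIT_LR)
# "decayed by 2 every 10 epochs" -> halve the learning rate every 10 epochs.
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(EPOCHS):
    # ... iterate over mini-batches of size BATCH_SIZE, compute the loss,
    # call loss.backward() and optimizer.step() here ...
    scheduler.step()
```

Under this reading, the learning rate would fall from 1e-4 to 6.25e-6 over the 50 training epochs; since the paper gives no hardware details or library versions, any such reconstruction remains approximate.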