Predicting Aesthetic Score Distribution Through Cumulative Jensen-Shannon Divergence
Authors: Xin Jin, Le Wu, Xiaodong Li, Siyu Chen, Siwei Peng, Jingying Chi, Shiming Ge, Chenggen Song, Geng Zhao
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on large scale aesthetic dataset demonstrate the effectiveness of our introduced CJS-CNN in this task. (A hedged sketch of the cumulative Jensen-Shannon divergence named here follows this table.) |
| Researcher Affiliation | Academia | (1) Department of Computer Sci. and Tech., Beijing Electronic Science and Technology Institute, Beijing 100070, China; (2) Department of Info. Sec., Beijing Electronic Science and Technology Institute, Beijing 100070, China; (3) College of Info. Sci. and Tech., Beijing University of Chemical Technology, Beijing 100029, China; (4) Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China |
| Pseudocode | No | No structured pseudocode or algorithm blocks labeled "Algorithm" or "Pseudocode" were found in the paper. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing the source code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | Yes | Experimental results on large scale aesthetic dataset demonstrate the effectiveness of our introduced CJS-CNN in this task. ... Images are from the AVA dataset (Murray, Marchesotti, and Perronnin 2012), which contains a list of photo IDs from www.dpchallenge.com. |
| Dataset Splits | No | The paper states: 'The training and test sets contain 235,599 and 19,930 images respectively' (roughly a 92%/8% split of the 255,529 images used). It does not explicitly mention a separate validation split with its size or percentage for the main experiments. |
| Hardware Specification | Yes | The training time is about 3 days using GTX980-Ti GPU and about 2 days using Titan X Pascal GPU. |
| Software Dependencies | No | The paper states: 'We use the Caffe framework (Jia et al. 2014) to train and test our models.' However, it does not specify a version number for Caffe or any other software dependencies. |
| Experiment Setup | Yes | We use the Caffe framework (Jia et al. 2014) to train and test our models. The learning policy is set to step. Stochastic gradient descent is used to train our model with a mini-batch size of 48 images, a momentum of 0.9, a gamma of 0.5 and a weight decay of 0.0005. The max number of iterations is 480000. (A hedged Caffe solver sketch assembling these values follows this table.) |
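The CJS-CNN named in the Research Type row regresses a score distribution and is trained against a cumulative Jensen-Shannon divergence. The sketch below is a numerical reconstruction from the title and standard definitions of cumulative divergences, assuming 10 AVA score bins; the paper's exact loss (including any reliability weighting of the ground truth) may differ, so treat the function name and form as assumptions rather than the authors' implementation.

```python
import numpy as np

def cumulative_js_divergence(p, q, eps=1e-12):
    """Hedged sketch of a cumulative Jensen-Shannon divergence between
    two discrete score distributions (e.g., normalized histograms over
    the 10 AVA score bins). Reconstructed from standard definitions;
    not guaranteed to match the paper's exact formulation."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    # Compare cumulative distribution functions instead of densities.
    P, Q = np.cumsum(p), np.cumsum(q)
    M = 0.5 * (P + Q)

    def ckl(F, G):
        # Cumulative KL divergence between CDFs; the (G - F) term keeps
        # every summand non-negative, since x*log(x/y) + y - x >= 0.
        return np.sum(F * np.log((F + eps) / (G + eps)) + G - F)

    return 0.5 * ckl(P, M) + 0.5 * ckl(Q, M)

# Usage with two hypothetical 10-bin rating histograms:
p = np.array([0, 1, 2, 5, 10, 20, 30, 20, 8, 4], dtype=np.float64)
q = np.array([1, 2, 4, 8, 15, 25, 25, 12, 6, 2], dtype=np.float64)
print(cumulative_js_divergence(p / p.sum(), q / q.sum()))
```

Because it operates on CDFs, such a divergence is sensitive to how far probability mass is displaced along the ordered score axis, which is the property that makes cumulative divergences attractive for ordinal aesthetic scores.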
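The hyperparameters quoted in the Experiment Setup row map directly onto a Caffe solver definition. Below is a minimal pycaffe sketch that assembles them; `base_lr`, `stepsize`, and the file names are assumptions, since the paper does not report them, and the mini-batch size of 48 would be set in the net's data layer rather than in the solver.

```python
# Hedged sketch of a Caffe solver matching the reported hyperparameters.
# Requires pycaffe; values marked ASSUMPTION are not stated in the paper.
from caffe.proto import caffe_pb2

solver = caffe_pb2.SolverParameter()
solver.net = "cjs_cnn_train_val.prototxt"  # hypothetical net definition file
solver.lr_policy = "step"   # "The learning policy is set to step."
solver.gamma = 0.5          # learning rate multiplied by 0.5 at each step
solver.stepsize = 100000    # ASSUMPTION: step interval not reported
solver.base_lr = 0.001      # ASSUMPTION: initial learning rate not reported
solver.momentum = 0.9
solver.weight_decay = 0.0005
solver.max_iter = 480000

# Serialize to the text format Caffe reads from disk.
with open("solver.prototxt", "w") as f:
    f.write(str(solver))
```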