Collaborative and Attentive Learning for Personalized Image Aesthetic Assessment
Authors: Guolong Wang, Junchi Yan, Zheng Qin
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive and promising experimental results on the review-augmented benchmark corroborate the efficacy of our approach. |
| Researcher Affiliation | Academia | Guolong Wang1, Junchi Yan2 and Zheng Qin1 1 BNRist, School of Software, Tsinghua University, China 2 Shanghai Jiao Tong University |
| Pseudocode | No | The paper describes the network architecture and processing steps but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that its source code is open or publicly available. |
| Open Datasets | Yes | We evaluate the proposed method on one of the largest and most challenging datasets, i.e. the AVA dataset for visual aesthetic quality assessment (augmented by users' reviews). It contains more than 255,000 images gathered from www.dpchallenge.com |
| Dataset Splits | Yes | The hyper-parameters in our models are tuned by conducting 10-fold cross validation on the training set. We set 90% of the data as training set, and the rest is testing set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions tools like 'Core NLP' and models like 'VGG-16' but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | The input raw images are resized to 320×320... The dimension of the attention-map is 10×10×512... The hyper-parameters in our models are tuned by conducting 10-fold cross validation on the training set... We set the original learning rate as 0.005, the decay rate as 0.99, the decay step as 1000. k is set as 50, λβ is set as 1, and λz is set as 0.015. |
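The learning-rate schedule quoted above (base rate 0.005, decay rate 0.99, decay step 1000) matches a standard staircase exponential decay. The sketch below is an assumption about how such a schedule is computed; the paper does not specify its implementation, and the function name and framework-free form are mine.

```python
def decayed_lr(step, base_lr=0.005, decay_rate=0.99, decay_step=1000):
    """Staircase exponential decay, a common reading of the reported setup:
    lr = base_lr * decay_rate ** (step // decay_step)."""
    return base_lr * decay_rate ** (step // decay_step)

# At step 0 the rate is the original 0.005; after each 1000 steps it
# shrinks by a factor of 0.99.
print(decayed_lr(0))     # 0.005
print(decayed_lr(1000))  # 0.00495
```

Under this reading, the rate decays slowly: even after 100,000 steps it is 0.005 * 0.99**100 ≈ 0.00183.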