Learning Visual Sentiment Distributions via Augmented Conditional Probability Neural Network
Authors: Jufeng Yang, Ming Sun, Xiaoxiao Sun
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed methods outperform the state-of-the-art works on our large-scale datasets and other publicly available benchmarks. We evaluate our proposed methods on both of these large-scale datasets, as well as other two benchmark datasets, i.e. Abstract Paintings (Machajdik and Hanbury 2010) and Emotion6 (Peng et al. 2015). |
| Researcher Affiliation | Academia | Jufeng Yang, Ming Sun, Xiaoxiao Sun College of Computer and Control Engineering, Nankai University Tianjin, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We make the datasets publicly available to peer researchers at http://cv.nankai.edu.cn/projects/SentiLDL', which refers to datasets, not open-source code for the methodology. |
| Open Datasets | Yes | we build two new datasets, one of which is relabeled on the popular Flickr dataset and the other is collected from Twitter. These datasets contain 20,745 images with multiple affective labels... We make the datasets publicly available to peer researchers at http://cv.nankai.edu.cn/projects/SentiLDL, which will be beneficial to further researches in this field. We evaluate our proposed methods on both of these large-scale datasets, as well as other two benchmark datasets, i.e. Abstract Paintings (Machajdik and Hanbury 2010) and Emotion6 (Peng et al. 2015). |
| Dataset Splits | No | The paper specifies a train/test split ('we randomly select 80% of images as training set and the others for testing') but does not explicitly mention a separate validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It mentions 'VGGNet' but not the hardware it ran on. |
| Software Dependencies | No | The paper mentions algorithms like 'RPROP algorithm' and models like 'VGGNet', but does not provide specific software names with version numbers (e.g., 'PyTorch 1.9', 'TensorFlow 2.0') required for replication. |
| Experiment Setup | Yes | For fair comparison, the numbers of hidden layer units of CPNN, BCPNN and ACPNN are set to the same value 100. In our experiments, the max value of v is set to 5. As SentiBank (Borth et al. 2013) has shown its superiority to low-level features, we use it to extract mid-level features in our experiments. Meanwhile, deep features extracted with VGGNet (Simonyan and Zisserman 2015) are also applied. For each image, we use the last fully connected layer output as the sentiment representation and reduce it to 280 dimensions using principal component analysis (PCA). |
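The preprocessing the paper describes (a random 80%/20% train/test split, then PCA reduction of the final fully connected layer features to 280 dimensions) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic feature matrix stands in for VGGNet fc-layer outputs, and the PCA is implemented directly via SVD with numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the real data: 1,000 images, each with a
# 4096-dim feature vector (the paper uses VGGNet's last fully
# connected layer output as the sentiment representation).
features = rng.standard_normal((1000, 4096)).astype(np.float32)

# Random 80%/20% train/test split, as described in the paper.
idx = rng.permutation(len(features))
n_train = int(0.8 * len(features))
train = features[idx[:n_train]]
test = features[idx[n_train:]]

# PCA to 280 dimensions via SVD, fit on the training set only so the
# test set does not influence the learned projection.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:280]                       # top-280 principal axes
train_280 = (train - mean) @ components.T   # (800, 280)
test_280 = (test - mean) @ components.T     # (200, 280)
```

Fitting PCA on the training split alone is a standard precaution the paper does not spell out; the paper only states the 80/20 split and the 280-dimension target.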