Label Enhancement for Label Distribution Learning via Prior Knowledge
Authors: Yongbiao Gao, Yu Zhang, Xin Geng
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed approach outperforms the state-of-the-art methods in both age estimation and image emotion recognition. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University, Nanjing, China {gaoyb, zhang_yu, xgeng}@seu.edu.cn |
| Pseudocode | No | The paper describes its methodology in text and mathematical formulas but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about the availability of open-source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | Two datasets are used in this application. The first one is the FG-NET Aging dataset [Lanitis et al., 2002]... The second dataset is the much larger MORPH dataset [Ricanek and Tesafaye, 2006]... We execute our experiments on two image emotion distribution datasets, Flickr LDL and Twitter LDL [Yang et al., 2017b] |
| Dataset Splits | No | We randomly select 80% for training and the remaining 20% for testing. The paper does not explicitly mention a separate validation dataset split (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions deep learning models like VGGNET and algorithms like Q-learning, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the experiments. |
| Experiment Setup | Yes | For both models, the learning rate is 0.001, the batch size is 64, and the discount factor γ is 0.9; the size of the prioritized replay is 5000. The ϵ-greedy method is used to select actions for exploration (see the training sketch after the table). |
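
The Dataset Splits row reports a simple 80/20 random partition with no validation set. Below is a minimal sketch of such a split; the paper releases no code, so the function name, seeding, and index-based interface are assumptions for illustration only.

```python
# Hypothetical sketch of the 80/20 random split described in the Dataset Splits row.
# The paper does not publish code, so the interface and seed handling are assumptions.
import numpy as np

def random_split(num_samples: int, train_ratio: float = 0.8, seed: int = 0):
    """Return shuffled train/test index arrays (no validation split is mentioned)."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_samples)
    cut = int(train_ratio * num_samples)
    return indices[:cut], indices[cut:]

# Example: split a dataset of 1,000 images into 800 training / 200 test indices.
train_idx, test_idx = random_split(1000)
```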
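
The Experiment Setup row lists standard deep Q-learning hyperparameters (learning rate, batch size, discount factor, prioritized-replay size) and ϵ-greedy exploration. The sketch below shows how that configuration might look in a minimal ϵ-greedy action-selection routine; the Q-value interface, the placeholder exploration rate, and all variable names are assumptions, not the authors' implementation.

```python
# Minimal sketch of ϵ-greedy action selection using the reported hyperparameters
# (learning rate 0.001, batch size 64, discount γ = 0.9, prioritized-replay size 5000).
# The Q-network is represented only by its output Q-values; this is not the paper's code.
import random
import numpy as np

LEARNING_RATE = 1e-3    # reported learning rate
BATCH_SIZE = 64         # reported batch size
GAMMA = 0.9             # reported discount factor
REPLAY_CAPACITY = 5000  # reported prioritized-replay size
EPSILON = 0.1           # exploration rate; the paper does not state this value

def epsilon_greedy(q_values: np.ndarray, epsilon: float = EPSILON) -> int:
    """Pick a random action with probability ε, otherwise the greedy (argmax) action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return int(np.argmax(q_values))

# Example: given Q-values for three candidate actions, the greedy choice is
# action 2 unless the ε-branch triggers a random exploration step.
print(epsilon_greedy(np.array([0.2, 0.5, 0.9])))
```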