Age Estimation Using Expectation of Label Distribution Learning
Authors: Bin-Bin Gao, Hong-Yu Zhou, Jianxin Wu, Xin Geng
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The effectiveness of our approach has been demonstrated on apparent and real age estimation tasks. Our method achieves new state-of-the-art results using the single model with 36× fewer parameters and a 2.6× reduction in inference time. Moreover, our method can achieve comparable results as the state-of-the-art even though model parameters are further reduced to 0.9M (3.8MB disk storage). We also analyze that Ranking methods are implicitly learning label distributions. In this section, we conduct experiments to validate the effectiveness of the proposed DLDL-v2 on three benchmark age datasets, based on the open source framework Torch7. |
| Researcher Affiliation | Academia | Bin-Bin Gao1, Hong-Yu Zhou1, Jianxin Wu1, Xin Geng2 1 National Key Laboratory for Novel Software Technology, Nanjing University, China 2 MOE Key Laboratory of Computer Network and Information Integration, Southeast University, China {gaobb,zhouhy,wujx}@lamda.nju.edu.cn, xgeng@seu.edu.cn |
| Pseudocode | No | The paper describes the framework with equations and diagrams but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and pre-trained models will be available at http://lamda.nju.edu.cn/gaobb/Projects/DLDL-v2.html. |
| Open Datasets | Yes | ChaLearn15 is from the first competition track ChaLearn LAP 2015 [Escalera et al., 2015]. ChaLearn16 [Escalera et al., 2016] is an extension of ChaLearn15. Morph is the largest publicly available real age dataset [Ricanek and Tesafaye, 2006]. |
| Dataset Splits | Yes | The dataset has 4699 images and is split into 2476 training, 1136 validation and 1087 testing images. For each image, its mean age and the corresponding standard deviation are given. Since the ground-truth for testing images is not released, we follow the protocol from [Rothe et al., 2018; Gao et al., 2017] to train on the training set and evaluate on the validation set. ChaLearn16... They are divided into three subsets, including 4113 training, 1500 validation and 1978 testing images. Morph... we randomly divide the whole dataset into two parts, 80% of the whole dataset for training and the remaining 20% for testing. |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA M40 GPU. |
| Software Dependencies | Yes | Experiments are based on the open source framework Torch7. Code and pre-trained models will be available at http://lamda.nju.edu.cn/gaobb/Projects/DLDL-v2.html. We measure the speed on one M40 GPU with batch size 32 accelerated by cuDNN v5.1. |
| Experiment Setup | Yes | All networks are optimized by Adam [Kingma and Ba, 2015], with β1 = 0.9, β2 = 0.999 and ϵ = 10⁻⁸. The initial learning rate is 0.001, and it is decreased by a factor of 10 every 30 epochs. Each model is trained for 60 epochs using mini-batches of 128. |
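The reported experiment setup can be sketched as a small, framework-agnostic Python snippet. This is a minimal illustration of the stated hyper-parameters and the step learning-rate schedule; the function and constant names are illustrative and do not come from the paper or its code release:

```python
# Hyper-parameters as reported in the paper (Adam, Kingma & Ba 2015).
ADAM_CONFIG = {"beta1": 0.9, "beta2": 0.999, "epsilon": 1e-8}
BATCH_SIZE = 128
TOTAL_EPOCHS = 60


def learning_rate(epoch, base_lr=1e-3, decay_factor=10, step=30):
    """Step schedule: start at 0.001, divide by 10 every 30 epochs."""
    return base_lr / (decay_factor ** (epoch // step))


# Per-epoch learning rates over the full 60-epoch run:
schedule = [learning_rate(e) for e in range(TOTAL_EPOCHS)]
```

With these settings the schedule uses 0.001 for epochs 0–29 and 0.0001 for epochs 30–59, which matches "decreased by a factor of 10 every 30 epochs" over a 60-epoch run.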