Harnessing Synthesized Abstraction Images to Improve Facial Attribute Recognition

Authors: Keke He, Yanwei Fu, Wuhao Zhang, Chengjie Wang, Yu-Gang Jiang, Feiyue Huang, Xiangyang Xue

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive evaluations conducted on CelebA and LFWA benchmark datasets show that state-of-the-art performance is achieved." and "We evaluate our proposed framework on benchmarks including CelebA [Liu et al., 2015b], LFWA [Huang et al., 2007; Liu et al., 2015b] face attribute datasets and the experiment results significantly outperform the state-of-the-art alternatives."
Researcher Affiliation | Collaboration | Keke He¹, Yanwei Fu², Wuhao Zhang⁴, Chengjie Wang³, Yu-Gang Jiang¹, Feiyue Huang³, Xiangyang Xue¹. ¹School of Computer Science, Shanghai Key Lab of Intelligent Information Processing, Fudan University; ²School of Data Science, Fudan University; ³Tencent Youtu Lab; ⁴Shanghai Jiao Tong University.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using open-source tools and code from other works (e.g., "We use the open source deep learning framework Caffe [Jia et al., 2014]", "we use the codes of [Liu et al., 2015a]", "we use the codes of [Wang et al., 2017a]") but provides no statement or link releasing code for its own method.
Open Datasets | Yes | (1) "CelebA contains 202,599 images of approximately 10k identities [Liu et al., 2015b]." (2) "LFWA [Liu et al., 2015b] is constructed based on face recognition dataset LFW [Huang et al., 2007]."
Dataset Splits | Yes | "For a fair comparison with the other methods, we follow the standard split here: the first 162,770 images are used for training, 19,867 images for validation and remaining 19,962 for testing." (An index sketch of this split is given below the table.)
Hardware Specification | Yes | "Our model trained on CelebA dataset gets converged with 46k iterations and it takes 10 hours with one NVIDIA Tesla M40 GPU." and "It takes 37 hours with one NVIDIA Tesla M40 GPU and needs around 13 GB GPU memory."
Software Dependencies | No | The paper mentions using Caffe and the Adam optimizer but does not specify version numbers for these or any other software components.
Experiment Setup | Yes | "The base learning rate is set as 0.001 and gradually decreased by 1/10 at 20k, 45k iterations. The input image is resized to 224x224. For training all the model, the batch size is 20..." and "We use Adam with the learning rate of 0.0002 to optimize our abstraction network. The batch size is 1." (A sketch of this schedule is given below the table.)
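
The quoted CelebA protocol partitions images purely by index. Below is a minimal sketch of that split, assuming CelebA's canonical 000001.jpg to 202599.jpg ordering; the variable names and printing are illustrative, not the authors' code.

```python
# Standard CelebA split quoted in "Dataset Splits": first 162,770 images
# for training, next 19,867 for validation, remaining 19,962 for testing.
# Assumes the canonical index ordering; filenames follow CelebA's pattern.
NUM_IMAGES = 202_599
TRAIN_END = 162_770           # end of the training range
VAL_END = TRAIN_END + 19_867  # end of the validation range

indices = range(1, NUM_IMAGES + 1)
train_ids = list(indices[:TRAIN_END])
val_ids = list(indices[TRAIN_END:VAL_END])
test_ids = list(indices[VAL_END:])

assert len(test_ids) == 19_962  # matches the quoted count
print([f"{i:06d}.jpg" for i in train_ids[:2]])  # ['000001.jpg', '000002.jpg']
```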
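The quoted learning-rate schedule (base rate 0.001, cut by 1/10 at 20k and 45k iterations) is a standard multistep policy, which is what Caffe's "multistep" lr_policy with gamma 0.1 expresses. The sketch below is a hedged Python rendering of that arithmetic; the function name is ours, not from the paper.

```python
# Multistep schedule for the attribute network, per the quoted setup:
# lr = 0.001, then 0.0001 after 20k iterations, then 0.00001 after 45k.
def multistep_lr(iteration, base_lr=1e-3, steps=(20_000, 45_000), gamma=0.1):
    """Return the learning rate in effect at a given training iteration."""
    return base_lr * gamma ** sum(iteration >= s for s in steps)

assert multistep_lr(0) == 1e-3
assert abs(multistep_lr(20_000) - 1e-4) < 1e-12
assert abs(multistep_lr(45_000) - 1e-5) < 1e-12

# The abstraction network is quoted separately: Adam with learning rate
# 2e-4 and batch size 1 (optimizer settings only; nothing else implied).
```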