Distraction-Aware Feature Learning for Human Attribute Recognition via Coarse-to-Fine Attention Mechanism

Authors: Mingda Wu, Di Huang, Yuanfang Guo, Yunhong Wang (pp. 12394-12401)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are conducted on the WIDER-Attribute and RAP databases, and state-of-the-art results are achieved, demonstrating the effectiveness of the proposed approach.
Researcher Affiliation | Academia | (1) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China; (2) IRIP Lab, School of Computer Science and Engineering, Beihang University, Beijing, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a direct link to a code repository.
Open Datasets | Yes | Specifically, we train the segmentation network on the MS-COCO dataset, where FPN (Lin et al. 2017) is employed as the backbone network. (A hedged backbone sketch follows the table.)
Dataset Splits | No | If the validation set is added to the training one as (Zhu et al. 2017a) does, an mAP of 87.2%/88.0% is obtained.
Hardware Specification | Yes | Our model is trained on a single NVIDIA 1080Ti GPU.
Software Dependencies | No | The paper does not provide specific version numbers for ancillary software dependencies such as libraries, frameworks, or solvers.
Experiment Setup | Yes | The stochastic gradient descent algorithm is utilized in the training process, with a batch size of 32, a momentum of 0.9 and a weight decay of 0.0005. The initial learning rate is set to 0.003, and gamma is set to 0.1. (A hedged configuration sketch follows the table.)
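
For the Open Datasets row, the paper states only that its segmentation network is trained on MS-COCO with an FPN backbone. The snippet below is a minimal, hedged sketch of one way to obtain a COCO-pretrained model with a ResNet-50-FPN backbone via torchvision; the ResNet-50 depth, the Mask R-CNN head, and the use of its masks as region cues are assumptions, not the authors' setup.

```python
# Hedged sketch for the "Open Datasets" row: obtain a COCO-pretrained model
# with a ResNet-50-FPN backbone from torchvision (>= 0.13). The paper only
# names FPN as the backbone and MS-COCO as the training set; everything else
# here (ResNet-50, Mask R-CNN head, mask usage) is an illustrative assumption.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained, FPN backbone
model.eval()

# Dummy forward pass: inputs are a list of 3xHxW tensors with values in [0, 1].
image = [torch.rand(3, 224, 224)]
with torch.no_grad():
    prediction = model(image)[0]
print(prediction["masks"].shape)  # (num_detected_instances, 1, 224, 224)
```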
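
For the Experiment Setup row, the reported hyperparameters can be expressed as a minimal PyTorch configuration sketch. The attribute classifier, the loss function, the learning-rate step size, and the epoch count are not given in the excerpt, so the placeholders below are assumptions.

```python
# Hedged sketch of the reported training configuration: SGD with batch size 32,
# momentum 0.9, weight decay 0.0005, initial learning rate 0.003, and an lr
# decay factor (gamma) of 0.1. The model, loss, decay step size, and epoch
# count are NOT specified in the excerpt; the values used below are placeholders.
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(2048, 14)  # placeholder attribute classifier
dataset = TensorDataset(torch.randn(256, 2048),
                        torch.randint(0, 2, (256, 14)).float())  # dummy data
loader = DataLoader(dataset, batch_size=32, shuffle=True)        # batch size 32

optimizer = SGD(model.parameters(), lr=0.003, momentum=0.9, weight_decay=0.0005)
scheduler = StepLR(optimizer, step_size=10, gamma=0.1)  # step_size is a placeholder
criterion = nn.BCEWithLogitsLoss()                      # multi-label loss (assumed)

for epoch in range(30):                                 # epoch count is a placeholder
    for features, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```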