Grapy-ML: Graph Pyramid Mutual Learning for Cross-Dataset Human Parsing
Authors: Haoyu He, Jing Zhang, Qiming Zhang, Dacheng Tao | pp. 10949–10956
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The source code is publicly available at https://github.com/Charleshhy/Grapy-ML. In this section, we evaluated Grapy-ML on three datasets, including the PASCAL-Person-Part dataset (Chen et al. 2014), CIHP (Gong et al. 2018), and ATR (Liang et al. 2015a), from three aspects: quantitative comparison, visual inspection, and ablation study. |
| Researcher Affiliation | Academia | Haoyu He, Jing Zhang, Qiming Zhang, Dacheng Tao UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia {hahe7688, qzha250}@uni.sydney.edu.au, {jing.zhang1, dacheng.tao}@sydney.edu.au |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/Charleshhy/Grapy-ML. |
| Open Datasets | Yes | The PASCAL-Person-Part dataset (Chen et al. 2014) contains 3,535 annotated images, split into 1,717 for training and 1,818 for testing, with only 7 labeled categories: 6 human body parts and the background. The ATR dataset (Liang et al. 2015a) provides 7,700 images with 18 categories, and (Liang et al. 2017) enriched it with 10,000 more annotated images. The CIHP dataset (Gong et al. 2018) is a popular multi-person dataset with 28,280 images for training, 5,000 for testing, and 5,000 for validation, across 20 categories. |
| Dataset Splits | Yes | CIHP dataset (Gong et al. 2018) is a popular multi-person one that has 28,280 images for training, 5,000 images for testing and 5,000 images for validation with 20 categories. |
| Hardware Specification | Yes | All experiments were conducted using two NVIDIA Tesla V100 GPUs with batch size 10. |
| Software Dependencies | No | The paper mentions software like Deep Lab v3+ and Xception as components but does not provide specific version numbers for these or other software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We adopted DeepLab v3+ (Chen et al. 2018) with Xception (Chollet 2017) as our backbone network, and the output stride was set to 16. All experiments were conducted using two NVIDIA Tesla V100 GPUs with batch size 10. The training images were augmented by a random resize from 0.5 to 2, 512 × 512 cropping, and horizontal flipping. We used a poly learning-rate policy during training (Chen et al. 2018). λ in Eq. (10) was set to 1. Specifically, we first pretrained the backbone network for 100 epochs with an initial learning rate of 0.007 for the GPM model. Note that the pretraining stage was necessary since we needed a good initial prediction to calculate graph node features. Then the whole model was trained for another 200, 100, and 100 epochs on the PASCAL, CIHP, and ATR datasets, respectively, with the learning rate decreased to 0.0007. |
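The experiment setup cites the "poly" learning-rate policy from DeepLab (Chen et al. 2018) without restating its form. As a minimal sketch, the schedule decays the base rate polynomially to zero over training; the exponent `power=0.9` is the value DeepLab commonly uses and is an assumption here, since the paper does not state it.

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float = 0.9) -> float:
    """Poly learning-rate policy: lr = base_lr * (1 - iter/max_iter)^power.

    power=0.9 follows DeepLab's usual setting (assumption; not stated in
    the paper under review).
    """
    return base_lr * (1.0 - cur_iter / max_iter) ** power


# Illustration with the paper's pretraining rate of 0.007:
print(poly_lr(0.007, cur_iter=0, max_iter=100))    # full base rate at start
print(poly_lr(0.007, cur_iter=100, max_iter=100))  # decays to zero at the end
```

The same function would apply unchanged to the 0.0007 rate used in the later fine-tuning stages.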