A New Dataset and Boundary-Attention Semantic Segmentation for Face Parsing
Authors: Yinglu Liu, Hailin Shi, Hao Shen, Yue Si, Xiaobo Wang, Tao Mei
AAAI 2020, pp. 11637-11644
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on our LaPa benchmark and the public Helen dataset show the superiority of our proposed method. |
| Researcher Affiliation | Industry | Yinglu Liu, Hailin Shi, Hao Shen, Yue Si, Xiaobo Wang, Tao Mei JD AI, Beijing, China {liuyinglu1, shihailin, shenhao, siyue, wangxiaobo8, tmei}@jd.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link for the dataset but not for the open-source code of the methodology described in the paper. Footnote 1 links to "https://github.com/lucia123/lapa-dataset", which is explicitly for the dataset. |
| Open Datasets | Yes | It consists of more than 22,000 facial images... The dataset is publicly accessible to the community for boosting the advance of face parsing.1 ... 1https://github.com/lucia123/lapa-dataset |
| Dataset Splits | Yes | We randomly select 2,000 images for validation and 2,000 images for test, and the remaining 18,176 images are taken as the training set. |
| Hardware Specification | Yes | Our experiments are implemented by Pytorch framework, and all the models are trained on 4 NVIDIA Tesla P40 with 24GB memory. |
| Software Dependencies | No | The paper mentions "Pytorch framework" but does not specify any version numbers for Pytorch or other ancillary software dependencies. |
| Experiment Setup | Yes | For both LaPa and Helen datasets, we adopt similar network configurations. ... The input size of the network is 473 × 473, and dilation convolution is used in the last residual block... We use mini-batch gradient descent as the optimizer with the momentum of 0.9, weight decay of 0.0005 and batch size of 64. Poly learning rate policy is used to update parameters and the initial learning rate is set to 0.001. Synchronized Batch Normalization is adopted... For simplicity, the λ1, λ2 and λ3 are all set to 1. Parameter α is determined on the validation set. |
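The "poly" learning rate policy cited in the experiment setup is commonly implemented as `lr = base_lr * (1 - iter/max_iter)^power`. A minimal sketch follows, using the paper's stated initial learning rate of 0.001; the exponent `power=0.9` and the iteration budget are assumptions, as the paper does not report them.

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float = 0.9) -> float:
    """Poly schedule: decay base_lr by (1 - cur_iter/max_iter) ** power.

    base_lr=0.001 matches the paper; power=0.9 is a common default,
    not a value stated by the authors.
    """
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# At the first iteration the rate equals the initial learning rate,
# then decays smoothly toward zero over training.
lr_start = poly_lr(0.001, 0, 10000)
lr_mid = poly_lr(0.001, 5000, 10000)
```

In practice this function would be called once per iteration (or per epoch) to set the optimizer's learning rate before each SGD step.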