MSML: Enhancing Occlusion-Robustness by Multi-Scale Segmentation-Based Mask Learning for Face Recognition
Authors: Ge Yuan, Huicheng Zheng, Jiayu Dong | pp. 3197-3205
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on face verification and recognition under synthetic or realistic occlusions demonstrate the effectiveness of our method compared to state-of-the-art methods. |
| Researcher Affiliation | Academia | Ge Yuan, Huicheng Zheng*, Jiayu Dong School of Computer Science and Engineering, Sun Yat-sen University Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China Guangdong Key Laboratory of Information Security Technology zhenghch@mail.sysu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at: https://github.com/ygtxr1997/MSML. |
| Open Datasets | Yes | L29 is pretrained on MS-Celeb-1M (Guo et al. 2016) and trained on CASIA-WebFace (Yi et al. 2014), while A18 is trained on MS1MV2 (Deng et al. 2019) from scratch. |
| Dataset Splits | No | The paper does not explicitly describe a validation dataset split (e.g., percentage or count for validation samples) used for hyperparameter tuning or model selection during training. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU model, CPU type, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like "stochastic gradient descent (SGD)" but does not provide specific version numbers for any libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages. |
| Experiment Setup | Yes | We employ stochastic gradient descent (SGD) as the optimizer. The weight decay is set to 10^-5 and the momentum is set to 0.9. For MSML(L29), the initial learning rates of FRB and OSB are 0.001 and 0.01 respectively and are divided by 3 every 15 epochs. For MSML(A18), the initial learning rates of FRB and OSB are 0.1 and 0.01 respectively and are divided by 10 at epochs 11, 16, and 21. We set the batch size to 64 for MSML(L29) and 512 for MSML(A18). |
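The two learning-rate schedules quoted in the Experiment Setup row can be expressed as simple step functions. The sketch below is an illustration of that description, not code from the authors' repository; it assumes 0-indexed epochs and assumes "divided by 10 at epochs 11, 16, 21" means one division per milestone reached (the paper excerpt does not state either convention).

```python
def lr_msml_l29(initial_lr: float, epoch: int) -> float:
    """MSML(L29) schedule: divide the initial rate by 3 every 15 epochs.

    Applies to both FRB (initial 0.001) and OSB (initial 0.01).
    """
    return initial_lr / (3 ** (epoch // 15))


def lr_msml_a18(initial_lr: float, epoch: int,
                milestones: tuple = (11, 16, 21)) -> float:
    """MSML(A18) schedule: divide the initial rate by 10 at each milestone.

    Applies to both FRB (initial 0.1) and OSB (initial 0.01).
    """
    drops = sum(1 for m in milestones if epoch >= m)
    return initial_lr / (10 ** drops)
```

For example, under these assumptions the FRB rate for MSML(A18) would be 0.1 for epochs 0-10, 0.01 for epochs 11-15, 0.001 for epochs 16-20, and 0.0001 from epoch 21 onward.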