Learning Flexibly Distributional Representation for Low-quality 3D Face Recognition
Authors: Zihui Zhang, Cuican Yu, Shuang Xu, Huibin Li (pp. 3465-3473)
AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments show that our proposed method improves both low-quality and cross-quality 3D FR performance on low-quality benchmarks. Furthermore, the improvements are more remarkable on low-quality 3D faces as the intensity of noise increases, which indicates the robustness of the method. |
| Researcher Affiliation | Academia | Zihui Zhang, Cuican Yu, Shuang Xu, Huibin Li — Xi'an Jiaotong University. {zhangzihui247, ccy2017, shuangxu}@stu.xjtu.edu.cn, huibinli@xjtu.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the methodology is openly available. |
| Open Datasets | Yes | To show the effectiveness of the proposed method, two low-quality databases, i.e., Lock3DFace (Zhang et al. 2016) and IIIT-D (Goswami et al. 2013), and four high-quality databases, namely FRGC v2 (Phillips et al. 2005), Bosphorus (Savran et al. 2008), BU3D-FE (Yin et al. 2006) and BU4D-FE (Yin et al. 2008), are used in our experiments. |
| Dataset Splits | Yes | The training set contains totally 39,702 (3054 * 12+3054) frames. The other videos of four types (FE, OC, PS and TM) are used as the test subsets. ...The second protocol divides training and test set according to subjects. We respectively select all the frames of the first 100, 200, 300, and 400 subjects as the training set and the remaining 409, 309, 209, and 109 subjects are used for test. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Adam' as an optimizer but does not specify any software names with version numbers or specific libraries. |
| Experiment Setup | Yes | Our deep model is optimized by Adam with a batch size of 300 for 100 epochs. The learning rate is initially set to 1e-4 and reduced by a factor of 10 per 3000 iterations. We extract a 10-dimensional vector c from the last feature map of the Led3D network via a fully connected layer as the condition vector of the CNF. The CNF module consists of 3 fully connected layers with softplus activations. λ in Eq. (12) is chosen as 5e-3. |
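Since the paper releases no code, the reported setup can only be sketched. The snippet below is a minimal, hedged reconstruction in plain numpy: the hyperparameters dictionary carries only values quoted in the table (Adam, batch size 300, 100 epochs, lr 1e-4 decayed by 10x per 3000 iterations, 10-d condition vector, λ = 5e-3), while the layer widths of the 3-layer softplus MLP and the input/output dimensions are assumptions for illustration, not the authors' architecture.

```python
import numpy as np

# Values taken verbatim from the paper's setup; everything else below
# (layer widths, feature dimension) is an assumption for illustration.
HYPERPARAMS = {
    "optimizer": "Adam",
    "batch_size": 300,
    "epochs": 100,
    "lr_init": 1e-4,
    "lr_decay_factor": 0.1,
    "lr_decay_every_iters": 3000,
    "lambda_eq12": 5e-3,
    "cond_dim": 10,  # 10-d condition vector c from the Led3D feature map
}

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x))
    return np.logaddexp(0.0, x)

def make_cnf_mlp(in_dim, hidden_dim, out_dim, rng):
    """3 fully connected layers with softplus activations (widths assumed)."""
    dims = [in_dim, hidden_dim, hidden_dim, out_dim]
    return [
        (rng.standard_normal((d_in, d_out)) * 0.01, np.zeros(d_out))
        for d_in, d_out in zip(dims[:-1], dims[1:])
    ]

def cnf_forward(layers, z, c):
    """Run the MLP on [z, c]; c plays the role of the 10-d condition vector."""
    h = np.concatenate([z, c], axis=-1)
    for i, (W, b) in enumerate(layers):
        h = h @ W + b
        if i < len(layers) - 1:  # softplus between layers, linear output
            h = softplus(h)
    return h

def lr_at_iteration(it):
    """Initial lr 1e-4, reduced by a factor of 10 per 3000 iterations."""
    steps = it // HYPERPARAMS["lr_decay_every_iters"]
    return HYPERPARAMS["lr_init"] * HYPERPARAMS["lr_decay_factor"] ** steps
```

As a quick sanity check of the schedule: `lr_at_iteration(0)` is 1e-4, `lr_at_iteration(3000)` is 1e-5, and `lr_at_iteration(6000)` is 1e-6.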