AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation

Authors: Weiting Huang, Pengfei Ren, Jingyu Wang, Qi Qi, Haifeng Sun

AAAI 2020, pp. 11061-11068 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive exploration experiments are conducted to validate the effectiveness and generality of AWR under various experimental settings, especially its usefulness for different types of dense representation and input modality. Our method outperforms other state-of-the-art methods on four publicly available datasets, including NYU, ICVL, MSRA and HANDS 2017 dataset.
Researcher Affiliation | Collaboration | (1) State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, P.R. China; (2) EBUPT Information Technology Co., Ltd., Beijing 100191, P.R. China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/Elody07/AWR-Adaptive-Weighting-Regression.
Open Datasets | Yes | We conduct experiments on four publicly available hand pose datasets: NYU dataset (Tompson et al. 2014), ICVL dataset (Tang et al. 2014a), MSRA dataset (Sun et al. 2015) and HANDS 2017 dataset (Yuan et al. 2017).
Dataset Splits | Yes | NYU dataset contains 72K and 8K frames for training and evaluation respectively.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions "implemented with PyTorch using Adam (Kingma and Ba 2015) optimizer" but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | Our method is implemented with PyTorch using the Adam (Kingma and Ba 2015) optimizer with an initial learning rate of 0.001 and weight decay of 0.0005. The batch size is set to 32.
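To make the reported setup concrete, a minimal PyTorch sketch of a training step with these hyperparameters (Adam, learning rate 0.001, weight decay 0.0005, batch size 32) is shown below. The model, dataset, input resolution, joint count, and loss function are placeholders assumed for illustration; they are not the authors' implementation.

```python
# Sketch of the reported training configuration: Adam, lr=0.001,
# weight_decay=0.0005, batch size 32. All class names below are
# placeholders, not taken from the AWR paper or its repository.
import torch
from torch.utils.data import DataLoader, Dataset


class HandDepthDataset(Dataset):
    """Placeholder dataset yielding (depth crop, 3D joint targets) pairs."""

    def __init__(self, num_samples=1024, num_joints=14):
        self.depth = torch.randn(num_samples, 1, 128, 128)    # cropped depth maps (assumed size)
        self.joints = torch.randn(num_samples, num_joints, 3)  # 3D joint coordinates

    def __len__(self):
        return len(self.depth)

    def __getitem__(self, idx):
        return self.depth[idx], self.joints[idx]


# Stand-in network; the paper uses a dense-representation backbone instead.
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(32, 14 * 3),
)

loader = DataLoader(HandDepthDataset(), batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
criterion = torch.nn.SmoothL1Loss()  # loss choice is an assumption, not stated in this excerpt

for depth, joints in loader:
    optimizer.zero_grad()
    pred = model(depth).view(-1, 14, 3)
    loss = criterion(pred, joints)
    loss.backward()
    optimizer.step()
    break  # single illustrative step
```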