Triangulation Residual Loss for Data-efficient 3D Pose Estimation
Authors: Jiachen Zhao, Tao Yu, Liang An, Yipeng Huang, Fang Deng, Qionghai Dai
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper presents the Triangulation Residual loss (TR loss) for multiview 3D pose estimation in a data-efficient manner. Existing 3D supervised models usually require large-scale 3D annotated datasets, but the amount of existing data is still insufficient to train supervised models to ideal performance, especially for animal pose estimation. Experiments on animals such as mice demonstrate our TR loss's data-efficient training ability. (A hedged sketch of a triangulation-residual computation follows this table.) |
| Researcher Affiliation | Academia | Jiachen Zhao, Tsinghua University, Beijing, China 100084, zhao_jiachen@163.com; Tao Yu, Tsinghua University, Beijing, China 100084, ytrock@126.com; Liang An, Tsinghua University, Beijing, China 100084, al17@mails.tsinghua.edu.cn; Yipeng Huang, Beijing Institute of Technology, Beijing, China 100081, huangyipeng@bit.edu.cn; Fang Deng, Beijing Institute of Technology, Beijing, China 100081, dengfang@bit.edu.com; Qionghai Dai, Tsinghua University, Beijing, China 100084, qhdai@mails.tsinghua.edu.cn |
| Pseudocode | No | The paper describes the model and mathematical formulations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code has been made available at https://github.com/zhaojiachen1994/Triangulation-Residual-Loss. |
| Open Datasets | Yes | We conduct human pose estimation experiments on the Human3.6M dataset. The Human3.6M dataset [3] contains 3.6 million frames of single-person activity captured by 4 synchronized cameras. We conduct experiments on three datasets to evaluate our method: CalMS21, Dannce, and THmouse (THM, a dataset we collected). CalMS21 [34] is a large-scale 2D mouse dataset... Dannce dataset [35] consists of 1032 labeled frames... |
| Dataset Splits | No | The paper specifies training and test splits, but does not explicitly mention a validation split or how it's used. |
| Hardware Specification | Yes | The models are trained on 1 NVIDIA 3090 GPU with the Adam optimizer and an initial learning rate of 1e-5. |
| Software Dependencies | No | The paper mentions implementing the model based on the 'mmpose library [29]' but does not provide specific version numbers for mmpose or any other software dependencies. |
| Experiment Setup | Yes | The models are trained on 1 NVIDIA 3090 GPU with the Adam optimizer and an initial learning rate of 1e-5. The loss tradeoff parameters are set to 1, so all losses have the same effect. (A minimal sketch of this optimization setup also follows the table.) |
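
The core idea described above, penalizing multiview 2D predictions whose back-projected rays fail to triangulate to a single 3D point, can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical reconstruction: the function name, tensor shapes, and the choice of the smallest singular value of the DLT system as the residual are assumptions, not the authors' released implementation (see the GitHub repository linked above for the actual code).

```python
import torch

def triangulation_residual_loss(keypoints_2d, proj_mats):
    """Hypothetical sketch of a triangulation-residual-style loss.

    keypoints_2d: (V, J, 2) predicted 2D joints in V camera views
    proj_mats:    (V, 3, 4) camera projection matrices
    Returns a scalar: the mean smallest singular value of the per-joint
    DLT system, which is zero only when all views are geometrically
    consistent (the rays intersect at one 3D point).
    """
    V, J, _ = keypoints_2d.shape
    loss = keypoints_2d.new_zeros(())
    for j in range(J):
        rows = []
        for v in range(V):
            u, w = keypoints_2d[v, j]          # 2D coordinates (u, v) in view v
            P = proj_mats[v]                   # 3x4 projection matrix of view v
            rows.append(u * P[2] - P[0])       # DLT constraint from the u coordinate
            rows.append(w * P[2] - P[1])       # DLT constraint from the v coordinate
        A = torch.stack(rows, dim=0)           # (2V, 4) DLT matrix for joint j
        # The smallest singular value measures how far the system is from
        # having an exact solution, i.e., the triangulation residual.
        sigma_min = torch.linalg.svdvals(A)[-1]
        loss = loss + sigma_min
    return loss / J
```

Because the residual depends only on camera geometry and 2D predictions, such a term could in principle be applied to unlabeled multiview frames, which is consistent with the paper's data-efficiency framing, though the exact training protocol is not detailed in the table above.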
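The reported optimization settings (one NVIDIA 3090 GPU, Adam, initial learning rate 1e-5, unit loss trade-off weights) translate into a straightforward setup. The snippet below is only a sketch under those stated settings; the backbone and loss names are placeholders, not the authors' mmpose-based configuration.

```python
import torch
import torch.nn as nn

# Stand-in 2D pose backbone; the actual model is built on the mmpose library.
backbone = nn.Conv2d(3, 17, kernel_size=1)

# Adam optimizer with the reported initial learning rate of 1e-5.
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-5)

# Loss trade-off weights are all set to 1, so the terms contribute equally:
# total_loss = supervised_2d_loss + 1.0 * tr_loss
```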