Towards Fine-Grained HBOE with Rendered Orientation Set and Laplace Smoothing

Authors: Ruisi Zhao, Mingming Li, Zheng Yang, Binbin Lin, Xiaohui Zhong, Xiaobo Ren, Deng Cai, Boxi Wu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of our method in the benchmarks with extensive experiments and show that our method outperforms state-of-the-art.
Researcher Affiliation | Collaboration | 1) State Key Lab of CAD&CG, Zhejiang University; 2) FABU Inc; 3) School of Software Technology, Zhejiang University; 4) Ningbo Zhoushan Port Group Co., Ltd., Ningbo, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Project is available at: https://github.com/Whalesong-zrs/Towards-Fine-grained-HBOE
Open Datasets | Yes | As the largest and most valuable real-scene dataset, the MEBOW dataset contains around 130K training samples and has rich background environments. It will be used for both training and testing. Additionally, we will incorporate the RMOS dataset as supplementary training data and evaluate its value on the MEBOW test set. The data in the TUD dataset has clear and complete human body shapes and provides continuous labels.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, and testing.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | Yes | For the experiments in Tab. 1, we implement these backbones based on mmpose (Contributors 2020).
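Since the response points to mmpose for the backbone implementations, the snippet below is a minimal sketch of how a backbone can be instantiated through mmpose's config/registry interface. The ResNet-50 choice and config values are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: building a backbone via mmpose's registry (assumes mmpose 0.x).
# The ResNet-50 backbone here is an illustrative assumption.
import torch
from mmpose.models import build_backbone

# Backbones are declared as config dicts and built through the registry.
backbone_cfg = dict(type='ResNet', depth=50)
backbone = build_backbone(backbone_cfg)
backbone.init_weights()

# A 256x192 person crop, matching the input size reported in the experiment setup.
dummy = torch.randn(1, 3, 256, 192)
features = backbone(dummy)
shapes = [f.shape for f in features] if isinstance(features, (list, tuple)) else features.shape
print(shapes)
```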
Experiment Setup | Yes | Input instances are cropped and resized to 256×192 while applying data augmentation techniques including flipping and scaling. For OEFormer training, we use 80 epochs with a batch size of 256 and the AdamW optimizer with initial learning rate 1×10^-5. We set β to 0.2 and σ to 2.0 for the loss function.
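The reported setup maps onto a standard PyTorch training configuration. The sketch below mirrors the stated hyperparameters (256×192 crops, AdamW at 1×10^-5, 80 epochs, batch size 256) and shows one plausible reading of a Laplace-style smoothed orientation target with σ = 2.0; the 72-bin layout, the KL-divergence loss, the toy model, and the exact role of β = 0.2 are assumptions for illustration, not the paper's definitive implementation.

```python
# Hedged sketch of the reported training setup; the bin count, loss wiring,
# and tiny stand-in model are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BINS = 72   # assumed 5-degree orientation bins (MEBOW-style labels)
SIGMA = 2.0     # sigma reported for the loss

def laplace_smoothed_target(gt_bins: torch.Tensor,
                            num_bins: int = NUM_BINS,
                            sigma: float = SIGMA) -> torch.Tensor:
    """Spread one-hot orientation labels into Laplace-shaped distributions,
    respecting the circular (wrap-around) bin axis."""
    bins = torch.arange(num_bins, dtype=torch.float32)          # (num_bins,)
    diff = (bins[None, :] - gt_bins[:, None].float()).abs()     # (B, num_bins)
    dist = torch.minimum(diff, num_bins - diff)                 # circular distance
    target = torch.exp(-dist / sigma)
    return target / target.sum(dim=1, keepdim=True)

# Toy stand-in for the orientation model: 256x192 crops in, NUM_BINS logits out.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 256 * 192, NUM_BINS))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # reported initial LR

# One illustrative step (the paper trains for 80 epochs at batch size 256;
# flip/scale augmentation on the crops is omitted here).
images = torch.randn(4, 3, 256, 192)
gt_bins = torch.randint(0, NUM_BINS, (4,))
targets = laplace_smoothed_target(gt_bins)
loss = F.kl_div(F.log_softmax(model(images), dim=1), targets, reduction='batchmean')
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(float(loss))
```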