StackFLOW: Monocular Human-Object Reconstruction by Stacked Normalizing Flow with Offset

Authors: Chaofan Huo, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, Jingya Wang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show that our method achieves impressive results on two challenging benchmarks, BEHAVE and InterCap datasets.
Researcher Affiliation | Academia | Chaofan Huo, Ye Shi, Yuexin Ma, Lan Xu, Jingyi Yu, and Jingya Wang; ShanghaiTech University; Shanghai Engineering Research Center of Intelligent Vision and Imaging; {huochf, shiye, mayuexin, xulan1, yujingyi, wangjingya}@shanghaitech.edu.cn
Pseudocode | No | The paper describes the methods in text and uses figures to illustrate the framework, but it does not contain any formal pseudocode or algorithm blocks.
Open Source Code | Yes | Our code has been publicly available at https://github.com/huochf/StackFLOW.
Open Datasets | Yes | Dataset. We conduct experiments on two indoor datasets: BEHAVE [Bhatnagar et al., 2022] and InterCap [Huang et al., 2022b].
Dataset Splits | No | For BEHAVE, the paper states 'We follow the official train/test split to train and test our method'; for InterCap, 'We randomly select 20% sequences for testing and the rest for training.' It mentions 'validation' in connection with the loss function, but it never specifies a validation split or explains how one was created or used for hyperparameter tuning. (A sketch of the sequence-level hold-out appears after this table.)
Hardware Specification | Yes | The time is tested on a single NVIDIA GeForce RTX 2080 Ti GPU.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x).
Experiment Setup | No | The paper describes the overall framework, loss functions, and optimization process, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed system-level training settings in the main text.
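Since the InterCap protocol is described only in prose ("randomly select 20% sequences for testing and the rest for training"), the following minimal Python sketch shows one way to reproduce a sequence-level random hold-out. The function name split_sequences, the fixed seed, and the placeholder sequence names are illustrative assumptions, not taken from the paper or the released StackFLOW repository, which remains the authoritative reference.

    import random

    def split_sequences(sequence_ids, test_ratio=0.2, seed=0):
        # Illustrative sketch: deterministically shuffle the sequence IDs
        # and hold out test_ratio of them for testing (seed is assumed,
        # not specified by the paper).
        rng = random.Random(seed)
        ids = sorted(sequence_ids)  # canonical order before shuffling
        rng.shuffle(ids)
        n_test = max(1, round(test_ratio * len(ids)))
        test_ids = set(ids[:n_test])
        train_ids = [s for s in ids if s not in test_ids]
        return train_ids, sorted(test_ids)

    # Example with hypothetical sequence names.
    seqs = [f"seq_{i:02d}" for i in range(10)]
    train, test = split_sequences(seqs)
    print(f"{len(train)} train / {len(test)} test sequences")

Splitting at the sequence level, rather than per frame, matters here: frames within one capture sequence are near-duplicates, so a frame-level split would leak test-like data into training.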