Class-agnostic Reconstruction of Dynamic Objects from Videos

Authors: Zhongzheng Ren, Xiaoming Zhao, Alex Schwing

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We study the efficacy of REDO in extensive experiments on synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, and on real-world video data 3DPW. We find REDO outperforms state-of-the-art dynamic reconstruction methods by a margin. In ablation studies we validate each developed component." |
| Researcher Affiliation | Academia | "Zhongzheng Ren, Xiaoming Zhao, Alexander G. Schwing — University of Illinois at Urbana-Champaign" |
| Pseudocode | No | The paper describes the steps of its method in prose and diagrams, but it includes no clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://jason718.github.io/redo |
| Open Datasets | Yes | "We study the efficacy of REDO in extensive experiments on synthetic RGBD video datasets SAIL-VOS 3D and DeformingThings4D++, and on real-world video data 3DPW." |
| Dataset Splits | Yes | "For evaluation, we sample 291 clips from 78 validation videos. We further hold out 2 classes (dog and gorilla) as an unseen test set. ... For evaluation, we create a validation set of 152 clips and a test set of 347 clips. ... The dataset contains 60 videos (24 training, 12 validation, and 24 testing)." |
| Hardware Specification | No | The paper does not report the hardware used for its experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer and a neural-ODE solver, but it gives no version numbers for these or any other dependencies. |
| Experiment Setup | Yes | "In each training iteration we sample 2048 query points for shape reconstruction and 512 vertices for learning of temporal coherence. We train REDO end-to-end using the Adam optimizer [39] for 60 epochs with a batch size of 8. The learning rate is initialized to 0.0001 and decayed by 10 at the 40th and 55th epochs." |
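The reported schedule (base learning rate 0.0001, decayed by a factor of 10 at epochs 40 and 55, over 60 epochs) can be sketched as a simple step-decay function. This is an illustrative reconstruction of the stated hyperparameters, not the authors' code; the function name and signature are assumptions.

```python
def learning_rate(epoch, base_lr=1e-4, milestones=(40, 55), gamma=0.1):
    """Step-decay schedule matching the paper's reported setup:
    lr starts at 1e-4 and is multiplied by 0.1 at each milestone epoch."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Full 60-epoch schedule as reported (epochs 0-39: 1e-4, 40-54: 1e-5, 55-59: 1e-6).
schedule = [learning_rate(e) for e in range(60)]
```

In a typical PyTorch setup, the same behavior would come from `torch.optim.Adam` with `lr=1e-4` wrapped in a `MultiStepLR` scheduler with `milestones=[40, 55]` and `gamma=0.1`.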