TPU-GAN: Learning temporal coherence from dynamic point cloud sequences
Authors: Zijie Li, Tianqin Li, Amir Barati Farimani
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on point cloud sequences from two different domains: particles in a fluid dynamical system and scanned human action data. The quantitative and qualitative evaluation demonstrates the effectiveness of our method on the upsampling task as well as on learning temporal coherence from irregular point cloud sequences. |
| Researcher Affiliation | Academia | Zijie Li, Department of Mechanical Engineering, Carnegie Mellon University; Tianqin Li, School of Computer Science, Carnegie Mellon University; Amir Barati Farimani, Department of Mechanical Engineering, Carnegie Mellon University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | The second dataset is from scanned human action: the MSR-Action3D dataset (Li et al., 2010). |
| Dataset Splits | Yes | We use 20 sequences as training data, and test on the remaining 4 sequences. |
| Hardware Specification | Yes | All training and experiments are run on a platform equipped with a single GTX 1080 Ti. |
| Software Dependencies | Yes | We implement our model in PyTorch 1.7.1. |
| Experiment Setup | Yes | We train all our models for 100k gradient updates using the Adam optimizer (Kingma & Ba, 2017), which took approximately 30 hours on the Fluid dataset and 15 hours on the MSR-Action3D dataset. The upsampling ratio is set to r = 16 on the action dataset. In practice we set this threshold value to a small number (e.g., ϵ = 0.011). (A hedged configuration sketch based only on these quoted values follows the table.) |
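
Since no code is released, the following is a minimal, hypothetical PyTorch-style training skeleton that encodes only the hyperparameters quoted in the Experiment Setup row (Adam, 100k gradient updates, r = 16, ϵ = 0.011). The model stubs, learning rate, and data handling are assumptions and placeholders, not the authors' implementation.

```python
import torch

# Values quoted from the paper's experiment setup.
NUM_UPDATES = 100_000   # "100k gradient updates"
UPSAMPLE_RATIO = 16     # r = 16 on the MSR-Action3D dataset
EPSILON = 0.011         # small threshold value (e.g., eps = 0.011)

# Hypothetical stand-ins for the TPU-GAN generator and discriminator;
# the real architectures are described in the paper, not reproduced here.
generator = torch.nn.Linear(3, 3 * UPSAMPLE_RATIO)
discriminator = torch.nn.Linear(3, 1)

# Adam is stated in the paper; the learning rate here is a placeholder.
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

for step in range(NUM_UPDATES):
    # Each iteration would sample a sparse point cloud frame, upsample it
    # with the generator, score real vs. generated dense clouds with the
    # discriminator, and update both optimizers. The actual loss terms and
    # temporal-coherence components are defined in the paper, not here.
    pass
```

This sketch is only meant to make the quoted training budget and upsampling ratio concrete; reproducing the reported results would require the full model and loss definitions from the paper.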