D-DPCC: Deep Dynamic Point Cloud Compression via 3D Motion Prediction
Authors: Tingyu Fan, Linyao Gao, Yiling Xu, Zhu Li, Dong Wang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental result shows that the proposed D-DPCC framework achieves an average 76% BD-Rate (Bjontegaard Delta Rate) gains against state-of-the-art Video-based Point Cloud Compression (V-PCC) v13 in inter mode. (A sketch of the BD-Rate computation appears below the table.) |
| Researcher Affiliation | Collaboration | Tingyu Fan¹, Linyao Gao¹, Yiling Xu¹, Zhu Li², and Dong Wang³. ¹Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; ²University of Missouri, Kansas City; ³Guangdong OPPO Mobile Telecommunications Corp., Ltd. {woshiyizhishapaozi, linyaog, yl.xu}@sjtu.edu.cn, zhu.li@ieee.org, wangdong7@oppo.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We train the proposed model using Owlii Dynamic Human DPC dataset [Keming et al., 2018], containing 4 sequences with 2400 frames... Following the MPEG common test condition (CTC), we evaluate the performance of the proposed D-DPCC framework using 8i Voxelized Full Bodies (8i VFB) [d'Eon et al., 2017], containing 4 sequences with 1200 frames. |
| Dataset Splits | No | The paper mentions training on the Owlii dataset and evaluating on the 8i VFB dataset, but it does not specify a validation dataset or split for hyperparameter tuning. |
| Hardware Specification | Yes | We conduct all the experiments on a GeForce RTX 3090 GPU with 24GB memory. |
| Software Dependencies | No | The paper mentions using an "Adam [Kingma and Ba, 2015] optimizer" but does not specify any software libraries with version numbers. |
| Experiment Setup | Yes | We train D-DPCC with λ = 3, 4, 5, 7, 10 for each rate point. We utilize an Adam [Kingma and Ba, 2015] optimizer with β = (0.9, 0.999), together with a learning rate scheduler with a decay rate of 0.7 for every 15 epochs. A two-stage training strategy is applied for each rate point. Specifically, for the first five epochs, λ is set as 20 to accelerate the convergence of the point cloud reconstruction module; then, the model is trained for another 45 epochs with λ set to its original value. The batch size is 4 during training. (See the training-loop sketch below the table.) |
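
For context on the headline result: BD-Rate (Bjontegaard Delta Rate) summarizes the average bitrate difference between two codecs at equal distortion, so "76% BD-Rate gains" means roughly 76% of the bitrate is saved relative to the V-PCC anchor. Below is a minimal sketch of the standard Bjontegaard calculation (cubic fit of log-rate against distortion, integrated over the overlapping distortion range). It is not code from the paper; the function name and the choice of distortion metric (e.g., D1 PSNR for point clouds) are illustrative assumptions.

```python
import numpy as np

def bd_rate(rate_anchor, dist_anchor, rate_test, dist_test):
    """Bjontegaard Delta Rate: average % bitrate change of the test codec
    vs. the anchor at equal distortion (negative means bitrate savings)."""
    # Work in the log-rate domain, as in Bjontegaard's original method.
    lr_anchor = np.log(np.asarray(rate_anchor, dtype=float))
    lr_test = np.log(np.asarray(rate_test, dtype=float))
    d_anchor = np.asarray(dist_anchor, dtype=float)
    d_test = np.asarray(dist_test, dtype=float)

    # Fit cubic polynomials: log(rate) as a function of distortion.
    p_anchor = np.polyfit(d_anchor, lr_anchor, 3)
    p_test = np.polyfit(d_test, lr_test, 3)

    # Integrate both fits over the overlapping distortion interval.
    lo = max(d_anchor.min(), d_test.min())
    hi = min(d_anchor.max(), d_test.max())
    int_anchor = np.polyval(np.polyint(p_anchor), hi) - np.polyval(np.polyint(p_anchor), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)

    # Average log-rate difference, converted to a percentage rate change.
    avg_diff = (int_test - int_anchor) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```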
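
The quoted experiment setup maps directly onto a standard training loop. The sketch below, assuming PyTorch, encodes only the quoted hyperparameters: Adam with β = (0.9, 0.999), learning-rate decay of 0.7 every 15 epochs, 5 warm-up epochs at λ = 20 followed by 45 epochs at the target λ, and batch size 4. The model/loader interfaces, the R + λ·D loss form, and the initial learning rate (PyTorch's Adam default, since none is quoted) are assumptions, not details from the paper.

```python
import torch

def train_rate_point(model, train_loader, target_lambda, epochs=50):
    """Training-loop sketch for one rate point (target_lambda in {3, 4, 5, 7, 10}).

    Assumes `model(batch)` returns a (rate, distortion) pair and that the
    objective has the common R + lambda*D form; neither is confirmed by
    the paper excerpt.
    """
    # Adam with beta = (0.9, 0.999); learning rate decays by 0.7 every 15 epochs.
    optimizer = torch.optim.Adam(model.parameters(), betas=(0.9, 0.999))
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.7)

    for epoch in range(epochs):
        # Two-stage strategy: lambda = 20 for the first five epochs to speed up
        # convergence of the reconstruction module, then the target value.
        lam = 20.0 if epoch < 5 else float(target_lambda)
        for batch in train_loader:  # batch size 4 in the paper
            rate, distortion = model(batch)
            loss = rate + lam * distortion
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```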