Fast Inter-frame Motion Prediction for Compressed Dynamic Point Cloud Attribute Enhancement

Authors: Wang Liu, Wei Gao, Xingming Mu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the proposed method can greatly improve the quality of compressed dynamic point clouds and provide a fast and efficient motion prediction plugin for large-scale point clouds. For dynamic point cloud attributes with severe compression artifacts, our proposed DAEMP method achieves up to 0.52 dB (PSNR) performance gain. Moreover, the proposed IFMP module has a certain real-time processing capability for calculating the motion offset between dynamic point cloud frames.
Researcher Affiliation | Academia | Wang Liu (1), Wei Gao (1,2)*, Xingming Mu (1); (1) School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University; (2) Peng Cheng Laboratory
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | Following the previous work on deep dynamic point cloud compression (D-DPCC), we choose 8i Voxelized Full Bodies (8iVFB) (d'Eon et al. 2017) for model training and evaluation.
Dataset Splits | No | The paper specifies training and testing splits ('The first and third sequences are selected for model training, while the others for testing. All 600 frames of training data and 600 frames of test data are compressed in the G-PCC reference software TMC13v22.'), but does not explicitly mention a separate validation set or its split details. A hedged reading of this split is sketched after the table.
Hardware Specification | Yes | The batch size is set to 16 and the model is trained on an NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions the PyTorch platform with Minkowski Engine and the Adam optimizer, but does not specify exact version numbers for these software dependencies.
Experiment Setup | Yes | The learning rate is initially set to 5×10^-4 and linearly decays to 2×10^-4 after 200 epochs; it then linearly reduces to 1×10^-4 over the next 200 epochs. Data augmentation is not involved in our experiments. The batch size is set to 16 and the model is trained on an NVIDIA Tesla V100 GPU. A code sketch of this schedule follows the table.
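
The split quoted in the Dataset Splits row leaves the sequence identities implicit. The sketch below shows one plausible reading; the alphabetical ordering of the four 8iVFB sequences is an assumption, since the paper only says "first and third". Under that assumption the frame counts (2 sequences × 300 frames per split) match the quoted 600/600 figures.

```python
# Hedged sketch of the quoted 8iVFB train/test split, assuming the
# canonical alphabetical ordering of the four sequences (300 frames each).
SEQUENCES = ["longdress", "loot", "redandblack", "soldier"]

train_seqs = [SEQUENCES[0], SEQUENCES[2]]                  # "first and third"
test_seqs = [s for s in SEQUENCES if s not in train_seqs]  # the remaining two

# 2 sequences x 300 frames = 600 frames per split, matching the quote.
```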
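The Experiment Setup row is concrete enough to sketch in code. Below is a minimal PyTorch sketch under two assumptions: the decay between the quoted milestones is piecewise linear (one plausible reading of "linearly decays"), and `model` is a stand-in for the DAEMP network, whose code is not released.

```python
import torch

# Placeholder module; the actual DAEMP network is not publicly available.
model = torch.nn.Linear(3, 3)

BASE_LR = 5e-4
optimizer = torch.optim.Adam(model.parameters(), lr=BASE_LR)

def lr_multiplier(epoch: int) -> float:
    """Piecewise-linear decay: 5e-4 -> 2e-4 over epochs 0..200,
    then 2e-4 -> 1e-4 over epochs 200..400, constant afterwards.
    Returned as a multiplier of BASE_LR, as LambdaLR expects."""
    if epoch < 200:
        lr = 5e-4 + (epoch / 200) * (2e-4 - 5e-4)
    elif epoch < 400:
        lr = 2e-4 + ((epoch - 200) / 200) * (1e-4 - 2e-4)
    else:
        lr = 1e-4
    return lr / BASE_LR

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_multiplier)

for epoch in range(400):
    # ... one training pass over the 600-frame training set at batch size 16 ...
    scheduler.step()
```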