Encoding Auxiliary Information to Restore Compressed Point Cloud Geometry

Authors: Gexin Liu, Jiahao Zhu, Dandan Ding, Zhan Ma

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that AuxGR notably outperforms existing methods in both static and dynamic coding scenarios. Moreover, our framework enables the flexible incorporation of auxiliary information under computation constraints, which is attractive to real applications.
Researcher Affiliation | Academia | Gexin Liu¹, Jiahao Zhu¹, Dandan Ding¹ and Zhan Ma². ¹School of Information Science and Technology, Hangzhou Normal University, China. ²School of Electronic Science and Engineering, Nanjing University, China. {liugexin, zhujiahao23}@stu.hznu.edu.cn, dandanding@hznu.edu.cn, mazhan@nju.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Methods are described in text and through figures.
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing their source code, nor does it provide a direct link to a repository containing the code for their method. Footnotes link to other projects, not the authors' own implementation.
Open Datasets | Yes | Training Dataset. For static coding, we built different training datasets for solid, dense, and sparse point clouds using the ShapeNet dataset [Chang et al., 2015]. ... For dynamic coding, we followed the Common Test Conditions (CTC) [MPEG, 2022] recommended by the international standardization committee MPEG AI-PCC group to create our training dataset. Specifically, the 8i Voxelized Full Bodies (8iVFB) [d'Eon et al., 2017] dataset with 10-bit geometry precision is used. (A sketch of 10-bit voxelization follows the table.)
Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits (e.g., exact percentages or sample counts). It discusses training and testing datasets separately without specifying a validation split.
Hardware Specification | Yes | Our project is implemented using PyTorch and the Minkowski Engine [Choy et al., 2019] on a computer with an Intel i7-8700K CPU, 32 GB memory, and an NVIDIA RTX 4090 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' and 'Minkowski Engine' as software used for implementation, but it does not provide specific version numbers for these or any other ancillary software components. (A sketch for recording installed versions follows the table.)
Experiment Setup | Yes | Adam is used for network model optimization and parameters β1 and β2 are set to 0.9 and 0.999, respectively. The learning rate decays from 8e-4 to 1e-4 every 4000 steps. Our static and dynamic models are trained for 64,000 and 24,000 steps, respectively. (A PyTorch sketch of this schedule follows the table.)
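
The "10-bit geometry precision" noted in the Open Datasets row means point coordinates are quantized to integers in [0, 1023]. A minimal Python sketch of such voxelization, assuming a NumPy pipeline; this is illustrative, not the authors' preprocessing code:

```python
import numpy as np

def voxelize_10bit(points: np.ndarray) -> np.ndarray:
    """Quantize raw (x, y, z) coordinates to 10-bit integer voxels in [0, 1023].

    `points` is an (N, 3) float array; duplicate voxels are removed,
    matching the occupancy representation voxelized datasets such as 8iVFB use.
    """
    mins = points.min(axis=0)
    scale = (2**10 - 1) / (points.max(axis=0) - mins).max()  # uniform scale avoids distortion
    voxels = np.floor((points - mins) * scale).astype(np.int32)
    return np.unique(voxels, axis=0)  # one occupied voxel per coordinate

# Usage: 100k random points map into the 10-bit cube.
pts = np.random.rand(100_000, 3) * 5.0
vox = voxelize_10bit(pts)
assert vox.min() >= 0 and vox.max() <= 1023
```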
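Because the Software Dependencies row finds no pinned versions, anyone reproducing the work must record their own environment. A small sketch that logs the versions actually installed, assuming both packages import cleanly; the output file name is arbitrary:

```python
import json
import platform

import torch
import MinkowskiEngine as ME  # sparse-convolution library cited as [Choy et al., 2019]

# Capture the versions this run executed with, since the paper omits them.
env = {
    "python": platform.python_version(),
    "torch": torch.__version__,
    "cuda": torch.version.cuda,  # CUDA toolkit PyTorch was built against
    "MinkowskiEngine": ME.__version__,
}

with open("environment.json", "w") as f:
    json.dump(env, f, indent=2)
print(env)
```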
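The Experiment Setup row fixes the optimizer but not the decay factor: "decays from 8e-4 to 1e-4 every 4000 steps" reads as a stepwise decay with a 1e-4 floor. A hedged PyTorch sketch under that reading; the 0.7 decay factor, the toy model, and the placeholder loss are assumptions, not values from the paper:

```python
import torch
from torch import nn

# Toy stand-in for the restoration network; the real AuxGR model is not released.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

# Quoted setup: Adam with beta1 = 0.9, beta2 = 0.999, initial lr 8e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=8e-4, betas=(0.9, 0.999))

def lr_factor(step: int, gamma: float = 0.7, period: int = 4000,
              base_lr: float = 8e-4, min_lr: float = 1e-4) -> float:
    """Multiply lr by `gamma` (assumed value) every `period` steps,
    never dropping below the stated 1e-4 floor."""
    return max(gamma ** (step // period), min_lr / base_lr)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for step in range(64_000):  # static models train for 64,000 steps per the paper
    optimizer.zero_grad()
    loss = model(torch.randn(8, 3)).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    scheduler.step()
```

With these numbers the rate reaches the 1e-4 floor after roughly six decay periods, so the dynamic model (24,000 steps) would train mostly on the decaying segment while the static model (64,000 steps) spends most steps at the floor.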