3D Geometric Shape Assembly via Efficient Point Cloud Matching

Authors: Nahyuk Lee, Juhong Min, Junha Lee, Seungwook Kim, Kanghee Lee, Jaesik Park, Minsu Cho

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed PMTR on the large-scale 3D geometric shape assembly benchmark dataset of Breaking Bad and demonstrate its superior performance and efficiency compared to state-of-the-art methods. The experiments demonstrate that our method outperforms existing approaches by a significant margin while being computationally efficient compared to the baselines.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, POSTECH, Pohang, Korea; (2) Graduate School of Artificial Intelligence, POSTECH, Pohang, Korea; (3) Department of Computer Science and Engineering, Seoul National University, Seoul, Korea; (4) Interdisciplinary Program in Artificial Intelligence, Seoul National University, Seoul, Korea.
Pseudocode | No | The paper describes the proposed method using text and mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions a project page: "Project page: https://nahyuklee.github.io/pmtr." However, it does not contain an unambiguous statement that the source code for the described methodology is released, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | In our experiments, we utilize the Breaking Bad dataset (Sellán et al., 2022), a large-scale dataset of fractured objects for the task of geometric shape assembly, which consists of over 1 million fractured objects simulated from 10K meshes of PartNet (Mo et al., 2019) and Thingi10K (Zhou & Jacobson, 2016).
Dataset Splits | No | The paper mentions using the Breaking Bad dataset for training and evaluation but does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit references to predefined splits).
Hardware Specification | Yes | Experiments were conducted on a machine with an Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz and an NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper states "We implement our PMTR using PyTorch Lightning (Falcon & team, 2019)." but does not provide specific version numbers for PyTorch Lightning or any other key software dependencies.
Experiment Setup | Yes | For all experiments, except the ones that include GeoTransformer, we use the ADAM (Kingma & Ba, 2015) optimizer with a learning rate of 1 × 10⁻³ for 150 epochs. For GeoTransformer, we use identical settings but reduce the learning rate to 1 × 10⁻⁴ to prevent model divergence. We utilize KPConv-FPN (Thomas et al., 2019) with a subsampling radius of 0.01. The number of attention heads N_h is set to 4. Refer to Tab. 8 for the rest of the hyperparameters.
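
To make the reported training configuration concrete, the sketch below shows how these optimizer settings could be expressed in a PyTorch Lightning module. This is an illustrative reconstruction, not the authors' released code: the class name AssemblyModule, the use_geotransformer flag, and the placeholder layer are assumptions; only the hyperparameter values (Adam, learning rates of 1e-3 and 1e-4, 150 epochs) come from the paper's description.

```python
import torch
import pytorch_lightning as pl


class AssemblyModule(pl.LightningModule):
    """Hypothetical module illustrating the optimizer settings reported in the paper."""

    def __init__(self, use_geotransformer: bool = False):
        super().__init__()
        self.use_geotransformer = use_geotransformer
        # Placeholder parameters so the optimizer has something to update;
        # the actual model uses a KPConv-FPN backbone and 4-head attention.
        self.dummy = torch.nn.Linear(3, 3)

    def configure_optimizers(self):
        # Paper: Adam with lr 1e-3 for 150 epochs; lr is lowered to 1e-4
        # for the GeoTransformer variant to prevent model divergence.
        lr = 1e-4 if self.use_geotransformer else 1e-3
        return torch.optim.Adam(self.parameters(), lr=lr)


# 150 training epochs, as stated in the experiment setup.
trainer = pl.Trainer(max_epochs=150)
```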