Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PuzzleFusion++: Auto-agglomerative 3D Fracture Assembly by Denoise and Verify

Authors: Zhengqing Wang, Jiacheng Chen, Yasutaka Furukawa

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the Breaking Bad dataset show that PuzzleFusion++ outperforms all other state-of-the-art techniques by significant margins across all metrics, in particular by over 10% in part accuracy and 50% in Chamfer distance.
Researcher Affiliation | Collaboration | Zhengqing Wang (1), Jiacheng Chen (1), Yasutaka Furukawa (1, 2); 1: Simon Fraser University, 2: Wayve
Pseudocode | No | The paper describes the methodology using architectural diagrams (Figures 2 and 3) and prose, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and models are available on our project page: https://puzzlefusion-plusplus.github.io.
Open Datasets | Yes | Following recent works (Wu et al., 2023; Lu et al., 2023), we use the Breaking Bad dataset (Sellán et al., 2022).
Dataset Splits | Yes | Specifically, 34,075 assemblies from 407 objects in the everyday subset are for training. 7,679 assemblies from 91 objects in the everyday subset and 3,651 assemblies from 40 uncategorized objects in the artifact subset are for testing.
Hardware Specification | Yes | We use a server with four NVIDIA RTX A6000 GPUs for experiments.
Software Dependencies | No | The paper mentions using 'pytorch3d.ops' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | The denoiser is trained for 2000 epochs with a batch size of 64. The initial learning rate is 2e-4 and decays by a factor of 10 at epochs 1200 and 1700. The AdamW optimizer is used with a weight decay of 1e-6. The verifier is trained for 100 epochs using the same training settings as the denoiser.
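The experiment-setup quote describes a stepwise learning-rate schedule (initial rate 2e-4, divided by 10 at epochs 1200 and 1700). A minimal sketch of that schedule as a plain function, assuming the decay is applied from each milestone epoch onward (the paper does not specify the exact boundary convention, and the function name here is illustrative, not from the paper's code):

```python
def denoiser_lr(epoch, base_lr=2e-4, milestones=(1200, 1700), gamma=0.1):
    """Stepwise decay: multiply the base rate by gamma once per milestone reached.

    Mirrors the reported schedule: 2e-4 initially, 2e-5 from epoch 1200,
    2e-6 from epoch 1700, over the 2000-epoch training run.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Spot-check the three regimes of the reported schedule.
print(denoiser_lr(0))     # 2e-4 (initial rate)
print(denoiser_lr(1200))  # 2e-5 (after first decay)
print(denoiser_lr(1999))  # 2e-6 (after second decay)
```

In a PyTorch training loop this behavior would typically come from `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[1200, 1700]` and `gamma=0.1` wrapped around an AdamW optimizer.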