PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion Models

Authors: Jiacheng Chen, Ruizhi Deng, Yasutaka Furukawa

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have evaluated our approach for reconstructing two types of polygonal shapes: floorplan as a set of polygons and HD map for autonomous cars as a set of polylines. Through extensive experiments on standard benchmarks, we demonstrate that PolyDiffuse significantly advances the current state of the art and enables broader practical applications.
Researcher Affiliation | Academia | Jiacheng Chen, Ruizhi Deng, Yasutaka Furukawa (Simon Fraser University)
Pseudocode | Yes | Algorithm 1 Guidance training (stage 1) ... Algorithm 2 Denoising training (stage 2) (a hedged sketch of this two-stage training schedule follows the table)
Open Source Code | Yes | The code and data are available on our project page: https://poly-diffuse.github.io.
Open Datasets | Yes | Structured3D dataset [48] contains 3500 indoor scenes (3000/250/250 for training/validation/test) with diverse house floorplans. ... The nuScenes dataset [2] provides a standard benchmark for HD map reconstruction.
Dataset Splits | Yes | Structured3D dataset [48] contains 3500 indoor scenes (3000/250/250 for training/validation/test) with diverse house floorplans. (a split-configuration sketch follows the table)
Hardware Specification | Yes | We have implemented the system with PyTorch and used a machine with 4 NVIDIA RTX A5000 GPUs.
Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number. It also mentions borrowing the codebase of Karras et al. [19] without specifying versions for that framework or other libraries.
Experiment Setup | Yes | The loss weights for the guidance training are λ1 = 1, λ2 = 0.05, λ3 = 0.1. ... Adam optimizer is employed with a learning rate of 2e-4 and a weight decay rate of 1e-4. ... We employ an Adam optimizer with a base learning rate of 6e-4 and a weight decay factor of 1e-4. A cosine learning rate scheduler is used. (an optimizer-configuration sketch follows the table)
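The Pseudocode row names a two-stage training schedule (Algorithm 1: guidance training, Algorithm 2: denoising training) without reproducing its steps. The following is a minimal sketch of that structure, assuming a PyTorch-style setup and assuming the guidance network is kept fixed during stage 2; the network classes, the `losses` and `denoising_loss` methods, and the data loader are hypothetical placeholders, not the authors' implementation. The stage-1 loss weights default to the values quoted in the Experiment Setup row.

```python
def train_guidance_stage(guidance_net, optimizer, loader, epochs,
                         lam1=1.0, lam2=0.05, lam3=0.1):
    """Stage 1 (cf. Algorithm 1): train the guidance network with three
    weighted loss terms (lambda_1 = 1, lambda_2 = 0.05, lambda_3 = 0.1)."""
    guidance_net.train()
    for _ in range(epochs):
        for batch in loader:
            # `losses` is a hypothetical method returning the three loss terms.
            l1, l2, l3 = guidance_net.losses(batch)
            loss = lam1 * l1 + lam2 * l2 + lam3 * l3
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


def train_denoising_stage(denoiser, guidance_net, optimizer, scheduler,
                          loader, epochs):
    """Stage 2 (cf. Algorithm 2): train the denoising network; the
    stage-1 guidance network is assumed to be frozen here."""
    guidance_net.eval()
    denoiser.train()
    for _ in range(epochs):
        for batch in loader:
            # `denoising_loss` is a hypothetical method wrapping the
            # diffusion training objective.
            loss = denoiser.denoising_loss(batch, guidance_net)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
```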
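The dataset rows quote the split sizes verbatim. A tiny configuration sketch that encodes them is given below as a sanity check that the counts add up to the 3500 Structured3D scenes reported; the names and structure are illustrative, not from the paper.

```python
# Illustrative split configuration; only the counts come from the quoted text.
STRUCTURED3D_SPLITS = {"train": 3000, "val": 250, "test": 250}
NUSCENES_SPLITS = "official nuScenes benchmark splits"

assert sum(STRUCTURED3D_SPLITS.values()) == 3500  # total Structured3D scenes
```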
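The Experiment Setup row quotes concrete optimizer settings for the two training stages. A minimal sketch of how these could be instantiated in standard PyTorch is shown below; the placeholder modules and the epoch count are assumptions, not values from the paper.

```python
import torch

guidance_net = torch.nn.Linear(8, 8)  # placeholder for the guidance network
denoiser = torch.nn.Linear(8, 8)      # placeholder for the denoising network
num_epochs = 100                      # placeholder; not stated in the quoted text

# Guidance training (stage 1): Adam, learning rate 2e-4, weight decay 1e-4.
guidance_opt = torch.optim.Adam(
    guidance_net.parameters(), lr=2e-4, weight_decay=1e-4)

# Denoising training (stage 2): Adam, base learning rate 6e-4,
# weight decay 1e-4, with a cosine learning-rate schedule.
denoise_opt = torch.optim.Adam(
    denoiser.parameters(), lr=6e-4, weight_decay=1e-4)
denoise_sched = torch.optim.lr_scheduler.CosineAnnealingLR(
    denoise_opt, T_max=num_epochs)
```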