UCSG-NET: Unsupervised Discovering of Constructive Solid Geometry Tree

Authors: Kacper Kania, Maciej Zięba, Tomasz Kajdanowicz

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method on 2D and 3D autoencoding tasks. We show that the predicted parse tree representation is interpretable and can be used in CAD software. We evaluate our approach on 2D autoencoding and 3D autoencoding tasks, and compare the results with state-of-the-art reference approaches for object reconstruction: CSG-NET [8] for the 2D task, and VP [37], SQ [37], BAE [41] and BSP-NET [5] for 3D tasks.
Researcher Affiliation | Collaboration | Kacper Kania (1), Maciej Zięba (1,2), Tomasz Kajdanowicz (1); 1: Wrocław University of Science and Technology; 2: Tooploox. Contact: kacp.kania@gmail.com (now at Warsaw University of Technology).
Pseudocode | No | The paper describes the model architecture and processes but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We published our code at https://github.com/kacperkan/ucsgnet
Open Datasets | Yes | For this experiment, we used the CAD dataset [7] consisting of 8,000 CAD shapes in three categories: chairs, desks, and lamps. For the 3D autoencoding task, we train the model on 64³ volumes of voxelized shapes in the ShapeNet dataset. The data was provided by Chen et al. [5] and is based on the 13 most common classes in the ShapeNet dataset [13].
Dataset Splits | Yes | We compare our method with CSG-NETSTACK [8], an improved version of CSG-NET [7], on the same validation split. To speed up the training, we applied an early stopping heuristic and stop after 40 epochs of no improvement on the L_total loss. Finally, we show an example parse tree in Figure 6, used to reconstruct an example shape from the validation set.
Hardware Specification | Yes | Training takes about two days on an Nvidia Titan RTX GPU.
Software Dependencies | No | The paper mentions the libigl library and the Adam optimizer but does not specify version numbers for any software components, which is required for reproducibility.
Experiment Setup | Yes | We set 2 CSG layers for our method, where each outputs 16 shapes in total. The decoder predicts parameters of 16 circles and 16 rectangles. We used 5 CSG layers to increase the diversity of predictions and set 64 parameters of spheres and boxes to handle the complex nature of the dataset. We sample 16,384 points as ground truth, with a higher probability of sampling near the surface. To speed up the training, we applied an early stopping heuristic and stop after 40 epochs of no improvement on the L_total loss. We set λ_T = λ_α = 0.1, and λ_τ = 0.1 for all experiments. During experiments, we initialize them to α = 1 and τ^(l) = 2.
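The hyperparameters quoted in the experiment-setup row can be collected into a small configuration sketch. This is a minimal illustration, not the authors' released code: every name here (`UCSGConfig`, `patience`, the field names) is hypothetical, and only the numeric values come from the paper as quoted above.

```python
# Hypothetical config mirroring the hyperparameters reported for UCSG-NET.
# Names are illustrative; values are taken from the quoted experiment setup.
from dataclasses import dataclass


@dataclass
class UCSGConfig:
    csg_layers: int               # number of CSG layers in the model
    shapes_per_layer: int         # shapes each CSG layer outputs
    num_circles: int = 0          # 2D primitives predicted by the decoder
    num_rectangles: int = 0
    num_spheres: int = 0          # 3D primitives predicted by the decoder
    num_boxes: int = 0
    sampled_points: int = 16_384  # ground-truth points, denser near the surface
    lambda_tau: float = 0.1       # λ_τ, used for all experiments
    alpha_init: float = 1.0       # initial α
    tau_init: float = 2.0         # initial per-layer temperature τ^(l)
    patience: int = 40            # early stopping: epochs without L_total improvement


# Settings reported for the 2D (CAD) and 3D (ShapeNet) experiments:
cfg_2d = UCSGConfig(csg_layers=2, shapes_per_layer=16,
                    num_circles=16, num_rectangles=16)
cfg_3d = UCSGConfig(csg_layers=5, shapes_per_layer=64,
                    num_spheres=64, num_boxes=64)
```

Grouping the values this way also makes the two-day Titan RTX training runs easier to reproduce or vary, since every reported knob lives in one place.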