D$^2$CSG: Unsupervised Learning of Compact CSG Trees with Dual Complements and Dropouts
Authors: Fenggen Yu, Qimin Chen, Maham Tanveer, Ali Mahdavi Amiri, Hao Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate both quantitatively and qualitatively that D$^2$CSG produces compact CSG reconstructions with superior quality and more natural primitives than all existing alternatives, especially over complex and high-genus CAD shapes. |
| Researcher Affiliation | Collaboration | Fenggen Yu¹, Qimin Chen¹, Maham Tanveer¹, Ali Mahdavi Amiri¹, Hao Zhang¹,² (¹Simon Fraser University, ²Amazon) |
| Pseudocode | No | The paper includes mathematical equations and descriptions of processes, but it does not contain any formally labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We demonstrate both quantitatively and qualitatively that our network, when trained on ABC [21] or ShapeNet [2], produces CSG reconstructions with superior quality, more natural trees, and better quality-compactness trade-off than all existing alternatives [3, 6, 57]. |
| Dataset Splits | No | The paper specifies a test set but does not give explicit details about a separate validation set or the percentages/counts of a training split. It notes that it "train[s] D$^2$CSG per shape", which implies a setup different from the traditional train/validation splits used for a single generalized model. |
| Hardware Specification | Yes | All experiments were performed using an Nvidia GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper mentions software components and frameworks indirectly, but it does not provide specific version numbers for the software dependencies used in its implementation, which is necessary for reproducibility. |
| Experiment Setup | Yes | In our experiments to be presented in this section, we set the number of maximum primitives as p = 512 and the number of maximum intersections as c = 32 for each branch to support complex shapes. The size of our latent code is 256 and a two-layer MLP is used to predict the parameters of the primitives from the input feature code. We train D$^2$CSG per shape by optimizing the latent code, primitive prediction network, intersection layer, and union layer. To evaluate D$^2$CSG against prior methods, which all require an additional time-consuming optimization at test time to achieve satisfactory results (e.g., 30 min per shape for CSG-Stump), we have randomly selected a moderately sized subset of shapes as a test set for evaluation: 500 shapes from ABC, and 50 from each of the 13 categories of ShapeNet (650 shapes in total). In addition, we ensured that 80% of the selected shapes from ABC have genus larger than two with more than 10K vertices to include complex structures. All experiments were performed using an Nvidia GeForce RTX 2080 Ti GPU. |
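
The experiment-setup row above names the main hyperparameters (p = 512 primitives, c = 32 intersections per branch, a 256-dimensional latent code, a two-layer primitive-prediction MLP) and the per-shape optimization targets. The PyTorch-style sketch below shows how such a setup could be wired together; it is a minimal illustration under stated assumptions, not the authors' implementation: the class names, the 7-value primitive parameterization, the MLP hidden width, the BSP-Net-style soft intersection/union, and the learning rate are all assumptions, and the paper's dual-complement branches and dropouts are omitted.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the experiment-setup row above.
P = 512        # maximum number of primitives
C = 32         # maximum number of intersection nodes per branch
LATENT = 256   # size of the per-shape latent code

# Assumed, illustrative per-primitive parameter count (the paper's exact
# primitive parameterization is not specified in the excerpt above).
PARAMS_PER_PRIMITIVE = 7


class PrimitiveMLP(nn.Module):
    """Two-layer MLP mapping the shape latent code to primitive parameters."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 1024),
            nn.LeakyReLU(),
            nn.Linear(1024, P * PARAMS_PER_PRIMITIVE),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z).view(P, PARAMS_PER_PRIMITIVE)


class CSGBranch(nn.Module):
    """One branch: a soft intersection layer followed by a soft union layer.

    The BSP-Net-style aggregation below is an illustrative stand-in for the
    paper's actual CSG layers.
    """

    def __init__(self):
        super().__init__()
        self.intersection_weights = nn.Parameter(0.01 * torch.randn(P, C))
        self.union_weights = nn.Parameter(0.01 * torch.randn(C))

    def forward(self, primitive_occupancy: torch.Tensor) -> torch.Tensor:
        # primitive_occupancy: (num_points, P), approx. 1 inside / 0 outside.
        sel = torch.sigmoid(self.intersection_weights)   # (P, C) soft selection
        # Soft intersection: a point is inside an intersection node only if it
        # lies inside every selected primitive.
        outside = (1.0 - primitive_occupancy) @ sel      # (num_points, C)
        inter = 1.0 - torch.clamp(outside, max=1.0)
        # Soft union over the C intersection nodes of this branch.
        w = torch.sigmoid(self.union_weights)            # (C,)
        return (inter * w).max(dim=1).values             # (num_points,)


# Per-shape optimization: the latent code, primitive prediction network, and
# the intersection/union layers are all optimized jointly for a single shape.
latent_code = nn.Parameter(torch.randn(LATENT))
primitive_net = PrimitiveMLP()
branch = CSGBranch()

optimizer = torch.optim.Adam(
    [latent_code, *primitive_net.parameters(), *branch.parameters()], lr=1e-4
)
```

In this reading, per-shape training simply runs the optimizer over all three groups of parameters against an occupancy/reconstruction loss for the single target shape, which matches the row's statement that the latent code, primitive network, and CSG layers are optimized together rather than trained once over a dataset.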