Discovering Design Concepts for CAD Sketches

Authors: Yuezhi Yang, Hao Pan

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on large-scale sketch datasets [17]. The learned sketch concepts provide modular interpretation of design sketches. The network can also be trained on incomplete input sketches and learn to auto-complete them. Comparisons with state-of-the-art approaches that solve sketch graph generation through autoregressive models show that the modular sketch concepts learned by our approach enable more accurate and interpretable completion results.
Researcher Affiliation | Collaboration | Yuezhi Yang (The University of Hong Kong; Microsoft Research Asia), yzyang@cs.hku.hk; Hao Pan (Microsoft Research Asia), haopan@microsoft.com
Pseudocode | No | The paper includes a "List 1", which defines a domain-specific language syntax, but it is not pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | "We defer network details to the supplementary and open-source code and data to facilitate future research." URL to code and data: https://github.com/yyuezhi/SketchConcept
Open Datasets | Yes | Following previous works [6, 13, 18], we adopt the SketchGraphs dataset [17], which contains millions of real-world CAD sketches for training and evaluation. [17] Ari Seff, Yaniv Ovadia, Wenda Zhou, and Ryan P. Adams. SketchGraphs: A large-scale dataset for modeling relational geometry in computer-aided design. In ICML 2020 Workshop on Object-Oriented Learning, 2020.
Dataset Splits | Yes | We filter the data by removing trivially simple sketches and duplicates, and limit the sketch complexity such that the number of primitives and constraints is within [20, 50]. As a result, we obtain around 1 million sketches and randomly split them into 950k for training and 50k for testing.
Hardware Specification | No | The paper states: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Please see Appendix." However, the provided text does not include the appendix, so no specific hardware details are available in the main body.
Software Dependencies | No | The paper mentions "we implement the parameter network as a transformer decoder in a similar way as [2]" but does not specify software names with version numbers for reproducibility.
Experiment Setup | Yes | The entire model is trained end-to-end by reconstruction and modularity objectives. In particular, we design loss functions that measure differences between the generated and ground-truth sketch graphs, in terms of both per-element attributes and pairwise references. Given our explicit modeling of encapsulated structures of the learned concepts, we can further enhance the modularity of the generation by introducing a bias loss that encourages in-concept references. We denote the average loss of all generated terms as L_recon. The total objective is L_total = w_recon L_recon + w_sharp L_sharp + w_vq L_vq + w_bias L_bias (Eq. 7), where we empirically use weights w_recon = 1, w_sharp = 20, w_vq = 1, w_bias = 25 throughout all experiments unless otherwise specified in the ablation studies.
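The weighted objective quoted in the Experiment Setup row (Eq. 7) can be sketched as a small helper. The weight values are the paper's reported defaults; the function name and scalar-valued losses are illustrative assumptions (in the paper the terms are tensor-valued reconstruction, sharpness, vector-quantization, and in-concept-bias losses).

```python
# Illustrative sketch of the Eq. 7 training objective; the component
# losses are stand-in scalars, not the authors' implementation.
W = {"recon": 1.0, "sharp": 20.0, "vq": 1.0, "bias": 25.0}  # paper defaults

def total_loss(l_recon, l_sharp, l_vq, l_bias, w=W):
    """L_total = w_recon*L_recon + w_sharp*L_sharp + w_vq*L_vq + w_bias*L_bias."""
    return (w["recon"] * l_recon + w["sharp"] * l_sharp
            + w["vq"] * l_vq + w["bias"] * l_bias)
```

With unit component losses this yields 1 + 20 + 1 + 25 = 47, making the dominance of the sharpness and bias terms in the weighting easy to see.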
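The filtering and split described in the Dataset Splits row could be reproduced along these lines. The function `filter_and_split`, the dict layout of a sketch, and the deduplication key are assumptions for illustration, not the authors' released preprocessing code.

```python
# Hypothetical sketch of the preprocessing described in the Dataset Splits
# row: keep sketches whose primitive+constraint count lies in [20, 50],
# drop duplicates, then randomly split into train/test sets.
import random

def filter_and_split(sketches, lo=20, hi=50, test_size=50_000, seed=0):
    seen, kept = set(), []
    for s in sketches:
        n = len(s["primitives"]) + len(s["constraints"])
        key = repr((s["primitives"], s["constraints"]))  # crude dedup key
        if lo <= n <= hi and key not in seen:
            seen.add(key)
            kept.append(s)
    random.Random(seed).shuffle(kept)
    return kept[test_size:], kept[:test_size]  # train, test
```

Applied to the full SketchGraphs corpus with `test_size=50_000`, this mirrors the reported 950k/50k split; the seed and dedup strategy are unspecified in the quoted text.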