SketchGen: Generating Constrained CAD Sketches

Authors: Wamiq Para, Shariq Bhat, Paul Guerrero, Tom Kelly, Niloy Mitra, Leonidas J. Guibas, Peter Wonka

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our model by demonstrating constraint prediction for given sets of primitives and full sketch generation from scratch, showing that our approach significantly outperforms the state-of-the-art in CAD sketch generation. We evaluate SketchGen on one of the largest publicly available constrained sketch datasets, SketchGraphs [27]. We compare with a set of existing and contemporary works, and report noticeable improvement in the quality of the generated sketches. We also present ablation studies to evaluate the efficacy of our various design choices.
Researcher Affiliation | Collaboration | Wamiq Reyaz Para (KAUST), Shariq Farooq Bhat (KAUST), Paul Guerrero (Adobe Research), Tom Kelly (University of Leeds), Niloy Mitra (Adobe Research, University College London), Leonidas Guibas (Stanford University), Peter Wonka (KAUST)
Pseudocode | No | The paper describes the model architecture and generation process in detail, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository for their method.
Open Datasets | Yes | We train and evaluate on the recent SketchGraphs dataset [27], which contains 15 million real-world CAD sketches (licensed without any usage restriction), obtained from Onshape [2], a web-based CAD modeling platform.
Dataset Splits | Yes | We keep aside a random subset of 50k samples as a validation set and 86k samples as a test set.
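The hold-out described above amounts to a random partition of sample indices. A minimal sketch, assuming sizes as quoted; the seed and function name are placeholders, since the paper does not specify how the split was drawn:

```python
import random

def split_indices(n_total, n_val=50_000, n_test=86_000, seed=0):
    """Randomly partition sample indices into disjoint train/val/test sets,
    mirroring the 50k-validation / 86k-test hold-out quoted above.
    The seed is illustrative; the paper does not report one."""
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)
    val = idx[:n_val]
    test = idx[n_val:n_val + n_test]
    train = idx[n_val + n_test:]
    return train, val, test
```

With the dataset's roughly 15 million sketches, this leaves the overwhelming majority of samples for training.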
Hardware Specification | Yes | Training was performed for 40 epochs on 8 V100 GPUs for the primitive model and for 80 epochs on 8 A100 GPUs for the constraint model.
Software Dependencies | No | The paper states: "We implemented our models in PyTorch [24], using GPT-2 [25] like Transformer blocks." However, it does not provide specific version numbers for PyTorch or any other libraries used.
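For reference, a GPT-2-like Transformer block (pre-LayerNorm, causal multi-head self-attention, 4x-expansion MLP) can be sketched in plain NumPy. This is a generic illustration of the block type the paper names, not SketchGen's actual implementation; the weight names are invented, and ReLU stands in for GPT-2's GELU:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def gpt2_block(x, params, n_heads):
    """One pre-norm decoder block on a (T, d) sequence: causal
    self-attention plus a 4x MLP, each wrapped in a residual connection."""
    T, d = x.shape
    dh = d // n_heads
    # --- causal multi-head self-attention ---
    h = layer_norm(x)
    q, k, v = np.split(h @ params["w_qkv"], 3, axis=-1)       # each (T, d)
    q = q.reshape(T, n_heads, dh).transpose(1, 0, 2)          # (heads, T, dh)
    k = k.reshape(T, n_heads, dh).transpose(1, 0, 2)
    v = v.reshape(T, n_heads, dh).transpose(1, 0, 2)
    att = q @ k.transpose(0, 2, 1) / np.sqrt(dh)              # (heads, T, T)
    mask = np.triu(np.ones((T, T)), k=1).astype(bool)         # block j > i
    att = softmax(np.where(mask, -1e9, att))
    out = (att @ v).transpose(1, 0, 2).reshape(T, d) @ params["w_proj"]
    x = x + out                                               # residual 1
    # --- position-wise MLP (ReLU here; GPT-2 uses GELU) ---
    h = layer_norm(x)
    h = np.maximum(0.0, h @ params["w_fc1"]) @ params["w_fc2"]
    return x + h                                              # residual 2
```

Stacking 22-24 such blocks, as reported in the experiment setup below, gives the overall model depth.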
Experiment Setup | Yes | For primitive generation, we use 24 blocks, 12 attention heads, an embedding dimension of 528 and a batch size of 544. For constraint generation, the encoder has 22 layers and the pointer network 16 layers. Both have 12 attention heads, an embedding dimension of 264 and use a batch size of 1536. We use the Adam optimizer [12] with a learning rate of 0.0001.
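As a sanity check on these hyperparameters, GPT-2-style blocks cost roughly 12·d² parameters each (about 4·d² for the attention QKV and output projections, 8·d² for a 4x-expansion MLP), ignoring embeddings, biases, and LayerNorm. Under that assumption (the paper does not report parameter counts, so these are order-of-magnitude estimates only):

```python
def approx_block_params(n_blocks, d_model):
    """Rough parameter count for n_blocks GPT-2-style Transformer blocks:
    attention (QKV + output projection) ~ 4*d^2, 4x MLP ~ 8*d^2 per block.
    Ignores embeddings, biases, and LayerNorm parameters."""
    return n_blocks * 12 * d_model * d_model

primitive_params = approx_block_params(24, 528)        # primitive model
constraint_params = approx_block_params(22 + 16, 264)  # encoder + pointer network

print(f"primitive  ~{primitive_params / 1e6:.1f}M")    # ~80.3M
print(f"constraint ~{constraint_params / 1e6:.1f}M")   # ~31.8M
```

Both models are thus in the tens-of-millions-of-parameters range, consistent with training on 8 GPUs with the large batch sizes reported.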