Hierarchical Neural Coding for Controllable CAD Model Generation
Authors: Xiang Xu, Pradeep Kumar Jayaraman, Joseph George Lambourne, Karl D.D. Willis, Yasutaka Furukawa
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate superior performance on conventional tasks such as unconditional generation while enabling novel interaction capabilities on conditional generation tasks. |
| Researcher Affiliation | Collaboration | Simon Fraser University, Canada; Autodesk Research. Correspondence to: Xiang Xu <xuxiangx@sfu.ca>. |
| Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 4) and descriptions of model components, but no formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | The code is available at https://github.com/samxuxiang/hnc-cad. |
| Open Datasets | Yes | We use the large-scale DeepCAD dataset (Wu et al., 2021) with ground-truth sketch-and-extrude models. DeepCAD contains 178,238 sketch-and-extrude models with a split of 90% train, 5% validation, and 5% test samples. |
| Dataset Splits | Yes | DeepCAD contains 178,238 sketch-and-extrude models with a split of 90% train, 5% validation, and 5% test samples. |
| Hardware Specification | Yes | Models are trained on an Nvidia RTX A6000 GPU with a batch size of 256. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer and Transformer backbones, but does not specify version numbers for any software, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | Input embedding dimension is 256. Feed-forward dimension is 512. Dropout rate is 0.1. Each Transformer network in the generation module has 6 layers with 8 attention heads. The codebook learning networks have 4 layers. We use the AdamW (Loshchilov & Hutter, 2018) optimizer with a learning rate of 0.001 after linear warm-up for 2000 steps. (See the sketch after this table.) |
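
For readers trying to reproduce the setup in the last row, the following is a minimal PyTorch sketch of how those reported hyperparameters might be wired together. The module layout, warm-up schedule shape, and variable names are assumptions for illustration; the paper reports only the figures, and the released repository remains the authoritative reference.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper's experiment setup.
EMBED_DIM = 256      # input embedding dimension
FFN_DIM = 512        # feed-forward dimension
NUM_LAYERS = 6       # layers per Transformer in the generation module
NUM_HEADS = 8        # attention heads
DROPOUT = 0.1
WARMUP_STEPS = 2000
BASE_LR = 1e-3
BATCH_SIZE = 256     # trained on a single Nvidia RTX A6000 GPU

# Illustrative Transformer stack; the actual model architecture is more involved.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=EMBED_DIM,
    nhead=NUM_HEADS,
    dim_feedforward=FFN_DIM,
    dropout=DROPOUT,
    batch_first=True,
)
model = nn.TransformerEncoder(encoder_layer, num_layers=NUM_LAYERS)

optimizer = torch.optim.AdamW(model.parameters(), lr=BASE_LR)

# Linear warm-up to the base learning rate over the first 2000 steps,
# held constant afterwards (the paper does not specify a decay schedule).
def warmup_lambda(step: int) -> float:
    return min(1.0, (step + 1) / WARMUP_STEPS)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup_lambda)

# One illustrative optimization step on dummy token embeddings.
x = torch.randn(BATCH_SIZE, 64, EMBED_DIM)  # (batch, sequence, embedding)
loss = model(x).pow(2).mean()               # placeholder loss, not the paper's objective
loss.backward()
optimizer.step()
scheduler.step()
optimizer.zero_grad()
```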