CADParser: A Learning Approach of Sequence Modeling for B-Rep CAD
Authors: Shengdi Zhou, Tianyi Tang, Bin Zhou
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our method can compete with the existing state-of-the-art methods quantitatively and qualitatively. |
| Researcher Affiliation | Academia | Shengdi Zhou¹, Tianyi Tang², and Bin Zhou¹ — ¹State Key Laboratory of Virtual Reality Technology and Systems, Beihang University; ²University of Waterloo. {zhoushengdi9, tianyitangdhr}@gmail.com, zhoubin@buaa.edu.cn |
| Pseudocode | No | The paper describes the method and network architecture using text and diagrams but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'Data is available at https://drive.google.com/CADParser Data' but does not mention the availability of the source code for CADParser. |
| Open Datasets | Yes | Data is available at https://drive.google.com/CADParser Data |
| Dataset Splits | No | The paper states 'We split our collected models into training and test set, where the test set counts 1000.' but does not explicitly mention a separate validation dataset split or its size. |
| Hardware Specification | Yes | We train our networks for 100 epochs with a total batch size of 96 on one 1080Ti GPU |
| Software Dependencies | No | The paper mentions using the AdamW optimizer but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow, CUDA). |
| Experiment Setup | Yes | We use the AdamW [Loshchilov and Hutter, 2017] optimizer with an initial learning rate of 10⁻³, reduced by a factor of 0.9 every 30 epochs, and a linear warmup period of 10 initial epochs. We use a dropout rate of 0.1 in all transformer layers and a gradient clipping of 1.0. We train our networks for 100 epochs with a total batch size of 96 on one 1080Ti GPU. |
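
The quoted training configuration maps onto standard PyTorch components. Below is a minimal sketch of one way to realize it; the transformer dimensions, the dummy data, the linear head, and the loss are placeholders for illustration, not the CADParser architecture, and the `start_factor` of the warmup is an assumption since the paper does not specify it.

```python
import torch
import torch.nn as nn

# Placeholder model: a small transformer encoder using the quoted 0.1 dropout.
# Sizes are illustrative, not the CADParser network.
encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, dropout=0.1)
model = nn.TransformerEncoder(encoder_layer, num_layers=2)
head = nn.Linear(64, 1)  # hypothetical output head
params = list(model.parameters()) + list(head.parameters())

# AdamW with initial learning rate 1e-3, as quoted from the paper.
optimizer = torch.optim.AdamW(params, lr=1e-3)

# Linear warmup over the first 10 epochs, then decay by a factor of 0.9
# every 30 epochs, matching the quoted schedule.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=10)
decay = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.9)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, decay], milestones=[10])

# Dummy batch standing in for CAD sequence data (batch size 96, as quoted).
x = torch.randn(16, 96, 64)       # (seq_len, batch, d_model)
target = torch.randn(16, 96, 1)   # placeholder regression target

for epoch in range(100):  # 100 epochs, as quoted
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(head(model(x)), target)
    loss.backward()
    # Gradient clipping at 1.0, as quoted.
    torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)
    optimizer.step()
    scheduler.step()  # schedule is stepped once per epoch
```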