3D Assembly Completion
Authors: Weihao Wang, Rufeng Zhang, Mingyu You, Hongjun Zhou, Bin He
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive comparisons with several baseline methods and ablation studies, demonstrating the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | Weihao Wang, Rufeng Zhang, Mingyu You*, Hongjun Zhou, Bin He; College of Electronic and Information Engineering, Tongji University, Shanghai 201804, China. {wwhtju,myyou,zhouhongjun,hebin}@tongji.edu.cn, cxrfzhang@foxmail.com |
| Pseudocode | No | The paper describes its methods in narrative text and mathematical equations, but it does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | We evaluate the proposed method on the PartNet (Mo et al. 2019) dataset, a large-scale synthetic dataset of 3D shapes annotated with instance-level and hierarchical 3D part information. |
| Dataset Splits | Yes | We choose the three largest categories of 6,323 chairs, 8,218 tables, and 2,207 lamps with the most fine-grained level of segmentation and follow the default train/val/test splits of 70%/20%/10%. (See the split-size sketch after this table.) |
| Hardware Specification | No | We train FiT with the AdamW optimizer with an initial learning rate of 1.5 × 10⁻⁴ for 500 epochs on 8 GPUs. The paper mentions the number of GPUs but lacks specific details on their model or other hardware components. |
| Software Dependencies | No | The paper mentions optimizers and references other works (e.g., PointNet, Transformers) but does not provide version numbers for any software dependencies, libraries, or programming languages used in the implementation. |
| Experiment Setup | Yes | We train FiT with the AdamW optimizer with an initial learning rate of 1.5 × 10⁻⁴ for 500 epochs on 8 GPUs. Batch size is set to 64. [...] We set the original toolkit with a size of M = 10. For the blended toolkit, we set M = 30, m1 = 10 and m2 = 20. [...] where the threshold τ_p is set to 0.01. [...] The threshold τ_c is set to 0.01. (A hedged sketch of this training configuration follows the table.) |
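
To make the quoted splits concrete, the sketch below computes the per-category split sizes implied by the 70%/20%/10% train/val/test ratio. This is an illustration, not the paper's code: the paper follows PartNet's default splits, and the flooring-plus-remainder rounding used here is an assumption.

```python
# Hedged sketch: per-category split sizes implied by the quoted 70%/20%/10%
# train/val/test ratio. Shape counts come from the quote above; the rounding
# scheme (floor train/val, remainder to test) is an assumption, not PartNet's
# exact official split.
categories = {"chair": 6323, "table": 8218, "lamp": 2207}

for name, total in categories.items():
    train = int(0.7 * total)        # 70% train
    val = int(0.2 * total)          # 20% val
    test = total - train - val      # remaining ~10% test
    print(f"{name}: train={train}, val={val}, test={test}")
```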
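
The experiment-setup row quotes enough hyperparameters to reconstruct the optimizer wiring. The sketch below is a minimal single-device PyTorch rendering of those settings; since the paper releases no code, the network, data, and loss are explicitly labeled stand-ins, and only AdamW, the 1.5 × 10⁻⁴ learning rate, batch size 64, and 500 epochs come from the paper.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class FiTStub(nn.Module):
    """Stand-in for the unpublished FiT network, used only so the
    optimizer wiring below runs end to end."""
    def __init__(self, dim=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, pts):
        return self.net(pts)

# Dummy tensors standing in for PartNet part point clouds (xyz points).
points = torch.randn(256, 1024, 3)
loader = DataLoader(TensorDataset(points, points), batch_size=64, shuffle=True)

model = FiTStub()
# Quoted settings: AdamW optimizer, initial learning rate 1.5e-4.
optimizer = torch.optim.AdamW(model.parameters(), lr=1.5e-4)

for epoch in range(500):     # paper: 500 epochs (on 8 GPUs; single device here)
    for src, tgt in loader:  # batch size 64, per the paper
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(src), tgt)  # placeholder loss
        loss.backward()
        optimizer.step()
```

The distributed 8-GPU setup and the actual completion loss are not specified in the quoted text, so neither is modeled here.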