Unsupervised Learning of Shape Programs with Repeatable Implicit Parts
Authors: Boyang Deng, Sumith Kulal, Zhengyang Dong, Congyue Deng, Yonglong Tian, Jiajun Wu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical studies show that ProGRIP outperforms existing structured representations in both shape reconstruction fidelity and segmentation accuracy of semantic parts. |
| Researcher Affiliation | Academia | Boyang Deng1, Sumith Kulal1, Zhengyang Dong1, Congyue Deng1, Yonglong Tian2, Jiajun Wu1; 1Stanford University, 2MIT; equal contributions |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | We plan to release our source code upon publication. |
| Open Datasets | Yes | We conduct all our experiments using the ShapeNet [6] dataset following the ShapeNet Terms of Use. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, and test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | In our experiments, we use λs = 1, λv = 0.2, and λe = 0.8 for all categories. |
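The quoted hyperparameters can be read as fixed weights on a multi-term training objective. The sketch below is a hypothetical illustration only: the paper reports λs = 1, λv = 0.2, and λe = 0.8, but the specific loss terms they weight (named `loss_s`, `loss_v`, `loss_e` here) and their additive combination are assumptions, not reproduced from the paper.

```python
# Hypothetical sketch of how the reported weights might enter a
# training objective. Only the numeric values (1, 0.2, 0.8) come from
# the paper; the term names and additive form are assumptions.

LAMBDA_S = 1.0   # reported as lambda_s
LAMBDA_V = 0.2   # reported as lambda_v
LAMBDA_E = 0.8   # reported as lambda_e

def total_loss(loss_s: float, loss_v: float, loss_e: float) -> float:
    """Weighted sum of three per-shape loss terms (assumed additive)."""
    return LAMBDA_S * loss_s + LAMBDA_V * loss_v + LAMBDA_E * loss_e
```

Because the same weights are used for all ShapeNet categories, no per-category tuning would be needed under this reading.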