Enhancing Implicit Shape Generators Using Topological Regularizations
Authors: Liyan Chen, Yan Zheng, Yang Li, Lohit Anirudh Jagarapu, Haoxiang Li, Hao Kang, Gang Hua, Qixing Huang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on ShapeNet show that our approach leads to much better generalization behavior than state-of-the-art implicit shape generators. ... This section presents an experimental evaluation of our approach. We begin with the experimental setup in Section 6.1. We then present an analysis of the results in Section 6.2. Finally, we describe an ablation study in Section 6.3. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science, The University of Texas at Austin, Austin TX 2Tsinghua Shenzhen International Graduate School, Info Building 1108A, Shenzhen, China 3Wormpex AI Research, 500 108th Ave NE, Ste 1740, Bellevue WA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology (e.g., repository link, explicit code release statement, or mention of code in supplementary materials). |
| Open Datasets | Yes | We choose two representative categories from ShapeNet, chair and table, for experimental evaluations. ... Chang et al., 2015, ShapeNet: An Information-Rich 3D Model Repository. |
| Dataset Splits | No | The paper uses the ShapeNet dataset but does not explicitly provide specific training/validation/test dataset splits (e.g., percentages, sample counts, or explicit standard split citations). It refers to 'training shapes' and 'test shapes' but does not specify the partitioning methodology; a hypothetical split construction is sketched after this table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments (e.g., GPU/CPU models, memory amounts, or specific cloud/cluster configurations). |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., 'Python 3.8, PyTorch 1.9'), needed to replicate the experiment. |
| Experiment Setup | No | The paper describes the optimization procedure and loss functions, mentioning the use of the Adam optimizer and training each component for one epoch. However, it does not provide specific hyperparameter values such as the learning rate, batch size, total number of epochs, or the values of the loss weights (λPD, λs, λreg, λl) that appear in the loss formulations; a hypothetical training step illustrating this loss structure is sketched after this table. |
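
The Dataset Splits row notes that the paper never states how the ShapeNet models were partitioned into training and test shapes. For illustration only, the sketch below shows one common way such a split could be made reproducible: hashing each model ID into train/test buckets. The 80/20 ratio, the `deterministic_split` helper, and the sample IDs are all hypothetical and not taken from the paper.

```python
import hashlib

def deterministic_split(model_ids, test_fraction=0.2):
    """Assign each ShapeNet model ID to train or test by hashing the ID,
    so the partition is reproducible without a stored split file.
    The 80/20 ratio is an assumption; the paper reports no ratio."""
    train, test = [], []
    for mid in sorted(model_ids):
        bucket = int(hashlib.md5(mid.encode()).hexdigest(), 16) % 100
        (test if bucket < test_fraction * 100 else train).append(mid)
    return train, test

# Made-up IDs in the ShapeNet naming style, for demonstration only.
ids = ["1a2b3c", "4d5e6f", "7a8b9c", "0d1e2f"]
train_ids, test_ids = deterministic_split(ids)
print(f"train: {len(train_ids)} shapes, test: {len(test_ids)} shapes")
```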
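
Similarly, the Experiment Setup row flags that the λ weights and optimizer hyperparameters are unreported. The following minimal PyTorch sketch shows only the loss structure the paper describes, a data term plus λ-weighted regularization terms optimized with Adam. Every numeric value, the `TinyImplicitGenerator` network, and the zero-stubbed regularization terms are placeholders; the paper's actual architecture and term definitions are not reproduced here.

```python
import torch
import torch.nn as nn

# Placeholder weights: the paper names λPD, λs, λreg, and λl but does
# not report their values, so these are illustrative guesses only.
LAMBDA_PD, LAMBDA_S, LAMBDA_REG, LAMBDA_L = 1.0, 0.1, 1e-3, 1e-3
LEARNING_RATE = 1e-4  # also unreported; a common Adam starting point

class TinyImplicitGenerator(nn.Module):
    """Stand-in for an implicit shape generator mapping (latent z, point x)
    to an implicit field value; the paper's real architecture is unknown."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, z, x):
        return self.net(torch.cat([z, x], dim=-1))

def combined_loss(data_term, pd_term, s_term, reg_term, l_term):
    """Weighted sum matching the structure described in the paper:
    a data-fit term plus four λ-weighted regularizers."""
    return (data_term
            + LAMBDA_PD * pd_term
            + LAMBDA_S * s_term
            + LAMBDA_REG * reg_term
            + LAMBDA_L * l_term)

model = TinyImplicitGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# One illustrative optimization step on dummy tensors.
z = torch.randn(8, 64)          # latent codes
x = torch.randn(8, 3)           # query points
field_target = torch.randn(8, 1)
pred = model(z, x)
data_term = nn.functional.mse_loss(pred, field_target)
# The four regularization terms are zero stubs here; their real
# definitions (e.g., the persistence-diagram term behind λPD) are
# given only in the paper.
zero = torch.zeros(())
loss = combined_loss(data_term, zero, zero, zero, zero)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```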