Learning to Generate 3D Shapes with Generative Cellular Automata

Authors: Dongsu Zhang, Changwoon Choi, Jeonghwan Kim, Young Min Kim

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on probabilistic shape completion and shape generation demonstrate that our method achieves competitive performance against recent methods." "We demonstrate the ability of GCA to generate high-fidelity shapes in two tasks: probabilistic shape completion (Sec. 4.1) and shape generation (Sec. 4.2)."
Researcher Affiliation | Academia | Dongsu Zhang, Changwoon Choi, Jeonghwan Kim & Young Min Kim, Department of Electrical and Computer Engineering, Seoul National University. {96lives, zzzmaster, whitealex95, youngmin.kim}@snu.ac.kr
Pseudocode | Yes | Algorithm 1: Training GCA. (A hedged sketch of this training loop appears after the table.)
Open Source Code | No | The paper provides a link to the official code of a baseline method (3D-IWGAN) in Section G.2: https://github.com/EdwardSmith1884/3D-IWGAN. However, it contains no explicit statement about releasing the source code for the Generative Cellular Automata (GCA) method described in the paper, nor a link to its own repository.
Open Datasets | Yes | "The probabilistic shape completion is tested with PartNet (Mo et al. (2019)) and PartNet-Scan (Wu et al. (2020)) dataset, where objects in ShapeNet (Chang et al. (2015)) are annotated with instance-level parts."
Dataset Splits | No | The paper mentions a 'test set' and 'training set' in the context of evaluation, e.g., 'For each partial shape in the test set, we generate ten completion results...' and 'The scores assigned to training set can be regarded as an optimal value for each metric.' However, it does not give concrete split details (percentages or exact counts for training, validation, and testing), nor does it mention a dedicated validation set.
Hardware Specification | Yes | "All experiments are run on RTX 2080 ti 11GB GPU, except for the analysis on the neighborhood size with r = 10, which ran on Titan RTX 24GB GPU."
Software Dependencies | No | The paper names key software components such as the U-Net architecture, the Minkowski Engine (Choy et al. (2019)), and the Adam (Kingma & Ba (2015)) optimizer. However, it gives no version numbers for these dependencies, nor for the underlying languages or frameworks, which would be needed for exact reproduction. (A minimal sparse-convolution sketch follows the table.)
Experiment Setup | Yes | "We use neighborhood radius r = 3, T = 70 with infusion speed w = 0.005 for all datasets." ... "We use neighborhood size r = 2 with L1 distance metric, T = 100 inferences, infusion speed w = 0.005 for airplane and car dataset, and r = 3 for chair category..." "We train our method using the Adam (Kingma & Ba (2015)) optimizer with an initial learning rate of 5e-4 and batch size of 32. The learning rate decays by 0.5 every 100k steps." (The optimizer settings are restated as a code sketch below.)
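
For readers who want the overall shape of Algorithm 1 without the paper's full sparse U-Net, the following is a minimal sketch of infusion training under stated assumptions: a chain is unrolled from a seed cell; each step scores occupancy over the neighborhood of the current state; an infusion kernel mixes the model's probabilities with the ground-truth occupancy at a rate that grows with the infusion speed w; and the per-step binary cross-entropies are summed into one update. The helpers `neighborhood`, `ToyKernel`, and `infusion_train_step` and the coordinate-MLP stand-in are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def neighborhood(cells: torch.Tensor, r: int) -> torch.Tensor:
    """All integer cells within L-infinity radius r of any occupied cell."""
    ax = torch.arange(-r, r + 1)
    offsets = torch.stack(torch.meshgrid(ax, ax, ax, indexing="ij"), dim=-1).reshape(-1, 3)
    grown = (cells[:, None, :] + offsets[None, :, :]).reshape(-1, 3)
    return torch.unique(grown, dim=0)

class ToyKernel(nn.Module):
    """Stand-in for the paper's sparse U-Net: scores occupancy of each query
    cell given the current state (summarized here by its mean coordinate)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, state: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        ctx = state.float().mean(0, keepdim=True).expand(query.shape[0], 3)
        return self.mlp(torch.cat([query.float(), ctx], dim=1)).squeeze(-1)

def infusion_train_step(model, opt, x_cells, T=70, w=0.005, r=3):
    """One infusion chain toward the ground-truth cells x_cells; losses from
    every step of the chain are summed before a single gradient update."""
    state = x_cells[:1]                       # seed, e.g. one occupied cell
    loss = 0.0
    for t in range(T):
        query = neighborhood(state, r)        # cells the kernel may occupy
        logits = model(state, query)
        # Ground-truth occupancy of each query cell (1 iff the cell is in x).
        target = (query[:, None, :] == x_cells[None, :, :]).all(-1).any(-1).float()
        loss = loss + F.binary_cross_entropy_with_logits(logits, target)
        # Infusion kernel: mix the model's probability with the ground truth
        # at rate alpha_t = w * t, pulling later steps toward the data.
        alpha = min(1.0, w * t)
        prob = (1 - alpha) * torch.sigmoid(logits) + alpha * target
        keep = torch.bernoulli(prob.detach()).bool()
        state = query[keep] if keep.any() else state
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)

x = torch.randint(0, 16, (64, 3))             # toy "shape": 64 occupied cells
model = ToyKernel()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
print(infusion_train_step(model, opt, x))
```

The stand-in kernel only conditions on the mean coordinate of the state; the paper's transition kernel instead runs a sparse U-Net directly over the occupied voxels.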
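Since the version information is missing, a concrete starting point may still help: the sketch below shows the kind of sparse 3D convolution the Minkowski Engine provides, the building block of the paper's sparse U-Net. It assumes a recent MinkowskiEngine (0.5+) and PyTorch; the coordinates and channel sizes are illustrative.

```python
import torch
import MinkowskiEngine as ME

# Sparse occupancy grid: each row of `coords` is (batch_index, x, y, z).
coords = torch.IntTensor([[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 2, 1]])
feats = torch.ones(coords.shape[0], 1)
x = ME.SparseTensor(features=feats, coordinates=coords)

# One sparse 3D convolution; computation touches only the occupied voxels.
conv = ME.MinkowskiConvolution(in_channels=1, out_channels=8,
                               kernel_size=3, stride=1, dimension=3)
y = conv(x)
print(y.F.shape)  # per-voxel output features: torch.Size([3, 8])
```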
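The quoted optimizer settings translate directly into PyTorch; the `model` below is only a placeholder for the paper's sparse U-Net transition kernel.

```python
import torch

model = torch.nn.Linear(8, 1)  # placeholder for the sparse U-Net
# "initial learning rate of 5e-4 and batch size of 32"
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
# "The learning rate decays by 0.5 every 100k steps."
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100_000, gamma=0.5)
```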