Deep Cascade Generation on Point Sets

Authors: Kaiqi Wang, Ke Chen, Kui Jia

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparative evaluation on the publicly benchmarking ShapeNet dataset demonstrates superior performance of the proposed model to the state-of-the-art methods on both single-view shape reconstruction and shape autoencoding applications.
Researcher Affiliation | Academia | Kaiqi Wang, Ke Chen and Kui Jia, South China University of Technology, mswkq@mail.scut.edu.cn, {chenk, kuijia}@scut.edu.cn
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Source codes of our DCG method are available at https://wkqscut.github.io/DCGNet/.
Open Datasets | Yes | We conduct experiments on the popular ShapeNetCore dataset (v2) [Chang et al., 2015], which has been widely adopted in 3D shape reconstruction [Choy et al., 2016; Fan et al., 2017; Groueix et al., 2018] and autoencoding [Yang et al., 2018].
Dataset Splits | No | We follow the settings in [Choy et al., 2016; Groueix et al., 2018], i.e., 31746 models for training and the remaining 7943 for testing. No explicit mention of a validation split was found.
Hardware Specification | No | No specific hardware details (like GPU/CPU models or memory amounts) used for running the experiments were provided.
Software Dependencies | No | The paper mentions software components like ResNet-18, PointNet, and the ADAM optimizer, but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | We used the ADAM to train the model for a total of 420 epochs with an initial learning rate of 0.001 and batch size 32. For step decay on the learning rate, it is dropped by a factor of 0.1 after 300 and 400 epochs.
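
For concreteness, below is a minimal sketch of the reported training schedule: ADAM optimizer, 420 epochs, initial learning rate 0.001, batch size 32, and a step decay of the learning rate by a factor of 0.1 after epochs 300 and 400. The paper does not state which framework was used; PyTorch is assumed here, and the network, dataset, and loss are dummy placeholders rather than the authors' DCG model or its actual objective.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-ins so the schedule can be exercised end to end;
# the real encoder/decoder and loss come from the authors' code release.
model = nn.Linear(512, 3 * 1024)                     # dummy network, not the DCG architecture
dummy = TensorDataset(torch.randn(64, 512), torch.randn(64, 3 * 1024))
loader = DataLoader(dummy, batch_size=32, shuffle=True)   # batch size 32 as reported
loss_fn = nn.MSELoss()                               # dummy loss; the actual objective is defined in the paper

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # ADAM, initial learning rate 0.001
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[300, 400], gamma=0.1)               # drop LR by 0.1x after epochs 300 and 400

for epoch in range(420):                                       # 420 epochs in total
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()                                           # apply the step decay once per epoch
```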