3D Volumetric Modeling with Introspective Neural Networks

Authors: Wenlong Huang, Brian Lai, Weijian Xu, Zhuowen Tu

AAAI 2019 (pp. 8481-8488) | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations.
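For reference, FID compares the Gaussian statistics (mean and covariance) of feature activations from real and generated samples. Below is a minimal NumPy/SciPy sketch of that computation; the feature extractor producing the activations is left abstract, since the paper adapts the 2D metric to 3D voxel data and this quote does not specify the extractor.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: (N, D) arrays of activations from some
    feature extractor (left abstract here; an assumption, not the
    paper's exact pipeline).
    """
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; discard tiny
    # imaginary components that arise from numerical error.
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_g
    return diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean)
```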
Researcher Affiliation | Academia | Wenlong Huang*¹, Brian Lai*², Weijian Xu³, Zhuowen Tu³ (¹University of California, Berkeley; ²University of California, Los Angeles; ³University of California, San Diego)
Pseudocode | No | The paper describes the method and training details in text and uses tables/figures for network architecture and results, but no structured pseudocode or algorithm blocks are provided.
Open Source Code | No | The source code of this project will be made publicly available. This is a promise of future availability, not concrete access at the time of publication.
Open Datasets | Yes | We evaluate our model in a widely used 3D CAD dataset ModelNet introduced by (Wu et al. 2015). In this experiment, we use a common testbed ModelNet10, which is a subset of ModelNet consisting of 10 categories of 3D CAD data with 3,991 training examples and 908 test examples.
Dataset Splits | Yes | In this experiment, we use a common testbed ModelNet10, which is a subset of ModelNet consisting of 10 categories of 3D CAD data with 3,991 training examples and 908 test examples. ... Its performance is also comparable to many methods using other 3D representations, such as rendered multi-view images, which are often pre-trained on large-scale image dataset such as ImageNet (Deng et al. 2009). However, it is worth noting that the test set of ModelNet10 likely contains harder examples than those in the training set: both our baseline model and 3DWINN obtain significantly better results on the validation set, which we manually split from the given training set prior to training, and 3DWINN obtains a 50% error reduction on the validation set over the baseline model.
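The quoted validation set is described only as a manual split of the 3,991 ModelNet10 training examples. A minimal sketch of such a split is below; the validation fraction and random seed are assumptions, since the paper does not report them.

```python
import numpy as np

def train_val_split(n_examples, val_fraction=0.1, seed=0):
    """Split training indices into train/val subsets.

    val_fraction and seed are illustrative assumptions; the paper
    states only that the validation set was split off manually.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)
    n_val = int(round(val_fraction * n_examples))
    return idx[n_val:], idx[:n_val]

# ModelNet10 training-set size as reported in the paper.
train_idx, val_idx = train_val_split(3991)
```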
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the 'Adam optimizer', 'Layer Normalization', and 'Leaky ReLU', but does not specify software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8).
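To illustrate how the named components typically combine for voxel inputs, here is a hypothetical PyTorch building block; it is not the paper's architecture (that is given in the paper's tables), and the Leaky ReLU slope is an assumption.

```python
import torch
import torch.nn as nn

class VoxelConvBlock(nn.Module):
    """Illustrative 3D conv block using the components the paper names."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1)
        # GroupNorm with a single group normalizes over all channels and
        # spatial dims, i.e. layer normalization for conv feature maps.
        self.norm = nn.GroupNorm(1, out_ch)
        self.act = nn.LeakyReLU(0.2)  # slope 0.2 is an assumption

    def forward(self, x):  # x: (N, C, D, H, W) voxel grid
        return self.act(self.norm(self.conv(x)))
```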
Experiment Setup | Yes | The learning rate is 0.0001 with β1 = 0 and β2 = 0.9. As in (Lee et al. 2018), we keep the coefficient of the gradient penalty term λ as 10. We perform mini-batch training with size 128... We set the learning rate to be 0.005 with β1 = 0.8 and β2 = 0... The learning rate is 0.00002 with β1 = 0 and β2 = 0.9. In the synthesis step... The learning rate is 0.002 with β1 = 0 and β2 = 0.9. Both the classification and synthesis steps use mini-batch size of 32.
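To make the quoted hyperparameters concrete, the following PyTorch sketch shows the corresponding Adam configurations. The framework choice and the pairing of each learning rate with a specific training stage are assumptions, since the quote elides parts of that mapping.

```python
import torch

# Placeholder parameters; in 3DWINN these would be the 3D CNN weights.
params = [torch.nn.Parameter(torch.zeros(1))]

# One quoted setting: lr 0.0001 with betas (0, 0.9), mini-batch size 128.
opt_a = torch.optim.Adam(params, lr=1e-4, betas=(0.0, 0.9))

# Gradient-penalty coefficient kept at 10, as in (Lee et al. 2018).
LAMBDA_GP = 10.0

# Another quoted setting, attributed in the quote to the synthesis step:
# lr 0.002 with betas (0, 0.9), mini-batch size 32.
opt_b = torch.optim.Adam(params, lr=2e-3, betas=(0.0, 0.9))
```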