Neural Star Domain as Primitive Representation

Authors: Yuki Kawana, Yusuke Mukuta, Tatsuya Harada

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the reconstruction performance of an NSD compared with state-of-the-art methods for an input RGB image. The quantitative results are shown in Table 2. In the experiments, we use the ShapeNet [32] dataset.
Researcher Affiliation | Academia | Yuki Kawana (The University of Tokyo), Yusuke Mukuta (The University of Tokyo, RIKEN AIP), Tatsuya Harada (The University of Tokyo, RIKEN AIP)
Pseudocode | No | The paper describes its approach and architecture through text and diagrams (Figure 2) but does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about open-sourcing its code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | In the experiments, we use the ShapeNet [32] dataset.
Dataset Splits | Yes | In addition, we use the same samples and data split as in [25]. The threshold τ_o of the composite indicator function is determined by a grid search over the validation set. (A minimal grid-search sketch follows the table.)
Hardware Specification | Yes | All speed measurements are performed on an NVIDIA V100 GPU.
Software Dependencies | No | The paper mentions using ResNet18 and the Adam optimizer, but it does not provide specific version numbers for these or other software dependencies, such as PyTorch.
Experiment Setup | Yes | N is set to 30 by default, unless stated otherwise. We use ResNet18 as the encoder E... For the translation network T, we use a multilayer perceptron (MLP) with three hidden layers with (128, 128, N × 3) units with ReLU activation. For an NSD, we use an MLP with three hidden layers with (64, 64, 1) units and ReLU activation. We set the margin α of the indicator function to 100. ... During training, we use a batch size of 20, and train with the Adam optimizer with a learning rate of 0.0001. We set the weights of L_o and L_s to 1 and 10, respectively.
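
The Experiment Setup row pins down the network sizes and optimizer settings quite precisely. The following is a minimal sketch of that configuration, assuming PyTorch (the paper does not name its framework); the module interfaces, the latent dimension, and the NSD input encoding are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the reported NSD setup. Framework choice (PyTorch),
# LATENT_DIM, and the NSD input encoding are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torchvision

N = 30            # number of primitives (paper default)
LATENT_DIM = 128  # assumed encoder latent size; not given in the excerpt
ALPHA = 100.0     # margin alpha of the indicator function, as reported

class Encoder(nn.Module):
    """Encoder E: ResNet18 backbone mapping an RGB image to a latent code."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        backbone = torchvision.models.resnet18()
        backbone.fc = nn.Linear(backbone.fc.in_features, latent_dim)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)

class TranslationNet(nn.Module):
    """Translation network T: MLP with (128, 128, N x 3) units and ReLU,
    predicting one 3D translation per primitive."""
    def __init__(self, latent_dim=LATENT_DIM, n_primitives=N):
        super().__init__()
        self.n = n_primitives
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_primitives * 3),
        )

    def forward(self, z):
        return self.mlp(z).view(-1, self.n, 3)

class NSD(nn.Module):
    """Per-primitive star-domain function: MLP with (64, 64, 1) units and
    ReLU. Feeding it a latent code concatenated with a unit query direction
    is an assumption; the paper's exact input encoding is not in the excerpt."""
    def __init__(self, latent_dim=LATENT_DIM):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, z, direction):
        return self.mlp(torch.cat([z, direction], dim=-1))

# Training configuration as reported: batch size 20, Adam with lr 1e-4,
# and loss weights 1 (L_o) and 10 (L_s).
encoder, t_net, nsd = Encoder(), TranslationNet(), NSD()
params = [*encoder.parameters(), *t_net.parameters(), *nsd.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-4)
W_O, W_S = 1.0, 10.0  # weights of L_o and L_s
```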
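
The Dataset Splits row notes that the threshold τ_o of the composite indicator function is tuned by grid search over the validation set. A minimal sketch of such a search, where the candidate range and the `validation_iou` scoring callable are hypothetical stand-ins:

```python
import numpy as np

def grid_search_tau(validation_iou, candidates=np.linspace(0.1, 0.9, 17)):
    """Pick the indicator threshold tau_o that maximizes a validation metric.

    `validation_iou` is a hypothetical callable that scores one candidate
    threshold (e.g., mean volumetric IoU over the validation split); the
    candidate range is likewise an assumption, not taken from the paper.
    """
    scores = [validation_iou(tau) for tau in candidates]
    return candidates[int(np.argmax(scores))]
```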