Mesh-Based Autoencoders for Localized Deformation Component Analysis

Authors: Qingyang Tan, Lin Gao, Yu-Kun Lai, Jie Yang, Shihong Xia

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluations." Also, from Section 6 (Experimental Results, Quantitative Evaluation): "We compare the generalization ability of our method with several state-of-the-art methods, including original SPLOCS (Neumann et al. 2013), SPLOCS with deformation gradients (Huang et al. 2014), SPLOCS with edge lengths and dihedral angles (Wang et al. 2016), SPLOCS with the feature from (Gao et al. 2017) as used in this paper, and (Bernard et al. 2016)."
Researcher Affiliation | Academia | (1) Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences; (2) School of Computer and Control Engineering, University of Chinese Academy of Sciences; (3) School of Computer Science & Informatics, Cardiff University
Pseudocode | No | No pseudocode or algorithm blocks found.
Open Source Code | No | No explicit statement of, or link to, open-source code availability found.
Open Datasets | Yes | "We use SCAPE (Anguelov et al. 2005) and Swing (Vlasic et al. 2008) datasets to conduct main quantitative evaluation." Also: "Table 1: Errors of applying our method to generate unseen data from Horse (Sumner and Popović 2004), Face (Zhang et al. 2004), Jumping (Vlasic et al. 2008) and Humanoid datasets."
Dataset Splits | No | "For the SCAPE dataset, we randomly choose 36 models as the training set and the remaining 35 models as the test set."
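The quoted split (36 training / 35 test models chosen at random from SCAPE's 71 meshes) can be reproduced with a few lines of stdlib Python. This is a hedged sketch: the paper does not specify a seed or shuffling procedure, so the `seed` parameter and model naming below are illustrative assumptions.

```python
import random

def random_split(models, n_train, seed=0):
    """Randomly partition a list of mesh models into train/test sets.

    The seed is an assumption for reproducibility; the paper only states
    that models are chosen randomly.
    """
    rng = random.Random(seed)
    shuffled = models[:]
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# SCAPE contains 71 models; the paper uses 36 for training, 35 for testing.
models = [f"scape_{i:03d}" for i in range(71)]  # hypothetical model names
train, test = random_split(models, n_train=36)
```

Because the split is random and unseeded in the paper, reported generalization errors may vary slightly across reruns unless the chosen partition is fixed and published.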
Hardware Specification | No | No specific hardware details (such as CPU/GPU models or memory) are provided for the experimental setup. The only hardware mention is an "NVIDIA hardware donation" acknowledgment, which does not specify the hardware used to run the experiments.
Software Dependencies | No | "We use ADAM algorithm (Kingma and Ba 2015) and set the learning rate to be 0.001 to train the network."
Experiment Setup | Yes | "We set λ1 = λ2 = 0.5 in all the experiments. The whole network pipeline is illustrated in Fig. 2. We use ADAM algorithm (Kingma and Ba 2015) and set the learning rate to be 0.001 to train the network." Also: "For most datasets, we use dmin = 0.2 and dmax = 0.4."
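The reported hyperparameters (Adam with learning rate 0.001, loss weights λ1 = λ2 = 0.5, locality range dmin = 0.2 to dmax = 0.4) can be collected into a small configuration sketch. Only the numeric values come from the paper; the loss-term names and the way they are combined below are assumptions for illustration, not the authors' exact formulation.

```python
# Hyperparameter values quoted from the paper's experiment setup.
LEARNING_RATE = 0.001    # Adam learning rate (Kingma and Ba 2015)
LAMBDA_1 = 0.5           # weight on first regularization term (name assumed)
LAMBDA_2 = 0.5           # weight on second regularization term (name assumed)
D_MIN, D_MAX = 0.2, 0.4  # locality range used "for most datasets"

def total_loss(reconstruction, reg1, reg2):
    """Combine a reconstruction loss with two regularizers using the
    paper's fixed weights. The decomposition into these three terms is
    an illustrative assumption."""
    return reconstruction + LAMBDA_1 * reg1 + LAMBDA_2 * reg2

# Example: 1.0 + 0.5*0.2 + 0.5*0.4 → 1.3 (up to float rounding)
loss = total_loss(1.0, 0.2, 0.4)
```

Since λ1 = λ2 = 0.5 are fixed "in all the experiments", reproducing the reported results should not require tuning these weights, only the dmin/dmax range for a few datasets.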