Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions
Authors: Minhyuk Sung, Hao Su, Ronald Yu, Leonidas J. Guibas
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our technique in various segmentation and keypoint selection applications. In experiments, we test our model with existing neural network architectures, and demonstrate the performance on labeled/unlabeled segmentation and keypoint correspondence problem in various datasets. |
| Researcher Affiliation | Academia | Minhyuk Sung, Stanford University, mhsung@cs.stanford.edu; Hao Su, University of California San Diego, haosu@eng.ucsd.edu; Ronald Yu, University of California San Diego, ronaldyu@ucsd.edu; Leonidas Guibas, Stanford University, guibas@cs.stanford.edu |
| Pseudocode | Yes | Algorithm 1 (Single-Step Gradient Iteration): 1: function SingleStepGradientIteration(X, f, Θ_t, η); 2: Compute: A_t = A(X; Θ_t); 3: Solve: x_t = argmin_x \|\|A_t x − f\|\|_2^2 s.t. C(x); 4: Update: Θ_{t+1} = Θ_t − η ∇_Θ L(A(X; Θ_t); f, x_t); 5: end function. (A hedged Python sketch of this iteration appears below the table.) |
| Open Source Code | Yes | Code for all experiments below is available in https://github.com/mhsung/deep-functional-dictionaries. |
| Open Datasets | Yes | Yi et al. [41] provide keypoint annotations on 6,243 chair models in ShapeNet [7]. Stanford 3D Indoor Semantic Dataset (S3DIS) [2] is a collection of real scan data of indoor scenes with annotations of instance segments and their semantic labels. Here, we test with 100 non-rigid human body shapes in the MPI-FAUST dataset [4]. |
| Dataset Splits | Yes | We follow their experiment setup by using the same split of training/validation/test sets and the same 2k sampled point cloud as inputs. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the PointNet [32] architecture but does not specify software versions for libraries, frameworks, or operating systems. |
| Experiment Setup | Yes | In the experiment, we use an 80-20 random split for training/test sets, train the network with 2k point clouds as provided by [41], and set k = 10 and γ = 0.0. Table 1 shows the results of our method when using k = 10 and γ = 1.0. Thus, we use k = 150 and γ = 1.0. σ is the Gaussian-weighting parameter (0.001 in our experiment). (See the second sketch below the table for one possible reading of this Gaussian weighting.) |
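
The pseudocode row alternates between a constrained least-squares solve for the coefficient vector x and a gradient step on the network weights Θ. The sketch below is a minimal PyTorch rendering of that single-step iteration, assuming a crude projected-gradient inner solver and a plain data-fitting loss; `network` and `constraint_project` are illustrative placeholders rather than the authors' implementation, and the paper's loss L may include additional terms (e.g., regularization on A).

```python
import torch

def single_step_gradient_iteration(network, X, f, optimizer, constraint_project,
                                   inner_steps=100, inner_lr=1e-2):
    """One pass of the Algorithm 1 pattern (sketch only).

    Alternates between (a) solving for x with the dictionary A(X; Θ) held
    fixed and (b) one gradient step on Θ with x held fixed.
    """
    # Step 2: compute the functional dictionary A_t = A(X; Θ_t), shape (n_points, k).
    A = network(X)

    # Step 3: x_t = argmin_x ||A_t x - f||_2^2  s.t.  C(x), with Θ_t fixed.
    A_fixed = A.detach()
    x = torch.zeros(A_fixed.shape[1], requires_grad=True)
    inner_opt = torch.optim.SGD([x], lr=inner_lr)
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        residual = A_fixed @ x - f
        (residual ** 2).sum().backward()
        inner_opt.step()
        with torch.no_grad():
            x.copy_(constraint_project(x))  # enforce C(x), e.g. clamp to [0, 1]

    # Step 4: Θ_{t+1} = Θ_t - η ∇_Θ L(A(X; Θ_t); f, x_t), with x_t fixed.
    optimizer.zero_grad()
    loss = ((A @ x.detach() - f) ** 2).sum()
    loss.backward()
    optimizer.step()
    return x.detach(), loss.item()
```

A projection as simple as `lambda x: x.clamp(0.0, 1.0)` stands in for a box constraint here; the actual constraint set C(x) depends on the application, so the projection step would change accordingly.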
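
The Experiment Setup row quotes a Gaussian-weighting parameter σ = 0.001 for turning annotated keypoints into per-point function values. The snippet below is one possible reading of that weighting, assuming squared-Euclidean distances on a normalized point cloud; the exact formula used by the authors is not given in the quoted text, so treat this as a hypothetical illustration.

```python
import numpy as np

def keypoint_to_function(points, keypoint, sigma=1e-3):
    """Hypothetical Gaussian weighting of a keypoint annotation.

    points:   (n, 3) array of point-cloud coordinates
    keypoint: (3,) annotated keypoint location
    Returns a per-point value near 1 at the keypoint that decays with
    squared distance. sigma = 0.001 follows the quoted setup, but the
    precise formula is an assumption, not the paper's definition.
    """
    sq_dist = np.sum((points - keypoint) ** 2, axis=1)
    return np.exp(-sq_dist / sigma)
```

Under this assumed form, σ = 0.001 concentrates the weight within a few hundredths of the shape's scale around the keypoint.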