FPNN: Field Probing Neural Networks for 3D Data
Authors: Yangyan Li, Sören Pirk, Hao Su, Charles R. Qi, Leonidas J. Guibas
NeurIPS 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our field probing based neural networks (FPNN) on a classification task on ModelNet [31] dataset, and show that they match the performance of 3DCNNs while requiring much less computation, as they are designed and trained to respect the sparsity of 3D data. (...) 4 Results and Discussions |
| Researcher Affiliation | Academia | Yangyan Li¹,², Sören Pirk¹, Hao Su¹, Charles R. Qi¹, Leonidas J. Guibas¹ (...) ¹Stanford University, USA; ²Shandong University, China |
| Pseudocode | No | The paper describes the architecture and components of the Field Probing Neural Network, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | We open-source our code at https://github.com/yangyanli/FPNN for encouraging future developments. |
| Open Datasets | Yes | We use ModelNet40 [31] (12,311 models from 40 categories, training/testing split with 9,843/2,468 models), the standard benchmark for the 3D object classification task, in our experiments. |
| Dataset Splits | No | The paper states 'training/testing split with 9,843/2,468 models' for ModelNet40, but it does not explicitly provide details for a separate validation split. |
| Hardware Specification | Yes | Figure 6: Running time of convolutional layers (same settings as that in [31]) and field probing layers (C × N × T = 1024 × 8 × 4) on Nvidia GTX TITAN with batch size 8. |
| Software Dependencies | No | The paper states 'We implemented our field probing layers in Caffe [12].' but does not specify a version number for Caffe or any other software dependencies with their versions. |
| Experiment Setup | Yes | We train our FPNN 80,000 iterations on a 64 × 64 × 64 distance field with batch size 1024, with SGD solver, learning rate 0.01, momentum 0.9, and weight decay 0.0005. (...) The σ hyper-parameter in the Gaussian layer controls how sharp the transform is. We select its value empirically in our experiments, and the best performance is given when we use σ = 10% of the object size. (Sketches of the solver settings and the Gaussian transform appear below the table.) |
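
To make the quoted optimization settings concrete, here is a minimal training-loop sketch. The paper's implementation is in Caffe; this sketch uses PyTorch's SGD optimizer instead, and the stand-in model and dummy batches are hypothetical placeholders, not the authors' code. Only the optimizer hyperparameters and iteration count come from the paper.

```python
import torch
import torch.nn as nn

# Stand-in model: the real FPNN stacks field probing, Gaussian, and
# dot-product layers; a flatten + linear classifier is used here only
# so the sketch runs end to end (40 = number of ModelNet40 classes).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 64, 40))

# Optimizer settings taken from the quoted experiment setup.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,              # learning rate quoted in the paper
    momentum=0.9,         # momentum quoted in the paper
    weight_decay=0.0005,  # weight decay quoted in the paper
)

for iteration in range(80_000):  # paper trains for 80,000 iterations
    # Dummy batch standing in for 64x64x64 distance fields; the paper
    # uses batch size 1024 (reduced here so the sketch runs anywhere).
    fields = torch.randn(32, 1, 64, 64, 64)
    labels = torch.randint(0, 40, (32,))
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(fields), labels)
    loss.backward()
    optimizer.step()
```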
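The role of the σ hyper-parameter can be illustrated with a short NumPy sketch. The exp(−d²/(2σ²)) form below is our reading of a standard Gaussian transform of distance values, labeled as an assumption rather than a quote from the paper; σ = 10% of the object size follows the reported best setting.

```python
import numpy as np

def gaussian_transform(distances: np.ndarray, object_size: float,
                       sigma_ratio: float = 0.10) -> np.ndarray:
    """Map distance-field samples through a Gaussian so responses are
    large near object surfaces (distance ~ 0) and decay sharply in
    empty space. The exact functional form is assumed; sigma = 10% of
    the object size follows the paper's reported best setting."""
    sigma = sigma_ratio * object_size
    return np.exp(-np.square(distances) / (2.0 * sigma ** 2))

# Example: with object size 1.0, sigma = 0.1; a sample 0.05 from the
# surface keeps a strong response, one 0.5 away is near zero.
print(gaussian_transform(np.array([0.0, 0.05, 0.5]), object_size=1.0))
```

A smaller σ makes the transform sharper, concentrating the network's attention on a thinner shell around the surface, which is consistent with the quoted remark that σ controls how sharp the transform is.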