Neural Scene Flow Prior

Authors: Xueqian Li, Jhony Kaesemodel Pontes, Simon Lucey

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 Experiments) | We evaluated the performance (accuracy, generalizability, and computational cost) of our neural prior for scene flow on synthetic and real-world datasets. We performed experiments on different neural network settings and analyzed the performance of the neural prior in regularizing scene flow. Remarkably, we show that a simple MLP-based prior to regularize scene flow is enough to achieve results competitive with the state-of-the-art scene flow methods.
Researcher Affiliation | Collaboration | Xueqian Li (1,2), Jhony Kaesemodel Pontes (1), Simon Lucey (2); 1 Argo AI, 2 The University of Adelaide
Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | We cite all the data we used in the experiment section, and we will release the code in a personal GitHub repository.
Open Datasets | Yes | Datasets: We used four scene flow datasets: 1) FlyingThings3D [33], which is an extensive collection of randomly moving synthetic objects; we used the preprocessed data from [30]; 2) KITTI [34,35], which has real-world self-driving scenes; we used the subset released by [30]; 3) Argoverse [8] and 4) nuScenes [7], which are two large-scale autonomous driving datasets with challenging dynamic scenes. However, there are no official scene flow annotations, so we followed the data processing method in [45] to collect pseudo-ground-truth scene flow.
Dataset Splits | Yes | FlyingThings3D [33]: Train 19,967 samples, Test 2,000 samples; nuScenes Scene Flow [7]: Train 1,513 samples, Test 310 samples; KITTI Scene Flow [34,35]: Train 100 samples, Test 50 samples; Argoverse Scene Flow [8]: Train 2,691 samples, Test 212 samples.
Hardware Specification | Yes | All experiments were run on a machine with an NVIDIA Quadro P5000 GPU and a 16-thread Intel(R) Xeon(R) W-2145 CPU @ 3.70GHz.
Software Dependencies | No | The paper mentions 'PyTorch [41]' and 'Adam [24]' but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | Implementation details: We defined our neural prior for scene flow as a simple coordinate-based MLP architecture with 8 hidden layers, a fixed hidden width of 128 units, Rectified Linear Unit (ReLU) activations, and weights shared across points. The network input is the 3D point cloud P_{t-1}, and the output is the scene flow F. We used PyTorch [41] for the implementation and optimized the objective function with Adam [24]. The weights were randomly initialized. We set a fixed learning rate of 8e-3 and ran the optimization for 5k iterations with early stopping on the loss.
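
To make the Experiment Setup row concrete, below is a minimal sketch in PyTorch of a coordinate-based MLP prior (8 hidden layers, 128 units, ReLU, shared weights across points) optimized per scene with Adam at a learning rate of 8e-3 for up to 5k iterations with early stopping on the loss. The Chamfer-style objective, the early-stopping patience, and the names SceneFlowPrior, chamfer, and optimize_flow are illustrative assumptions; this is not the authors' released code.

```python
# Hypothetical sketch of the neural scene flow prior setup described above.
import torch
import torch.nn as nn


class SceneFlowPrior(nn.Module):
    """Coordinate-based MLP: 8 hidden layers, 128 units, ReLU, shared across points."""

    def __init__(self, dim_in: int = 3, hidden: int = 128, layers: int = 8):
        super().__init__()
        mods = [nn.Linear(dim_in, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            mods += [nn.Linear(hidden, hidden), nn.ReLU()]
        mods += [nn.Linear(hidden, dim_in)]  # output head: per-point 3D flow
        self.net = nn.Sequential(*mods)

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) point cloud P_{t-1}; returns (N, 3) scene flow F
        return self.net(pts)


def chamfer(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Symmetric Chamfer distance (assumed objective; O(N*M) memory).
    d = torch.cdist(a, b)  # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def optimize_flow(pc1: torch.Tensor, pc2: torch.Tensor,
                  iters: int = 5000, lr: float = 8e-3, patience: int = 100):
    # Runtime (per-scene) optimization from a random initialization.
    model = SceneFlowPrior().to(pc1.device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_loss, best_flow, stall = float("inf"), None, 0
    for _ in range(iters):
        opt.zero_grad()
        flow = model(pc1)
        loss = chamfer(pc1 + flow, pc2)  # warp P_{t-1} by F toward P_t
        loss.backward()
        opt.step()
        # Early stopping on the loss (patience value is an assumption).
        if loss.item() < best_loss - 1e-6:
            best_loss, best_flow, stall = loss.item(), flow.detach(), 0
        else:
            stall += 1
            if stall >= patience:
                break
    return best_flow
```

Each scene pair is optimized independently, matching the per-scene, learning-free character of the setup described above; the exact objective and stopping criterion may differ in the authors' implementation.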