GMSF: Global Matching Scene Flow

Authors: Yushan Zhang, Johan Edstedt, Bastian Wandt, Per-Erik Forssén, Maria Magnusson, Michael Felsberg

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that the proposed Global Matching Scene Flow (GMSF) sets a new state-of-the-art on multiple scene flow estimation benchmarks.
Researcher Affiliation | Academia | Yushan Zhang, Johan Edstedt, Bastian Wandt, Per-Erik Forssén, Maria Magnusson, Michael Felsberg; Linköping University; {firstname.lastname}@liu.se
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The code is available at https://github.com/ZhangYushan3/GMSF.
Open Datasets | Yes | FlyingThings3D [30] is a synthetic dataset of objects generated by ShapeNet [2] with randomized movement rendered in a scene. KITTI Scene Flow [31] is a real-world dataset for autonomous driving. Waymo-Open Dataset [44] is a large-scale autonomous driving dataset.
Dataset Splits | Yes | The numbers of points N1 and N2 are both set to 8192 during training and testing, randomly sampled from the full set. F3Do consists of 20000 and 2000 stereo scenes for training and testing, respectively. The Waymo-Open dataset contains 798 training and 202 validation sequences.
Hardware Specification | Yes | Runtime (ms) during testing on an NVIDIA A40 GPU.
Software Dependencies | No | The paper mentions "implemented in PyTorch" but does not provide specific version numbers for PyTorch or any other software dependencies, which are required for full reproducibility.
Experiment Setup | Yes | We use the AdamW optimizer with a learning rate of 2 × 10⁻⁴, a weight decay of 10⁻⁴, and OneCycleLR as the scheduler to anneal the learning rate. The training is done for 600k iterations with a batch size of 8.
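
The fixed-size sampling described in the Dataset Splits row (N1 = N2 = 8192 points drawn at random from each full point cloud) maps onto a few lines of PyTorch. The helper below is a minimal sketch: the function name is hypothetical, and the fallback to sampling with replacement for clouds smaller than 8192 points is an assumption, since the paper only states that the points are randomly sampled from the full set.

```python
import torch

def sample_points(pc: torch.Tensor, n: int = 8192) -> torch.Tensor:
    """Randomly draw n points from a point cloud of shape (M, 3).

    The paper only states that N1 = N2 = 8192 points are randomly sampled
    from the full set; sampling with replacement when M < n is an assumption
    made here for illustration.
    """
    m = pc.shape[0]
    idx = torch.randperm(m)[:n] if m >= n else torch.randint(m, (n,))
    return pc[idx]

# Example: two consecutive point clouds reduced to 8192 points each.
pc1 = sample_points(torch.rand(20000, 3))
pc2 = sample_points(torch.rand(18000, 3))
```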
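
The Experiment Setup row specifies optimization hyperparameters that correspond directly to standard PyTorch components. The sketch below wires them together under stated assumptions: the placeholder model and the commented training loop are illustrative only, and using max_lr = 2 × 10⁻⁴ for OneCycleLR is an assumption, since the paper quotes a single learning rate without detailing the scheduler's peak value.

```python
import torch
import torch.nn as nn

# Placeholder module standing in for the GMSF network
# (the real architecture is not reproduced here).
model = nn.Linear(3, 3)

total_iters = 600_000   # "training is done for 600k iterations"
batch_size = 8          # "with a batch size of 8" (used by the data loader)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,            # learning rate 2 x 10^-4
    weight_decay=1e-4,  # weight decay 10^-4
)

# OneCycleLR annealing over the full schedule; max_lr = 2e-4 is an assumption.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=2e-4, total_steps=total_iters
)

# Training loop skeleton (data loading and loss computation omitted):
# for step in range(total_iters):
#     loss = compute_loss(model, batch)  # hypothetical helper
#     loss.backward()
#     optimizer.step()
#     scheduler.step()
#     optimizer.zero_grad()
```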