Learning to be Smooth: An End-to-End Differentiable Particle Smoother

Authors: Ali Younis, Erik Sudderth

NeurIPS 2024

Reproducibility assessment (variable, result, and the LLM's supporting response):

Research Type: Experimental
LLM Response: Thorough experiments in Sec. 5 then highlight the advantages of our MDPS over differentiable PFs on a synthetic bearings-only tracking task, and also show substantial advantages over search-based and retrieval-based baselines for challenging real-world, city-scale global localization problems.

Researcher Affiliation: Academia
LLM Response: Ali Younis and Erik B. Sudderth, Department of Computer Science, University of California, Irvine, CA 92617; ayounis@uci.edu, sudderth@uci.edu

Pseudocode: Yes
LLM Response: Figure 12: The Mixture Density Particle Filter; Figure 13: The Mixture Density Particle Smoother

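The pseudocode itself lives in those figures. For orientation only, the snippet below is a minimal sketch of one step of a generic mixture-density particle filter, the building block both figures share; it assumes Gaussian resampling kernels, and every function and argument name is hypothetical rather than the authors' API.

```python
import numpy as np

def mixture_density_pf_step(particles, weights, dynamics,
                            measurement_loglik, bandwidth, rng):
    """One step of a generic mixture-density particle filter.

    New particles are drawn from a Gaussian mixture centered on the
    current particles rather than by discrete multinomial resampling;
    in the paper's setting that smooth sampling step is what enables
    end-to-end gradients. This NumPy version only shows control flow.
    """
    n, d = particles.shape
    # Pick mixture components in proportion to the particle weights.
    idx = rng.choice(n, size=n, p=weights)
    # Sample from the selected Gaussian kernels (std. dev. = bandwidth).
    samples = particles[idx] + bandwidth * rng.standard_normal((n, d))
    # Propagate through the (possibly learned) dynamics model.
    predicted = dynamics(samples)
    # Reweight by the measurement log-likelihood and renormalize.
    log_w = measurement_loglik(predicted)
    w = np.exp(log_w - np.max(log_w))
    return predicted, w / w.sum()
```
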
Open Source Code: No
LLM Response: Code will be released on GitHub if the paper is accepted, after some code cleanup.

Open Datasets: Yes
LLM Response: We use the Mapillary Geo-Localization [11] and KITTI [51] datasets to compare our MDPS method to MDPF [14], as well as other methods specifically designed for the global localization task, which are not sequential Monte Carlo methods.

Dataset Splits: Yes
LLM Response: We generate custom training, validation, and test splits to create longer sequences with T = 100 steps.

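The paper does not spell out how those splits are built; one plausible reading is sketched below. Only T = 100 comes from the quote, and the function name and split ratios are placeholder assumptions.

```python
import random

def make_splits(trajectories, T=100, fractions=(0.8, 0.1, 0.1), seed=0):
    """Chunk raw trajectories into length-T windows, then split them.

    The paper reports T = 100 steps; the split ratios are not reported,
    so the `fractions` default here is a placeholder assumption.
    """
    windows = [traj[i:i + T]
               for traj in trajectories
               for i in range(0, len(traj) - T + 1, T)]
    random.Random(seed).shuffle(windows)
    n_train = int(fractions[0] * len(windows))
    n_val = int(fractions[1] * len(windows))
    return (windows[:n_train],
            windows[n_train:n_train + n_val],
            windows[n_train + n_val:])
```
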
Hardware Specification: Yes
LLM Response: Per-experiment GPU and runtime: TG-PF (Multinomial): 1x NVIDIA RTX 3090, 25 hrs; Retrieval: 1x NVIDIA A6000, 12 hrs.

Software Dependencies: No
LLM Response: The paper mentions the use of the Adam [49] optimizer but does not specify version numbers for it or any other key software components, libraries, or programming languages.

Experiment Setup: Yes
LLM Response: For all particle filter methods (including ones internal to MDPS) we use 250 particles during training and evaluation, and initialize the filters using 1000 particles. For PF and smoother methods, we initialize the particle set as the true state with Gaussian noise (σ = 50 meters) on the x-y components of the state. Initial learning rates are varied throughout the training stages, ranging from 0.01 to 0.000001.

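As a compact restatement of those numbers, here is a hypothetical initialization snippet; the variable names and the zero ground-truth placeholder are ours, not the authors'.

```python
import numpy as np

# Hypothetical settings that restate the reported numbers; names are ours.
NUM_PARTICLES = 250          # particles used during training and evaluation
NUM_INIT_PARTICLES = 1000    # particles used to initialize the filters
INIT_SIGMA_M = 50.0          # std. dev. (meters) of Gaussian noise on x-y
LR_RANGE = (1e-2, 1e-6)      # initial learning rates across training stages

rng = np.random.default_rng(0)
true_xy = np.zeros(2)  # placeholder ground-truth x-y position
# Initialize the particle set as the true state plus Gaussian noise on x-y.
init_particles = true_xy + rng.normal(scale=INIT_SIGMA_M,
                                      size=(NUM_INIT_PARTICLES, 2))
```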