PARSAC: Accelerating Robust Multi-Model Fitting with Parallel Sample Consensus
Authors: Florian Kluger, Bodo Rosenhahn
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate state-of-the-art performance on these as well as multiple established datasets, with inference times as small as five milliseconds per image. We achieve state-of-the-art results for vanishing point estimation, fundamental matrix estimation and homography estimation on multiple datasets. |
| Researcher Affiliation | Academia | Florian Kluger, Bodo Rosenhahn Leibniz University Hannover kluger@tnt.uni-hannover.de |
| Pseudocode | No | The paper describes the PARSAC method in numbered steps and subsections (e.g., 'Sample and Inlier Weight Prediction', 'Parallel Hypothesis Sampling and Selection') that outline the procedure. However, these remain descriptive text and do not constitute a formally labeled 'Pseudocode' or 'Algorithm' block or figure. |
| Open Source Code | Yes | Supplementary material, code and datasets are available at: https://github.com/fkluger/parsac |
| Open Datasets | Yes | For vanishing point estimation, we have seen an emergence of new large-scale datasets, such as SU3 (Zhou et al. 2019b) and NYU-VP (Kluger et al. 2020b), in recent years. For fundamental matrix and homography estimation, however, Adelaide RMF (Wong et al. 2011) is still the only publicly available dataset. We therefore present two new synthetic datasets: HOPE-F for fundamental matrix fitting, and Synthetic Metropolis Homographies for homography fitting. |
| Dataset Splits | No | The paper specifies training and test sets for the datasets, for example: 'Via this procedure, we generate a total of 4000 image pairs with key-point features, of which we reserve 400 as the test set.' And Table 1 includes 'scenes (train)' and 'scenes (test)'. However, it does not explicitly mention a distinct validation set or a 3-way train/validation/test split for the experiments. |
| Hardware Specification | No | The paper reports computing times with and without GPU acceleration and notes that PARSAC requires a GPU to reach its full potential; Tables 2, 3, and 4 include 'GPU' and 'CPU' columns for time measurements. However, no specific GPU or CPU models, nor any other hardware details (e.g., memory, processor names, or cloud instance types), are provided for the experimental setup. |
| Software Dependencies | No | The paper states: 'For sampling and inlier weight prediction, we implement a neural network based on (Kluger et al. 2020b).' It also refers to the supplementary material for 'implementation details'. However, the main text does not specify any particular software dependencies with version numbers (e.g., Python version, specific deep learning frameworks like PyTorch or TensorFlow with their versions, or other libraries). |
| Experiment Setup | No | The paper refers to supplementary material for 'implementation details, a description of all evaluation metrics, and additional experimental results and discussions'. However, the main text does not explicitly provide specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) or other system-level training configurations for the experiments. |