Neuroexplicit Diffusion Models for Inpainting of Optical Flow Fields
Authors: Tom Fischer, Pascal Peter, Joachim Weickert, Eddy Ilg
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model outperforms both fully explicit and fully data-driven baselines in terms of reconstruction quality, robustness and amount of required training data. Averaging the endpoint error across different mask densities, our method outperforms the explicit baselines by 11–27%, the GAN baseline by 47% and the Probabilistic Diffusion baseline by 42%. With that, our method sets a new state of the art for inpainting of optical flow fields from random masks. |
| Researcher Affiliation | Academia | 1Computer Vision and Machine Perception Lab, Saarland University, Saarbrücken, Germany 2Mathematical Image Analysis Group, Saarland University, Saarbrücken, Germany. Correspondence to: Tom Fischer, Eddy Ilg <{fischer, ilg}@cs.uni-saarland.de>, Pascal Peter, Joachim Weickert <{peter, weickert}@mia.uni-saarland.de>. |
| Pseudocode | No | The paper describes mathematical formulations and discusses algorithms but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link indicating the availability of its source code. |
| Open Datasets | Yes | We train on the final subset of the FlyingThings dataset (Mayer et al., 2016) that removes overly large displacements and evaluate on the Sintel dataset (Butler et al., 2012). |
| Dataset Splits | No | The paper mentions training and evaluation on different datasets but does not explicitly describe a validation set or split used for model tuning or early stopping during training. |
| Hardware Specification | Yes | All methods are implemented in PyTorch and trained on an Nvidia A100 GPU. |
| Software Dependencies | No | The paper states "All methods are implemented in PyTorch" but does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We trained the neural networks for a total of 900,000 iterations using a batch size of 16. For the optimizer, we choose Adam (Kingma & Ba, 2015) with the default parameter configuration β1 = 0.9, β2 = 0.999. We use an initial learning rate of 0.0001 that is halved every 100,000 iterations after the first 300,000. |
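The Experiment Setup row maps directly onto standard PyTorch components. Below is a minimal, hypothetical sketch of the reported configuration (Adam with β1 = 0.9, β2 = 0.999, initial learning rate 1e-4 halved every 100k iterations after the first 300k, 900k iterations, batch size 16), together with the standard average endpoint error metric referenced in the results row. The model, data, and loss here are placeholders, not the paper's released architecture or training code.

```python
import torch

# Hypothetical stand-in for the paper's network, which is not released.
model = torch.nn.Conv2d(2, 2, kernel_size=3, padding=1)

# Adam with the default betas quoted in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

def halving_schedule(iteration: int) -> float:
    # Constant learning rate for the first 300k iterations, then halved
    # every 100k. This reading places the first halving at iteration 400k;
    # the paper's wording is ambiguous about whether it occurs at 300k.
    return 0.5 ** max(0, (iteration - 300_000) // 100_000)

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=halving_schedule)

def endpoint_error(flow_pred: torch.Tensor, flow_gt: torch.Tensor) -> torch.Tensor:
    # Standard average endpoint error (EPE): mean Euclidean distance between
    # predicted and ground-truth flow vectors, with flows shaped (B, 2, H, W).
    return torch.linalg.vector_norm(flow_pred - flow_gt, dim=1).mean()

for iteration in range(900_000):  # 900k iterations, batch size 16 per the paper
    flow_pred = model(torch.randn(16, 2, 64, 64))  # placeholder input batch
    loss = endpoint_error(flow_pred, torch.zeros_like(flow_pred))  # placeholder target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

With this schedule the learning rate is 1e-4 through iteration 399,999, 5e-5 from 400k, 2.5e-5 from 500k, and so on; the report's "No" for pseudocode and open source code means the above cannot be checked against an official implementation.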