Optical Flow Estimation from a Single Motion-blurred Image

Authors: Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Jae Won Cho, In So Kweon

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We qualitatively and quantitatively evaluate our model through a large set of experiments on synthetic and real motion-blur datasets. We also provide in-depth analysis of our model in connection with related approaches to highlight the effectiveness and favorability of our approach. Furthermore, we showcase the applicability of the flow estimated by our method on deblurring and moving object segmentation tasks.
Researcher Affiliation | Academia | Dawit Mureja Argaw, Junsik Kim, Francois Rameau, Jae Won Cho, In So Kweon. KAIST Robotics and Computer Vision Lab., Daejeon, Korea. dawitmureja@kaist.ac.kr, {mibastro, rameau.fr}@gmail.com, {chojw, iskweon77}@kaist.ac.kr
Pseudocode | No | The paper describes the model architecture and training process in text and with equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, open-source code for the described methodology.
Open Datasets | Yes | We take advantage of the Monkaa dataset proposed in (N. Mayer et al. 2016) to generate a synthetic image motion-blur dataset for optical flow estimation. To generate real scene motion-blur images for network training, we use high speed video datasets: GoPro (Nah, Kim, and Lee 2017) and NfS (Galoogahi et al. 2017).
Dataset Splits | Yes | We generate a Monkaa blur dataset with 10,000 training and 1,200 test images (see Fig. 1). Out of 100 videos in the dataset, 70 are used for training and the remaining videos are used for validation and testing.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions optimization methods and network architectures but does not specify software dependencies (e.g., library names with version numbers, such as Python 3.8, PyTorch 1.9, or a specific CUDA version).
Experiment Setup | Yes | We estimate flows at 6 different feature levels with training loss weight coefficients set as follows: w6 = 0.32, w5 = 0.08, w4 = 0.04, w3 = 0.02, w2 = 0.01 and w1 = 0.005 from the lowest to the highest resolution, respectively. At each level, we use a correlation layer with a neighborhood search range of 4 pixels and a stride size of 1. We chose Adam (Kingma and Ba 2015) as the optimization method with parameters β1, β2 and weight decay fixed to 0.9, 0.999 and 4e-4, respectively. In all experiments, a mini-batch size of 4 and an image size of 256×256 is used, by centrally cropping inputs. Following (Fischer et al. 2015), we train on the Monkaa blur dataset for 300 epochs with initial learning rate λ = 1e-4. We gradually decayed the learning rate by half at 100, 150, 200 and 250 epochs during training. For the GoPro and NfS blur datasets, we trained (fine-tuned) the model for 120 epochs with a learning rate initialized to λ = 1e-4 and decayed by half at 60, 80 and 100 epochs.
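The quoted setup can be made concrete with a small sketch. The function and variable names below are illustrative (not from the authors' code); the sketch only encodes the numbers stated in the paper: the per-level loss weights w1–w6, the initial learning rate of 1e-4, and the halve-at-milestones schedule (100/150/200/250 epochs for Monkaa; 60/80/100 for GoPro/NfS fine-tuning).

```python
# Hedged sketch of the training schedule described in the paper.
# All values come from the quoted Experiment Setup; the structure is assumed.

# Loss weight per feature level, from lowest (w6) to highest (w1) resolution.
LEVEL_WEIGHTS = {6: 0.32, 5: 0.08, 4: 0.04, 3: 0.02, 2: 0.01, 1: 0.005}

def learning_rate(epoch, base_lr=1e-4, milestones=(100, 150, 200, 250)):
    """Halve the learning rate at each milestone epoch (Monkaa schedule).

    For GoPro/NfS fine-tuning, pass milestones=(60, 80, 100).
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= 0.5
    return lr

def total_loss(per_level_losses):
    """Weighted sum of per-level flow losses, keyed by feature level 1..6."""
    return sum(LEVEL_WEIGHTS[lvl] * loss
               for lvl, loss in per_level_losses.items())

# Example: the schedule starts at 1e-4 and is 5e-5 after the first milestone.
lr_start = learning_rate(0)      # 1e-4
lr_mid = learning_rate(120)      # 5e-5 (halved once, at epoch 100)
```

In a real training loop this would typically be handled by an optimizer-attached scheduler (e.g., a multi-step LR scheduler with gamma 0.5), but the standalone form above makes the stated milestones easy to check.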