Motion-blurred Video Interpolation and Extrapolation

Authors: Dawit Mureja Argaw, Junsik Kim, Francois Rameau, In So Kweon

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The effectiveness and favorability of our approach are highlighted through extensive qualitative and quantitative evaluations on motion-blurred datasets from high speed videos."
Researcher Affiliation | Academia | Dawit Mureja Argaw, Junsik Kim, Francois Rameau, In So Kweon; KAIST Robotics and Computer Vision Lab., Daejeon, Korea; dawitmureja@kaist.ac.kr, {mibastro, rameau.fr}@gmail.com, iskweon77@kaist.ac.kr
Pseudocode | No | The paper describes the proposed approach and its components using textual descriptions and mathematical equations, but it does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement that the source code for the described methodology is released, nor does it provide a direct link to a code repository.
Open Datasets | Yes | "To train our network for the task at hand, we take advantage of two publicly available high speed video datasets to generate motion-blurred images. The GoPro high speed video dataset (Nah, Kim, and Lee 2017)... We also used the recently proposed Sony RX V high frame rate video dataset (Jin, Hu, and Favaro 2019)..." (A frame-averaging sketch of this blur synthesis follows the table.)
Dataset Splits | No | The paper specifies training sets ("22 videos for training", "40 videos during training") and videos used for testing ("8 videos from each dataset"), but it does not provide explicit details about a separate validation split (e.g., percentages or counts) for hyperparameter tuning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper states "We implemented and trained our model in PyTorch (Paszke et al. 2019)" and mentions a "pretrained FlowNet2 (Ilg et al. 2017)", but it does not provide version numbers for PyTorch or any other software dependency, which a reproducible description would require.
Experiment Setup | Yes | "We used Adam (Kingma and Ba 2015) optimizer with parameters β1, β2 and weight decay fixed to 0.9, 0.999 and 4e-4, respectively. We trained our network using a mini-batch size of 4 image pairs by randomly cropping image patch sizes of 256×256. The loss weight coefficients are fixed to w6 = 0.32, w5 = 0.08, w4 = 0.04, w3 = 0.02, w2 = 0.01 and w1 = 0.005 from the lowest to the highest resolution, respectively, for both frames and flows. We trained our model for 120 epochs with initial learning rate fixed to λ = 1e-4 and gradually decayed by half at 60, 80 and 100 epochs. For the first 15 epochs, we only trained the optical flow estimator by setting α1 = 0 and α2 = 1 to facilitate feature decoding and flow estimation. For the rest of the epochs, we fixed α1 = 1 and α2 = 1." (A training-loop sketch of this setup follows the table.)
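
Both source datasets contain sharp high speed footage rather than blurred frames; the paper synthesizes its motion-blurred inputs from them. A standard way to do this, used for instance to build the GoPro dataset (Nah, Kim, and Lee 2017), is to average a window of consecutive sharp frames. The Python sketch below is a minimal, hypothetical version of that idea, not the authors' pipeline (no code is released); the function names and the window size are illustrative.

    import cv2
    import numpy as np

    def synthesize_blur(frames):
        """Average consecutive sharp frames to mimic motion blur.

        Averaging is done in float32 to avoid uint8 overflow; a
        photometrically faithful pipeline would linearize intensities
        first (omitted here for brevity).
        """
        stack = np.stack([f.astype(np.float32) for f in frames])
        return stack.mean(axis=0).astype(np.uint8)

    def blur_pairs_from_video(path, window=7):
        """Yield (blurred, sharp_frames) pairs from one high speed video.

        'window' is illustrative; the number of averaged frames controls
        the simulated blur magnitude.
        """
        cap = cv2.VideoCapture(path)
        buffer = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            buffer.append(frame)
            if len(buffer) == window:
                yield synthesize_blur(buffer), list(buffer)
                buffer.clear()
        cap.release()

Pairs produced this way give the blurred network input together with the sharp frames that serve as interpolation and extrapolation targets.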
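The Experiment Setup row maps almost one-to-one onto standard PyTorch boilerplate. The sketch below wires up Adam with the stated betas and weight decay, the learning rate halved at epochs 60, 80 and 100, the per-scale loss weights w6 ... w1, and the two-phase schedule (α1 = 0 for the first 15 epochs, then α1 = 1). The network, the data, and the L1 losses are hypothetical stand-ins, since the paper releases no code and defines its own frame and flow losses.

    import torch
    import torch.nn.functional as F

    # Hypothetical stand-in for the authors' (unreleased) network: any module
    # returning six per-scale (frame, flow) predictions, lowest resolution
    # first, would slot in here.
    class StandInNet(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = torch.nn.Conv2d(3, 5, 3, padding=1)  # 3 frame + 2 flow channels

        def forward(self, x):
            out = self.conv(x)
            return [(F.interpolate(out[:, :3], scale_factor=2.0 ** (i - 5)),
                     F.interpolate(out[:, 3:], scale_factor=2.0 ** (i - 5)))
                    for i in range(6)]

    model = StandInNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                                 betas=(0.9, 0.999), weight_decay=4e-4)
    # Halve the learning rate at epochs 60, 80 and 100.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[60, 80, 100], gamma=0.5)

    weights = [0.32, 0.08, 0.04, 0.02, 0.01, 0.005]  # w6 ... w1, low to high res

    for epoch in range(120):
        # First 15 epochs: train the flow estimator only (alpha1 = 0, alpha2 = 1).
        a1, a2 = (0.0, 1.0) if epoch < 15 else (1.0, 1.0)
        # One synthetic batch shown; a real run iterates a loader of
        # mini-batches of 4 random 256x256 crops.
        blurred = torch.rand(4, 3, 256, 256)
        loss = torch.zeros(())
        for w, (frame, flow) in zip(weights, model(blurred)):
            # L1 against zero targets is a placeholder for the paper's
            # per-scale frame and flow losses.
            loss = loss + w * (a1 * F.l1_loss(frame, torch.zeros_like(frame))
                               + a2 * F.l1_loss(flow, torch.zeros_like(flow)))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()

MultiStepLR reproduces the stated decay points directly, and the epoch-indexed (a1, a2) switch captures the two training phases without needing separate optimizers.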