Unsupervised Deep Learning for Optical Flow Estimation

Authors: Zhe Ren, Junchi Yan, Bingbing Ni, Bin Liu, Xiaokang Yang, Hongyuan Zha

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We evaluate our method on three modern datasets: the MPI-Sintel dataset (Butler et al. 2012), the KITTI dataset (Geiger, Lenz, and Urtasun 2012), and the Flying Chairs dataset (Fischer et al. 2015)... The evaluation results regarding the endpoint error (EPE) on the training and testing sets are reported in Table 2. (An EPE sketch follows the table.) |
| Researcher Affiliation | Collaboration | Zhe Ren (Shanghai Jiao Tong University), Junchi Yan (East China Normal University; IBM Research), Bingbing Ni (Shanghai Jiao Tong University), Bin Liu (Moshanghua Tech), Xiaokang Yang (Shanghai Jiao Tong University), Hongyuan Zha (Georgia Tech) |
| Pseudocode | No | The paper describes the network and training process but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Finally, to enable comparison and further innovation, we will provide a public Caffe (Jia et al. 2014) implementation of our method after the release of this paper. |
| Open Datasets | Yes | We evaluate our method on three modern datasets: the MPI-Sintel dataset (Butler et al. 2012) is obtained from an animated movie... the KITTI dataset (Geiger, Lenz, and Urtasun 2012) contains photos shot in city streets... the Flying Chairs dataset (Fischer et al. 2015) is a recently released synthetic benchmark... |
| Dataset Splits | Yes | In our experiment, we combine the multi-view extended versions (without ground truth) of the two KITTI datasets into a training set of 13372 image pairs, and use the pairs with ground truth as our validation set: 194 pairs for KITTI2012 and 200 for KITTI2015, respectively. Following (Fischer et al. 2015), we split the Flying Chairs dataset into 22232 samples (i.e., image pairs) for training and 640 samples for testing. (A split sketch follows the table.) |
| Hardware Specification | No | The paper mentions 'CPU' and 'GPU' in Table 2 but does not provide specific hardware models, processors, or memory details used for the experiments. |
| Software Dependencies | No | The paper mentions 'Caffe (Jia et al. 2014)' but does not provide a specific version number for Caffe or other software dependencies. |
| Experiment Setup | Yes | For the loss function, we set α = 2 in Eq. 3 and γ = 1 in Eq. 1. In line with (Fischer et al. 2015), we adopt the Adam method and set its parameters β1 = 0.9 and β2 = 0.999. The starting learning rate λ is set to 1e-4 and is halved every 6000 iterations after the first 30000 iterations. The batch size is set to 64. For fine-tuning, we start with a learning rate of 1e-5. (A schedule sketch follows the table.) |
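
For reference, the endpoint error (EPE) quoted in the Research Type row is the standard optical-flow metric: the mean Euclidean distance between predicted and ground-truth flow vectors over all pixels. A minimal NumPy sketch (function and array names are ours, not the paper's):

```python
import numpy as np

def endpoint_error(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average endpoint error between two flow fields of shape (H, W, 2),
    where the last axis holds the (u, v) displacement components."""
    diff = flow_pred - flow_gt
    # Per-pixel Euclidean distance, then averaged over all pixels.
    return float(np.sqrt((diff ** 2).sum(axis=-1)).mean())
```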
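The Flying Chairs split in the Dataset Splits row (22232 training pairs, 640 test pairs) follows Fischer et al. 2015, who distribute a fixed split with the dataset. The sketch below is only a hypothetical stand-in illustrating the reported proportions; reproducing the paper requires the official split.

```python
import random

TRAIN_PAIRS = 22232  # reported training-set size
TEST_PAIRS = 640     # reported test-set size

def split_indices(seed: int = 0) -> tuple[list[int], list[int]]:
    """Illustrative deterministic split over image-pair indices;
    the actual benchmark ships a fixed official split file."""
    idx = list(range(TRAIN_PAIRS + TEST_PAIRS))
    random.Random(seed).shuffle(idx)
    return idx[:TRAIN_PAIRS], idx[TRAIN_PAIRS:]
```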
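The learning-rate schedule in the Experiment Setup row can be made concrete. A minimal sketch, assuming the halving compounds every 6000 iterations once the first 30000 iterations have passed (our reading of the quoted text); Adam's β1 = 0.9 and β2 = 0.999 are as reported:

```python
def learning_rate(iteration: int, base_lr: float = 1e-4,
                  warmup: int = 30000, step: int = 6000) -> float:
    """Reported schedule: constant base_lr for the first `warmup`
    iterations, then halved every `step` iterations thereafter.
    For fine-tuning the paper restarts from base_lr = 1e-5."""
    if iteration < warmup:
        return base_lr
    return base_lr * 0.5 ** ((iteration - warmup) // step)

# e.g. learning_rate(0) == 1e-4, learning_rate(36000) == 5e-5
```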