Context-Aware Iteration Policy Network for Efficient Optical Flow Estimation
Authors: Ri Cheng, Ruian He, Xuhao Jiang, Shili Zhou, Weimin Tan, Bo Yan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our method maintains performance while reducing FLOPs by about 40%/20% for the Sintel/KITTI datasets. |
| Researcher Affiliation | Academia | School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University; rcheng22@m.fudan.edu.cn, rahe16@fudan.edu.cn, 20110240011@fudan.edu.cn, slzhou19@fudan.edu.cn, wmtan@fudan.edu.cn, byan@fudan.edu.cn |
| Pseudocode | No | The paper describes the method using equations and architectural diagrams (e.g., Figure 4) but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The provided text does not contain an explicit statement about releasing open-source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We measure average Endpoint Error (EPE) and the percentage of optical flow outliers over all pixels (F1-all) for the Sintel (Butler et al. 2012) and KITTI (Menze and Geiger 2015) datasets. C+T refers to results trained on the Chairs (Dosovitskiy et al. 2015) and Things (Mayer et al. 2016) datasets. S/K(+H) refers to methods fine-tuned on Sintel (Butler et al. 2012), KITTI (Menze and Geiger 2015), and, for some methods, HD1K (Kondermann et al. 2016). (A sketch of both metrics follows the table.) |
| Dataset Splits | No | The paper mentions 'training datasets' and 'test datasets' (Sintel, KITTI, Chairs, Things, HD1K) but does not specify explicit training/validation/test splits (e.g., percentages, sample counts, or specific predefined validation sets). |
| Hardware Specification | Yes | Each method was evaluated on an NVIDIA GeForce RTX 3090 GPU to measure the inference speed per sample. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | The weights λ_res and λ_incre in the overall loss (Equation 10) are set to 50 and 1. r is randomly sampled from [0.2, 1.0]. The learning rate is the same as in the referenced codebases. (A sketch of the loss weighting follows the table.) |
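
For reference, below is a minimal sketch of the two evaluation metrics quoted in the Open Datasets row (EPE and F1-all). It assumes `flow_pred` and `flow_gt` are H×W×2 NumPy arrays; the outlier criterion (error > 3 px and > 5% of the ground-truth magnitude) follows the standard KITTI benchmark definition and is not code from the paper.

```python
import numpy as np

def epe(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average Endpoint Error between predicted and ground-truth flow (H, W, 2)."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)  # per-pixel L2 error
    return float(err.mean())

def f1_all(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Percentage of outlier pixels under the standard KITTI criterion:
    a pixel is an outlier if its EPE > 3 px AND > 5% of the GT flow magnitude."""
    err = np.linalg.norm(flow_pred - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    outliers = (err > 3.0) & (err > 0.05 * mag)
    return float(100.0 * outliers.mean())
```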
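The Experiment Setup row quotes only the hyperparameter values, so the following is a hedged sketch of how they could plug into an overall loss of the form L = λ_res·L_res + λ_incre·L_incre (Equation 10 of the paper). The function name `overall_loss` and the uniform sampling of r are illustrative assumptions, not the authors' code.

```python
import random
import torch

LAMBDA_RES = 50.0    # λ_res in Equation 10 (value quoted in the table)
LAMBDA_INCRE = 1.0   # λ_incre in Equation 10 (value quoted in the table)

def overall_loss(l_res: torch.Tensor, l_incre: torch.Tensor) -> torch.Tensor:
    """Weighted sum of the two loss terms: L = λ_res·L_res + λ_incre·L_incre."""
    return LAMBDA_RES * l_res + LAMBDA_INCRE * l_incre

# The paper says r is randomly sampled from [0.2, 1.0];
# uniform sampling is an assumption here.
r = random.uniform(0.2, 1.0)
```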