SKFlow: Learning Optical Flow with Super Kernels
Authors: Shangkun Sun, Yuanqi Chen, Yu Zhu, Guodong Guo, Ge Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of SKFlow on multiple benchmarks, especially in the occluded areas. |
| Researcher Affiliation | Collaboration | (1) School of Electronic and Computer Engineering, Peking University; (2) Institute of Deep Learning, Baidu Research; (3) West Virginia University |
| Pseudocode | No | The paper does not contain any figure, block, or section explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The code is available at https://github.com/littlespray/SKFlow. |
| Open Datasets | Yes | Our model is evaluated on Sintel [3] and KITTI [22] datasets. We first pre-train our SKFlow using the Flying Chairs → Flying Things schedule, and then fine-tune for Sintel with the combined dataset from Sintel, Flying Things, KITTI and HD1K [18]. |
| Dataset Splits | No | The paper references the standard Sintel and KITTI training and test sets and a combined fine-tuning dataset, but it does not provide explicit percentages, absolute counts, or a methodology for training/validation/test splits within these datasets or for their combined use beyond those standard benchmark sets. |
| Hardware Specification | Yes | Our SKFlow is built with PyTorch [23] library and trained using two Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions the 'PyTorch [23] library' but does not specify a version number for PyTorch or for any other software dependency. |
| Experiment Setup | Yes | Following previous works [36, 16, 42, 20], we first pre-train our SKFlow using the Flying Chairs → Flying Things schedule, and then fine-tune for Sintel with the combined dataset from Sintel, Flying Things, KITTI and HD1K [18]. Finally, we fine-tune our model on KITTI. We adopt the AdamW [19] optimizer and one-cycle policy [28]. |
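
The optimizer and learning-rate schedule named in the Experiment Setup row map onto standard PyTorch components. Below is a minimal sketch of that configuration; the stand-in model, learning rate, weight decay, step count, and loss are illustrative placeholders, not values taken from the paper:

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import OneCycleLR

# Stand-in module; the actual SKFlow network would be used here.
model = torch.nn.Conv2d(3, 2, kernel_size=3, padding=1)

total_steps = 1000  # placeholder; the paper's training schedules run far longer

# AdamW optimizer [19] with placeholder hyperparameters.
optimizer = AdamW(model.parameters(), lr=4e-4, weight_decay=1e-4)

# One-cycle policy [28]: the learning rate warms up, then anneals
# over the full training run.
scheduler = OneCycleLR(optimizer, max_lr=4e-4, total_steps=total_steps)

for step in range(total_steps):
    frames = torch.randn(2, 3, 64, 64)   # dummy input batch
    target = torch.randn(2, 2, 64, 64)   # dummy 2-channel flow target
    loss = (model(frames) - target).abs().mean()  # placeholder L1 loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the one-cycle schedule every iteration
```

The staged training described in the Open Datasets row (pre-train on Flying Chairs → Flying Things, fine-tune on the Sintel mix, then on KITTI) would amount to running a loop like this once per stage with stage-specific data and step counts.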