Semi-Supervised Learning for Optical Flow with Generative Adversarial Networks
Authors: Wei-Sheng Lai, Jia-Bin Huang, Ming-Hsuan Yang
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark datasets demonstrate that the proposed semi-supervised algorithm performs favorably against purely supervised and baseline semi-supervised learning schemes. |
| Researcher Affiliation | Collaboration | ¹University of California, Merced; ²Virginia Tech; ³NVIDIA Research |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The source code is publicly available at http://vllab.ucmerced.edu/wlai24/semiFlowGAN. |
| Open Datasets | Yes | We use the Flying Chairs dataset [8] as the labeled dataset and the KITTI raw videos [10] as the unlabeled dataset. |
| Dataset Splits | No | The paper mentions training and test sets (e.g., 'The training and test sets contain 1041 and 552 image pairs, respectively.' for Sintel), but does not explicitly provide details about a separate validation set split. |
| Hardware Specification | No | The paper mentions the use of the Torch framework but does not provide any specific details about the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper states 'We implement the proposed method using the Torch framework [6]' and 'We use the Adam solver [19]', but it does not provide specific version numbers for the Torch framework or any other software libraries or dependencies used. |
| Experiment Setup | Yes | We use the Adam solver [19] to optimize both the generator and discriminator with β1 = 0.9, β2 = 0.999 and a weight decay of 1e-4. We set the initial learning rate to 1e-4 and then multiply it by 0.5 every 100k iterations after the first 200k iterations. We train the network for a total of 600k iterations. In each mini-batch, we randomly sample 4 image pairs from each dataset. |
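The learning-rate schedule quoted above can be made concrete with a short sketch. The paper implements its method in the Torch framework; the helper below is a hypothetical plain-Python rendering of the stated schedule (initial rate 1e-4, halved every 100k iterations once the first 200k iterations have elapsed). The paper's wording is ambiguous about whether the first halving occurs at iteration 200k or 300k; this sketch assumes the first halving at 200k.

```python
def learning_rate(iteration, base_lr=1e-4):
    """Hypothetical reconstruction of the paper's step schedule.

    Assumption: the first 0.5x decay fires at iteration 200k, and a
    further decay fires every 100k iterations thereafter, over a
    600k-iteration training run.
    """
    if iteration < 200_000:
        return base_lr
    # Number of halvings applied so far (200k -> 1, 300k -> 2, ...).
    num_decays = (iteration - 200_000) // 100_000 + 1
    return base_lr * (0.5 ** num_decays)
```

For example, under this reading the rate is 1e-4 until iteration 199,999, drops to 5e-5 at 200k, and reaches 3.125e-6 by the final 600k-th iteration.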