Spatiotemporal Super-Resolution with Cross-Task Consistency and Its Semi-supervised Extension
Authors: Han-Yi Lin, Pi-Cheng Hsiu, Tei-Wei Kuo, Yen-Yu Lin
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We train and evaluate our model on a single NVIDIA GeForce GTX 1080Ti graphics card with 11GB memory. ... We conduct ablation studies for analyzing our method. Our method can work with existing spatial or temporal SR modules. ... Table 1 reports the individual and the average qualities of spatiotemporal SR from the two network streams, F_S→T and F_T→S, on four datasets. (A sketch of this two-stream composition appears after the table.) |
| Researcher Affiliation | Academia | Han-Yi Lin¹, Pi-Cheng Hsiu²,³, Tei-Wei Kuo¹,⁴ and Yen-Yu Lin²,⁵. ¹Department of Computer Science and Information Engineering, National Taiwan University, Taiwan; ²Research Center for Information Technology Innovation, Academia Sinica, Taiwan; ³Department of Computer Science and Information Engineering, National Chi Nan University, Taiwan; ⁴Department of Computer Science and College of Engineering, City University of Hong Kong, Hong Kong; ⁵Department of Computer Science, National Chiao Tung University, Taiwan |
| Pseudocode | No | No pseudocode or algorithm block is present in the paper. |
| Open Source Code | Yes | The source code of this work is available at https://hankweb.github.io/STSRwithCrossTask/. |
| Open Datasets | Yes | We train the proposed method for spatiotemporal SR by using the training set of the Vimeo-90k dataset [Xue et al., 2019], which is recently built for evaluating the performance of video processing tasks, such as video frame interpolation and super-resolution. |
| Dataset Splits | No | The Vimeo-90k dataset contains 51,313 samples for training and 3,782 samples for testing. |
| Hardware Specification | Yes | We train and evaluate our model on a single NVIDIA GeForce GTX 1080Ti graphics card with 11GB memory. |
| Software Dependencies | No | In this work, we use VDSR [Kim et al., 2016], ESPCN [Shi et al., 2016], and DBPN [Haris et al., 2018] as the spatial SR module, while adopting DVF [Liu et al., 2017], Super Slomo [Jiang et al., 2018], and DAIN [Bao et al., 2019] as the temporal SR module. Since DBPN and DAIN are released in PyTorch, our implementation regarding DBPN and DAIN is realized by PyTorch, while the rest are developed with TensorFlow. |
| Experiment Setup | Yes | We set the batch size, learning rate, momentum, and weight decay to 2, 10⁻³, 0.9, and 5×10⁻⁴, respectively. (A configuration sketch based on these values appears after the table.) |
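The "two network streams, F_S→T and F_T→S" quoted in the Research Type row can be read as the two possible orderings of spatial and temporal SR applied to the same low-resolution, low-frame-rate input. The following is a minimal sketch of that idea under stated assumptions: `spatial_sr`, `temporal_sr`, and the L1 consistency term are illustrative placeholders, not the paper's exact modules or loss.

```python
import torch
import torch.nn.functional as F


def cross_task_consistency(frames_lr, spatial_sr, temporal_sr):
    """Two-stream spatiotemporal SR with an illustrative consistency term.

    frames_lr   : low-resolution, low-frame-rate clip, e.g. shape (B, T, C, H, W)
    spatial_sr  : callable that upscales every frame spatially (assumed interface)
    temporal_sr : callable that interpolates intermediate frames (assumed interface)
    """
    # Stream F_S->T: spatial SR first, then temporal SR.
    out_st = temporal_sr(spatial_sr(frames_lr))

    # Stream F_T->S: temporal SR first, then spatial SR.
    out_ts = spatial_sr(temporal_sr(frames_lr))

    # Cross-task consistency: push the two orderings toward the same output
    # (an L1 penalty is used here purely as an example).
    loss_consistency = F.l1_loss(out_st, out_ts)
    return out_st, out_ts, loss_consistency
```

Existing modules such as DBPN or DAIN would be plugged in as `spatial_sr` and `temporal_sr`, which matches the quoted claim that the method works with existing spatial or temporal SR modules.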
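The hyperparameters quoted in the Experiment Setup row map directly onto a standard PyTorch optimizer configuration. The sketch below assumes SGD with momentum, which is suggested by the momentum and weight-decay values but is not stated explicitly in the quoted sentence; the model is a hypothetical stand-in.

```python
import torch

# Hypothetical stand-in for the spatiotemporal SR network.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Values quoted in the table: batch size 2, learning rate 1e-3,
# momentum 0.9, weight decay 5e-4. SGD itself is an assumption.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,
    momentum=0.9,
    weight_decay=5e-4,
)

batch_size = 2  # reported to fit a single GeForce GTX 1080Ti with 11 GB of memory
```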