Video Recovery via Learning Variation and Consistency of Images

Authors: Zhouyuan Huo, Shangqian Gao, Weidong Cai, Heng Huang

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility variables: each entry below gives the variable, the assessed result, and the supporting LLM response.
Research Type: Experimental. We evaluate the proposed method on several video recovery tasks, and the experimental results show that the new method consistently outperforms related approaches. The evaluation compares the proposed video recovery model against five other related methods: the Tucker algorithm (Tucker) (Eldén 2007), low-rank tensor completion (LRTC) (Liu et al. 2009), low-rank tensor completion with consistency (ℓ2-Tensor) (Wang, Nie, and Huang 2014), low-rank tensor completion with both consistency and variation (cap-Tensor), the sectional trace norm with consistency (kmsv-ℓ2), and the proposed sectional trace norm with both consistency and variation. In the experiments, the error bound is γ = 0.05|X|/(nms). Relative square error (RSE), as in (Wang, Nie, and Huang 2014), is used as the performance metric for comparison.
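The report names RSE as the comparison metric but does not reproduce the formula. A minimal sketch, assuming the usual definition RSE = ||X_hat - X||_F / ||X||_F from Wang, Nie, and Huang (2014); the synthetic tensor shapes are illustrative only:

```python
import numpy as np

def relative_square_error(x_hat, x_true):
    """RSE = ||X_hat - X||_F / ||X||_F, computed over all tensor entries
    (assumed standard definition; the paper's exact formula is not quoted)."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

# Toy check on a synthetic video tensor (frames x height x width).
x_true = np.random.rand(30, 64, 64)
x_hat = x_true + 0.01 * np.random.randn(30, 64, 64)
print(relative_square_error(x_hat, x_true))  # small RSE indicates good recovery
```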
Researcher Affiliation: Academia. Zhouyuan Huo (1), Shangqian Gao (2), Weidong Cai (3), Heng Huang (1). (1) Department of Computer Science and Engineering, University of Texas at Arlington, USA; (2) College of Engineering, Northeastern University, USA; (3) School of Information Technologies, University of Sydney, NSW 2006, Australia. Contact: zhouyuan.huo@mavs.uta.edu, gao.sh@husky.neu.edu, tom.cai@sydney.edu.au, heng@uta.edu.
Pseudocode: Yes. Algorithm 1: Algorithm to solve problem (3).
Open Source Code: No. The paper neither provides concrete access to source code for the described methodology nor explicitly states that code will be released.
Open Datasets: Yes. UCF11 Dataset: contains 11 action categories (basketball shooting, biking, diving, golf swinging, and so on); this dataset is very challenging due to large variations in camera motion, object appearance, pose, and so on (Liu, Luo, and Shah 2009). YUV Video Sequences Dataset: commonly used video test sequences in the 4:2:0 YUV format, e.g., Elephant Dream, Highway, News, and Stephan (http://trace.eas.asu.edu/yuv/index.html). Hollywood Human Actions Dataset: video clips, i.e., short sequences from 32 movies such as American Beauty, As Good As It Gets, Being John Malkovich, and Big Fish (Laptev et al. 2008).
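The YUV sequences are stored as raw 4:2:0 frames. A minimal sketch of a frame reader, assuming the common I420 plane layout (a full-resolution Y plane followed by quarter-resolution U and V planes); the file name news_cif.yuv and the CIF resolution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def read_i420_frame(f, width, height):
    """Read one raw 4:2:0 (I420) frame: a full-resolution luma (Y) plane
    followed by two quarter-resolution chroma (U, V) planes."""
    y = np.frombuffer(f.read(width * height), dtype=np.uint8)
    if y.size < width * height:
        return None  # end of stream
    u = np.frombuffer(f.read(width * height // 4), dtype=np.uint8)
    v = np.frombuffer(f.read(width * height // 4), dtype=np.uint8)
    return (y.reshape(height, width),
            u.reshape(height // 2, width // 2),
            v.reshape(height // 2, width // 2))

# Stack the luma planes of a CIF (352x288) sequence into a video tensor:
# with open("news_cif.yuv", "rb") as f:          # hypothetical file name
#     frames = []
#     while (planes := read_i420_frame(f, 352, 288)) is not None:
#         frames.append(planes[0])
#     video = np.stack(frames)                   # frames x height x width
```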
Dataset Splits: No. The paper describes parameter tuning via grid search (e.g., "we tune the parameters through grid search strategy, r = {5, 10, 15, 20, 25, 30} and Nε = {1, 10, 10^2, 10^3, 10^4}"), but it does not explicitly provide training/validation/test splits with percentages or sample counts, nor does it cite predefined splits that would allow reproduction.
Hardware Specification: No. No specific hardware details (e.g., GPU/CPU models, memory amounts, or other machine specifications) used to run the experiments are provided.
Software Dependencies: No. The paper does not name ancillary software with version numbers (e.g., Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup: Yes. In the experiments, the error bound is γ = 0.05|X|/(nms). Parameters are tuned through a grid search strategy, r = {5, 10, 15, 20, 25, 30} and Nε = {1, 10, 10^2, 10^3, 10^4}, with results plotted in Figure 3. The parameter λ is searched over {10^-2, 10^-1, 1, 10, 10^2}.
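The grids above pin down the search space but not the tuning loop itself. A minimal sketch of that loop, where recover() is a hypothetical stand-in for the paper's solver (Algorithm 1), and reading |X| in the γ formula as the sum of absolute entry values is an assumption:

```python
import itertools
import numpy as np

# Parameter grids as reported in the paper's experiment setup.
R_GRID      = [5, 10, 15, 20, 25, 30]
N_EPS_GRID  = [1, 10, 1e2, 1e3, 1e4]
LAMBDA_GRID = [1e-2, 1e-1, 1, 10, 1e2]

def grid_search(x_obs, mask, x_true, recover):
    """Exhaustive grid search; `recover` is a placeholder for the paper's
    solver, not the authors' actual code."""
    n, m, s = x_obs.shape
    # Error bound gamma = 0.05 * |X| / (n*m*s); |X| is read here as the
    # sum of absolute entry values (an assumption about the notation).
    gamma = 0.05 * np.abs(x_obs).sum() / (n * m * s)
    best_rse, best_params = np.inf, None
    for r, n_eps, lam in itertools.product(R_GRID, N_EPS_GRID, LAMBDA_GRID):
        x_hat = recover(x_obs, mask, r=r, n_eps=n_eps, lam=lam, gamma=gamma)
        rse = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        if rse < best_rse:
            best_rse, best_params = rse, (r, n_eps, lam)
    return best_rse, best_params
```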