Diversified Dynamical Gaussian Process Latent Variable Model for Video Repair

Authors: Hao Xiong, Tongliang Liu, Dacheng Tao

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, experimental testing illustrates the robustness and effectiveness of our method for damaged video repair. In this section, we conduct experiments on movie clips from the Hollywood dataset. Since D2GPLVM requires a certain number of frames for training, the clips used for testing were at least seven seconds in length. For each sequence, 40 percent of the frames were randomly selected to generate artificial damage."
Researcher Affiliation | Academia | "Hao Xiong, Tongliang Liu and Dacheng Tao, Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology Sydney; hao.xiong@student.uts.edu.au, tliang.liu@gmail.com, dacheng.tao@uts.edu.au"
Pseudocode | No | The paper describes mathematical formulations and a model, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that the code for the described methodology is open source or provide a link to a code repository.
Open Datasets | Yes | "In this section, we conduct experiments on movie clips from the Hollywood dataset." Footnote 1: http://www.di.ens.fr/~laptev/actions/hollywood2/
Dataset Splits | No | The paper states that "for each sequence, 40 percent of the frames were randomly selected to generate artificial damage," but this describes the creation of damaged test data; it does not specify a standard training/validation/test split for the data used to train the D2GPLVM model itself.
Hardware Specification | Yes | "The code was run on Matlab 2014a on a computer configured with a 3.2GHz CPU and 8GB memory."
Software Dependencies | Yes | "The code was run on Matlab 2014a on a computer configured with a 3.2GHz CPU and 8GB memory."
Experiment Setup | Yes | "Note that only a single parameter λ in our model controls the diversity of the optimized inducing points, which is manually set as 0.01 for all 100 videos here."
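The damage-generation protocol quoted above (40 percent of each clip's frames randomly selected to receive artificial damage) could be sketched as follows. This is a minimal illustration only: the function name, the block-masking damage pattern, and the (frames, height, width) array layout are assumptions, not details taken from the paper.

```python
import numpy as np

def damage_frames(frames, fraction=0.4, seed=0):
    """Randomly select `fraction` of the frames in a clip and apply
    artificial damage, mirroring the paper's stated protocol of
    damaging 40 percent of frames per sequence.

    `frames` is assumed to be an array of shape (T, H, W); the exact
    damage mask used in the paper is not specified, so a zeroed
    central block stands in for missing pixels here.
    """
    rng = np.random.default_rng(seed)
    T = len(frames)
    n_damaged = int(round(fraction * T))
    # choose which frames to damage, without replacement
    damaged_idx = rng.choice(T, size=n_damaged, replace=False)
    out = frames.copy()
    h, w = frames.shape[1], frames.shape[2]
    for t in damaged_idx:
        # zero out a central block as a stand-in for damaged pixels
        out[t, h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 0
    return out, np.sort(damaged_idx)
```

The damaged clip and the indices of the damaged frames would then serve as test input and ground-truth bookkeeping for a repair method such as D2GPLVM.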