Cycle-SUM: Cycle-Consistent Adversarial LSTM Networks for Unsupervised Video Summarization

Authors: Li Yuan, Francis EH Tay, Ping Li, Li Zhou, Jiashi Feng

AAAI 2019, pp. 9143-9150 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two video summarization benchmark datasets validate the state-of-the-art performance and superiority of the Cycle-SUM model over previous baselines.
Researcher Affiliation | Academia | ¹National University of Singapore, ²Hangzhou Dianzi University; {ylustcnus,zhouli2025}@gmail.com, {mpetayeh,elefjia}@nus.edu.sg, lpcs@hdu.edu.cn
Pseudocode | Yes | Algorithm 1: Training Cycle-SUM model
Open Source Code | No | The paper does not contain an unambiguous statement or direct link indicating that the source code for the methodology described is publicly available.
Open Datasets | Yes | We evaluate Cycle-SUM on two benchmark datasets: SumMe (Gygli et al. 2014) and TVSum (Song et al. 2015).
Dataset Splits | No | All experiments are conducted five times on five random splits and we report the average performance.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions using 'GoogLeNet', 'LSTM', 'VAE', 'GAN', and 'WGAN' architectures, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | The selector is a Bi-LSTM network consisting of three layers, each with 300 hidden units. The two generators are implemented by variational auto-encoder LSTM... The two discriminators are LSTM networks... The clipping parameter c in this training falls in [-0.5, 0.5]. The typical value for the generator iteration per discriminator iteration n is 2-5.