PredCNN: Predictive Learning with Cascade Convolutions

Authors: Ziru Xu, Yunbo Wang, Mingsheng Long, Jianmin Wang

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our model on the standard Moving MNIST dataset and two challenging crowd flow prediction datasets, and show that PredCNN outperforms the state-of-the-art recurrent models for video prediction while achieving a faster training speed and a lower memory footprint.
Researcher Affiliation | Academia | Ziru Xu, Yunbo Wang, Mingsheng Long, and Jianmin Wang. KLiss, MOE, School of Software, Tsinghua University, China; National Engineering Laboratory for Big Data Software; Beijing Key Laboratory for Industrial Big Data System and Application. {xzr16,wangyb15}@mails.tsinghua.edu.cn, {mingsheng,jimwang}@tsinghua.edu.cn
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code | Yes | Datasets and codes will be released at https://github.com/thuml.
Open Datasets | Yes | TaxiBJ and BikeNYC [Zhang et al., 2017] are two crowd flow prediction datasets, collected from GPS trajectory monitors in Beijing and New York respectively. ... Besides, we also apply our method to a commonly used video prediction dataset, Moving MNIST...
Dataset Splits | No | The paper specifies train and test splits for the datasets (e.g., a "training set of 19,788 sequences and a test set of 1,344 sequences" for TaxiBJ), but it does not explicitly mention a separate validation split.
Hardware Specification | No | The paper does not specify any particular hardware components, such as GPU or CPU models, used for running the experiments. It only mentions training time and memory usage.
Software Dependencies | No | All experiments are implemented in Keras [Chollet and others, 2015] with TensorFlow [Abadi et al., 2016] as back-end. The paper names the software used but does not provide version numbers for Keras or TensorFlow.
Experiment Setup | Yes | Unless otherwise specified, the starting learning rate of Adam is set to 10^-4, and the training process is stopped after 100 epochs with a batch size of 16. ... We use the Adam optimizer with a starting learning rate of 10^-4, 8 video sequences per batch, and the training process is stopped after approximately 200,000 iterations.
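
To make the reported setup concrete, below is a minimal Keras training sketch matching the hyperparameters quoted above (Adam with a starting learning rate of 10^-4, batch size 16, 100 epochs). The `build_predcnn` constructor and the random arrays are hypothetical stand-ins, not the authors' released code, and the placeholder layer stack does not implement the paper's cascade architecture.

```python
# Sketch of the reported training configuration, NOT the authors' code.
# Assumptions: a placeholder model builder and dummy crowd-flow-shaped data.
import numpy as np
from tensorflow import keras


def build_predcnn(input_shape):
    # Hypothetical placeholder model; the real PredCNN uses cascade
    # convolutions, which are not reproduced here.
    inputs = keras.Input(shape=input_shape)
    x = keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
    outputs = keras.layers.Conv2D(input_shape[-1], 3, padding="same")(x)
    return keras.Model(inputs, outputs)


# Dummy data mimicking a 2-channel (inflow/outflow) crowd flow grid.
x_train = np.random.rand(128, 32, 32, 2).astype("float32")
y_train = np.random.rand(128, 32, 32, 2).astype("float32")

model = build_predcnn(x_train.shape[1:])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),  # starting LR 10^-4
    loss="mse",
)
# Batch size 16, 100 epochs, per the crowd flow setting in the paper.
model.fit(x_train, y_train, batch_size=16, epochs=100)
```

For the Moving MNIST setting quoted in the same row, the analogous change would be `batch_size=8` with training stopped after approximately 200,000 iterations rather than a fixed epoch count.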