Dynamic Filter Networks

Authors: Xu Jia, Bert De Brabandere, Tinne Tuytelaars, Luc Van Gool

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the effectiveness of the dynamic filter network on the tasks of video and stereo prediction, and reach state-of-the-art performance on the moving MNIST dataset with a much smaller model." and, from Section 4 (Experiments): "The Dynamic Filter Network can be used in different ways in a wide variety of applications. In this section we show its application in learning steerable filters, video prediction and stereo prediction. All code to reproduce the experiments is available at https://github.com/dbbert/dfn."
Researcher Affiliation | Academia | Xu Jia (ESAT-PSI, KU Leuven, iMinds), Bert De Brabandere (ESAT-PSI, KU Leuven, iMinds), Tinne Tuytelaars (ESAT-PSI, KU Leuven, iMinds), Luc Van Gool (ESAT-PSI, KU Leuven, iMinds; D-ITET, ETH Zurich)
Pseudocode | No | The paper does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "All code to reproduce the experiments is available at https://github.com/dbbert/dfn."
Open Datasets | Yes | "Moving MNIST: We first evaluate the method on the synthetic moving MNIST dataset [19]. Given a sequence of 10 frames with two moving digits as input, the goal is to predict the following 10 frames. We use the code provided by [19] to generate training samples on-the-fly, and use the provided test set for comparison."
Dataset Splits | No | The paper mentions training and test sets but does not explicitly describe a separate validation set or a three-way split for any of the datasets used.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details with version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup | No | The paper names the loss functions used (binary cross-entropy, Euclidean loss) and the filter size (9x9), but defers comprehensive hyperparameter details (e.g., learning rate, batch size, number of epochs) to the external code: "Details on the hyper-parameter can be found in the available code."
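For orientation, the core operation these experiments rely on is dynamic local filtering: a filter-generating network outputs one k x k filter per pixel (k = 9 in the paper), and each filter is applied to the window around its pixel. The sketch below illustrates only that sample-specific filtering step in NumPy; the function name and shapes are our illustration, not the authors' released implementation.

```python
import numpy as np

def dynamic_local_filtering(image, filters):
    """Apply a sample-specific filter at every pixel (illustrative sketch).

    image:   (H, W) input frame
    filters: (H, W, k, k) one k x k filter per output pixel,
             e.g. k = 9 as in the paper; here the filters are assumed
             to be produced by some filter-generating network
    """
    H, W = image.shape
    k = filters.shape[-1]
    pad = k // 2
    # Zero-pad so every pixel has a full k x k neighborhood.
    padded = np.pad(image, pad, mode="constant")
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            # Window centered on pixel (i, j), filtered by its own kernel.
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * filters[i, j])
    return out
```

As a sanity check, a filter bank that is 1 at the center tap and 0 elsewhere reduces to the identity, while shifted delta filters translate the image, which is how the network can express per-pixel motion for video prediction.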