Bayesian Prediction of Future Street Scenes using Synthetic Likelihoods

Authors: Apratim Bhattacharyya, Mario Fritz, Bernt Schiele

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
Researcher Affiliation | Academia | Apratim Bhattacharyya, Mario Fritz, Bernt Schiele, Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany, {abhattac, mfritz, schiele}@mpi-inf.mpg.de
Pseudocode | No | The paper describes its methods and architectures using text and diagrams, but it does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | No | The paper does not contain any statement about releasing open-source code or provide a link to a code repository.
Open Datasets | Yes | We show that our approach achieves accurate state-of-the-art predictions and calibrated probabilities through extensive experiments for scene anticipation on Cityscapes dataset. Moreover, we show that our approach generalizes across diverse tasks such as digit generation and precipitation forecasting.
Dataset Splits | Yes | We always use the annotated 20th frame of the validation sequences for evaluation using the standard mean Intersection-over-Union (mIoU) and the per-pixel (negative) conditional log-likelihood (CLL) metrics. (A minimal mIoU sketch is given after this table.)
Hardware Specification | No | The paper describes the model architecture and training process but does not specify any particular hardware (e.g., GPU models, CPU types) used for the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and PSPNet for segmentation but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We use a fully connected generator with 6000-4000-2000 hidden units with 50% dropout probability. The discriminator has 1000-1000 hidden units with leaky ReLU non-linearities. We set β = 10^4 for the first 4 epochs and then reduce it to 0, to provide stability during the initial epochs. We train all models using Adam (Kingma & Ba, 2015) for 50 epochs with batch size 8. (A sketch of this configuration is given after this table.)
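
For context on the metric named in the Dataset Splits row, below is a minimal NumPy sketch of the standard mean Intersection-over-Union computation. The 19-class count, the 255 ignore label, and the evaluation over a single label map are common Cityscapes conventions assumed here for illustration; they are not details quoted from the paper.

    import numpy as np

    def mean_iou(pred, gt, num_classes=19, ignore_index=255):
        # Per-class IoU averaged over the classes that appear in either map;
        # pixels labelled ignore_index are excluded from the computation.
        valid = gt != ignore_index
        ious = []
        for c in range(num_classes):
            p = pred[valid] == c
            g = gt[valid] == c
            union = np.logical_or(p, g).sum()
            if union == 0:
                continue  # class absent from both maps: skip it
            ious.append(np.logical_and(p, g).sum() / union)
        return float(np.mean(ious)) if ious else 0.0

    # Toy usage on random label maps of Cityscapes-like resolution.
    rng = np.random.default_rng(0)
    pred = rng.integers(0, 19, size=(1024, 2048))
    gt = rng.integers(0, 19, size=(1024, 2048))
    print(mean_iou(pred, gt))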
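
To make the quoted Experiment Setup concrete, here is a minimal PyTorch sketch of the described fully connected generator and discriminator. Only the hidden-layer widths, the 50% dropout, the leaky ReLU non-linearities, the Adam optimizer, the 50 epochs, and the batch size of 8 come from the quoted text; the latent and output dimensionalities (64 and 784), the generator's hidden activation, the leaky ReLU slope, and the output Sigmoid are assumptions, and the β schedule and the synthetic-likelihood training loop are not reproduced here.

    import torch
    import torch.nn as nn

    latent_dim, out_dim = 64, 784  # assumed sizes; the quote does not state them

    # Generator: 6000-4000-2000 hidden units with 50% dropout after each
    # hidden layer (ReLU is assumed as the hidden activation).
    generator = nn.Sequential(
        nn.Linear(latent_dim, 6000), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(6000, 4000), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(4000, 2000), nn.ReLU(), nn.Dropout(0.5),
        nn.Linear(2000, out_dim), nn.Sigmoid(),
    )

    # Discriminator: 1000-1000 hidden units with leaky ReLU non-linearities
    # (the 0.2 slope is an assumption).
    discriminator = nn.Sequential(
        nn.Linear(out_dim, 1000), nn.LeakyReLU(0.2),
        nn.Linear(1000, 1000), nn.LeakyReLU(0.2),
        nn.Linear(1000, 1),
    )

    # Optimization details quoted in the row: Adam, 50 epochs, batch size 8.
    opt_g = torch.optim.Adam(generator.parameters())
    opt_d = torch.optim.Adam(discriminator.parameters())
    num_epochs, batch_size = 50, 8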