Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Combining Generative and Discriminative Models for Hybrid Inference
Authors: Victor Garcia Satorras, Zeynep Akata, Max Welling
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our ideas to the Kalman filter, a Gaussian hidden Markov model for time sequences, and show, among other things, that our model can estimate the trajectory of a noisy chaotic Lorenz Attractor much more accurately than either the learned or graphical inference run in isolation. |
| Researcher Affiliation | Collaboration | Victor Garcia Satorras, UvA-Bosch Delta Lab, University of Amsterdam, Netherlands; Zeynep Akata, Cluster of Excellence ML, University of Tübingen, Germany; Max Welling, UvA-Bosch Delta Lab, University of Amsterdam, Netherlands |
| Pseudocode | No | No. The paper describes the model through equations and textual descriptions but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Available at: https://github.com/vgsatorras/hybrid-inference |
| Open Datasets | Yes | To demonstrate the generalizability of our Hybrid model to real world datasets, we use the Michigan NCLT [6] dataset which is collected by a segway robot moving around the University of Michigan's North Campus. |
| Dataset Splits | Yes | We sample two different motion trajectories from 50 to 100K time steps each, one for validation and the other for training. An additional 10K time steps trajectory is sampled for testing. |
| Hardware Specification | No | No. The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | No. The paper mentions software components like 'Adam optimizer', 'MLPs', and 'GRU', but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We set γ = 0.005 and use the Adam optimizer with a learning rate of 10⁻³. The number of inference iterations used in the Hybrid model, GNN-messages and GM-messages is N=50. fe and fdec are 2-layer MLPs with LeakyReLU and ReLU activations respectively. The number of features in the hidden layers of the GRU, fe and fdec is nf=48. |
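The paper's headline benchmark is trajectory estimation on a noisy chaotic Lorenz attractor. The sketch below generates such a benchmark signal: a ground-truth Lorenz trajectory plus a noisy observed version. The integration scheme (Euler), step size, noise level, and initial state are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz system with the classic chaotic
    # parameters; dt is a hypothetical choice, not from the paper.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def simulate(n_steps=1000, noise_std=0.5, seed=0):
    # Ground-truth trajectory plus Gaussian-corrupted observations,
    # mimicking the "noisy chaotic Lorenz attractor" setting; the
    # noise level and trajectory length here are assumptions.
    rng = np.random.default_rng(seed)
    traj = np.empty((n_steps, 3))
    state = np.array([1.0, 1.0, 1.0])
    for t in range(n_steps):
        state = lorenz_step(state)
        traj[t] = state
    obs = traj + noise_std * rng.normal(size=traj.shape)
    return traj, obs
```

An inference method is then scored by how closely it recovers `traj` from `obs`; the paper reports that the hybrid model does this more accurately than either the learned or the graphical-model inference alone.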
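The experiment-setup row names concrete hyperparameters (γ = 0.005, Adam with learning rate 10⁻³, N = 50 inference iterations, 2-layer MLPs for fe and fdec, hidden width nf = 48). A minimal sketch of such a 2-layer MLP follows; the random weights and the placement of the activation on both layers are hypothetical stand-ins, since the paper only states the activations used (LeakyReLU for fe, ReLU for fdec), not the exact layer layout.

```python
import numpy as np

# Constants quoted from the paper's setup.
NF = 48          # hidden feature width n_f
GAMMA = 0.005    # weighting term gamma
N_ITERS = 50     # inference iterations N
LR = 1e-3        # Adam learning rate

def leaky_relu(x, alpha=0.01):
    # alpha is an assumed default; the paper does not specify it.
    return np.where(x > 0, x, alpha * x)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_2layer(x, w1, b1, w2, b2, act):
    # A 2-layer MLP like f_e / f_dec; applying `act` after both
    # layers is an assumption about the architecture.
    return act(act(x @ w1 + b1) @ w2 + b2)
```

For example, `mlp_2layer(x, w1, b1, w2, b2, leaky_relu)` would play the role of fe and the `relu` variant that of fdec, with the weights learned via Adam at the quoted learning rate.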