SLIP: Learning to predict in unknown dynamical systems with long-term memory

Authors: Paria Rashidinejad, Jiantao Jiao, Stuart Russell

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations demonstrate that SLIP outperforms state-of-the-art methods in LDS prediction. Our theoretical and experimental results shed light on the conditions required for efficient probably approximately correct (PAC) learning of the Kalman filter from partially observed data. Section 7 (Experiments): We carry out experiments to evaluate the empirical performance of our provable method in three dynamical systems with long-term memory.
Researcher Affiliation | Academia | Paria Rashidinejad, Jiantao Jiao, Stuart Russell, EECS Department, University of California, Berkeley {paria.rashidinejad, jiantao, russell}@berkeley.edu
Pseudocode | Yes | Algorithm 1 presents pseudocode for the SLIP algorithm.
Open Source Code | No | The paper does not provide any explicit statement about releasing its source code, nor does it include a link to a code repository.
Open Datasets | No | The paper describes generating data from simulated systems (System 1, System 2, System 3) with defined parameters (e.g., “System 1 is a scalar LDS with A = B = D = 1, C = Q = R = 0.001, and x_t ∼ N(0, 2).”). It does not provide concrete access information (link, DOI, formal citation) for a publicly available or open dataset. (An illustrative simulation of System 1 appears in the sketch after this table.)
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing. It mentions “running each experiment independently 100 times” but not data splits.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | For all algorithms, we use k = 20 filters and run each experiment independently 100 times and present the average error with 99% confidence intervals.
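To make the quoted evaluation protocol concrete, the following is a minimal sketch, not the paper's implementation. It assumes a standard Gaussian state-space form for the quoted System 1 parameters and uses a known-parameter scalar Kalman filter as a stand-in predictor (SLIP itself learns to predict without access to the true parameters). Only the System 1 constants, the 100 independent runs, and the 99% confidence intervals come from the report above; all function names, the simulation convention, the choice of a t-interval over run-level errors, and the error metric are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Assumed System 1 constants from the quoted excerpt (state-space form is an assumption).
A, C, Q, R = 1.0, 0.001, 0.001, 0.001
X0_VAR = 2.0  # interpreting the quoted N(0, 2) as the initial-state variance (assumption)

def simulate_system1(T, rng):
    """Simulate x_{t+1} = A x_t + w_t, y_t = C x_t + v_t with Gaussian noise (assumed form)."""
    x = rng.normal(0.0, np.sqrt(X0_VAR))
    ys = np.empty(T)
    for t in range(T):
        ys[t] = C * x + rng.normal(0.0, np.sqrt(R))
        x = A * x + rng.normal(0.0, np.sqrt(Q))
    return ys

def kalman_predictions(ys):
    """One-step-ahead observation predictions from a scalar Kalman filter that
    knows the true parameters; used here only as a reference predictor."""
    x_est, P = 0.0, X0_VAR
    preds = np.empty(len(ys))
    for t, y in enumerate(ys):
        preds[t] = C * x_est                      # predict y_t from the current prior
        K = P * C / (C * C * P + R)               # Kalman gain
        x_est = x_est + K * (y - C * x_est)       # measurement update with y_t
        P = (1.0 - K * C) * P
        x_est, P = A * x_est, A * A * P + Q       # time update to the prior for t+1
    return preds

def evaluate(n_runs=100, T=1000, seed=0):
    """Average squared prediction error over independent runs, with a 99% CI."""
    rng = np.random.default_rng(seed)
    errors = np.empty(n_runs)
    for i in range(n_runs):
        ys = simulate_system1(T, rng)
        errors[i] = np.mean((ys - kalman_predictions(ys)) ** 2)
    mean = errors.mean()
    half = stats.t.ppf(0.995, df=n_runs - 1) * errors.std(ddof=1) / np.sqrt(n_runs)
    return mean, (mean - half, mean + half)

if __name__ == "__main__":
    mean_err, (lo, hi) = evaluate()
    print(f"average error: {mean_err:.3e}, 99% CI: [{lo:.3e}, {hi:.3e}]")
```

The t-distribution interval over per-run errors is one common way to form the 99% confidence intervals mentioned in the table; the paper does not specify its exact construction.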