Filtering with Abstract Particles

Authors: Jacob Steinhardt, Percy Liang

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, our method outperforms beam search and sequential Monte Carlo on both a text reconstruction task and a multiple object tracking task.
Researcher Affiliation | Academia | Jacob Steinhardt JSTEINHARDT@CS.STANFORD.EDU Percy Liang PLIANG@CS.STANFORD.EDU Stanford University, 353 Serra Street, Stanford, CA 94305 USA
Pseudocode | Yes | Algorithm 1 Abstract beam search algorithm. Inputs are the space X, a refinement function r, a fitting method Fit, and a beam size k. Generates a sequence of hierarchical decompositions, each of which defines a distribution over X.
Open Source Code | No | The paper does not provide an explicit statement about the availability of its source code or a link to a repository.
Open Datasets | Yes | For the text reconstruction task, we fit an n-gram model for the transitions using interpolated Kneser-Ney (Kneser & Ney, 1995) trained on The Complete Works of William Shakespeare (about 125,000 lines in total).
Dataset Splits | Yes | The first 115,000 lines were used to train the model and each of the next 5,000 lines were used as a development and test set, respectively.
Hardware Specification | Yes | Runtime was computed using a single core of a 3.4GHz machine with 32GB of RAM.
Software Dependencies | No | The paper does not specify any software libraries, frameworks, or their version numbers used in the implementation of the experiments.
Experiment Setup | Yes | We fit these by minimizing perplexity on the development set, and found that n = 8, λ = 0.9 was optimal.
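The Pseudocode row quotes only the header of Algorithm 1 (abstract beam search). The sketch below is a minimal illustration of how such a loop could be organized, assuming that the refinement function r splits an abstract state into a list of finer abstract states and that Fit returns a fitted distribution together with a score for a candidate decomposition; the concrete data structures, scoring rule, and refinement schedule are assumptions, not the authors' implementation.

```python
def abstract_beam_search(X, r, fit, k, num_rounds):
    """Sketch of an abstract beam search over hierarchical decompositions of X.

    Each candidate is a tuple of abstract states (subsets of X) covering X.
    Assumptions: r(a) returns a list of finer abstract states partitioning a,
    and fit(dec) returns a pair (distribution, score).
    """
    beam = [(X,)]                 # start from the coarsest decomposition, {X}
    history = [beam[0]]
    for _ in range(num_rounds):
        candidates = []
        for dec in beam:
            for i, a in enumerate(dec):
                pieces = tuple(r(a))          # refine one abstract state
                if len(pieces) <= 1:
                    continue                  # a cannot be refined further
                candidates.append(dec[:i] + pieces + dec[i + 1:])
        if not candidates:
            break
        # Keep the k decompositions whose fitted distributions score best.
        candidates.sort(key=lambda d: fit(d)[1], reverse=True)
        beam = candidates[:k]
        history.append(beam[0])
    return history                # one decomposition per round, coarse to fine
```

In this sketch each round refines exactly one abstract state per beam element; the paper's actual refinement and pruning schedule may differ.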
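The Open Datasets and Experiment Setup rows describe fitting an interpolated Kneser-Ney n-gram model and choosing n = 8, λ = 0.9 by minimizing perplexity on the development split. The sketch below shows one way such a sweep could look; train_kneser_ney and sequence_log_prob are hypothetical placeholders (the paper does not name an implementation), and the grid values are illustrative.

```python
import math

def dev_perplexity(model, dev_lines, lam):
    """Perplexity = exp(-average per-token log-probability)."""
    total_log_prob, num_tokens = 0.0, 0
    for line in dev_lines:
        tokens = line.split()
        # sequence_log_prob is a hypothetical helper: log p(tokens | model)
        # under interpolation weight lam.
        total_log_prob += sequence_log_prob(model, tokens, lam)
        num_tokens += len(tokens)
    return math.exp(-total_log_prob / max(num_tokens, 1))

def select_hyperparameters(train_lines, dev_lines,
                           orders=range(2, 10),
                           lambdas=(0.5, 0.7, 0.9, 0.95)):
    """Grid-search (n, λ) by minimizing development-set perplexity."""
    best = None
    for n in orders:
        # train_kneser_ney is a hypothetical helper that fits an interpolated
        # Kneser-Ney model of order n on the training lines.
        model = train_kneser_ney(train_lines, n)
        for lam in lambdas:
            ppl = dev_perplexity(model, dev_lines, lam)
            if best is None or ppl < best[0]:
                best = (ppl, n, lam)
    return best   # the paper reports n = 8, λ = 0.9 as optimal on its data
```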