Scalable approximate Bayesian inference for particle tracking data
Authors: Ruoxi Sun, Liam Paninski
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply the method to simulated and real data. We show that the method robustly performs approximate Bayesian inference on the observed data, and provides more accurate results than competing methods that output just a single best path. |
| Researcher Affiliation | Academia | Department of Biological Sciences; Departments of Statistics and Neuroscience; Grossman Center for the Statistics of Mind; Center for Theoretical Neuroscience; Columbia University. |
| Pseudocode | Yes | Algorithm 1 Conditional sampling network |
| Open Source Code | Yes | Code is available here. |
| Open Datasets | Yes | The data are TIR-FM imaged clathrin-coated pits in a BSC1 cell (Jaqaman et al., 2008). We trained the network on simulated data whose parameters (signal-to-noise ratio, particle density and speed, psf width, etc.) were coarsely matched to the real data; see the comparison video for details. |
| Dataset Splits | No | The paper mentions training and testing data but does not explicitly describe a validation set or specific splits for reproducibility. |
| Hardware Specification | No | The paper mentions training times ('on the order of hours') and discusses network architectures but does not specify any hardware details like GPU/CPU models or memory used for experiments. |
| Software Dependencies | No | We trained the network (using default learning rate settings in Keras) to minimize the binary cross-entropy between the target mask (zero except at s_t^i, or all zeros if all the particles in q_t were already sampled and no further particles should be added) and the network's output probability mask. |
| Experiment Setup | Yes | We trained the network (using default learning rate settings in Keras) to minimize the binary cross-entropy between the target mask (zero except at s_t^i, or all zeros if all the particles in q_t were already sampled and no further particles should be added) and the network's output probability mask. We use M = 2 throughout this paper. |
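
The quoted training objective (binary cross-entropy between a mostly-zero target mask and the network's output probability mask) can be sketched in plain NumPy. The mask size, target pixel location, and predicted probabilities below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mask_bce(target_mask, prob_mask, eps=1e-7):
    """Mean binary cross-entropy between a {0, 1} target mask and a
    predicted probability mask of the same shape."""
    p = np.clip(prob_mask, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(target_mask * np.log(p)
                           + (1.0 - target_mask) * np.log(1.0 - p))))

# Hypothetical 8x8 target mask: zero everywhere except at the pixel of
# the next sampled particle s_t^i (an all-zero mask would mean "stop
# adding particles", per the quoted description).
target = np.zeros((8, 8))
target[3, 5] = 1.0

# A predicted probability mask that concentrates mass on the target pixel.
pred = np.full((8, 8), 0.01)
pred[3, 5] = 0.9

loss = mask_bce(target, pred)
```

A confident, well-placed prediction yields a small loss; an uninformative uniform mask (all 0.5) yields a loss near log 2 ≈ 0.693, so minimizing this objective drives the output mask toward the target.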