JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes
Authors: Jonathan Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash Mansinghka
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that JUMP-means is competitive with or outperforms widely used MJP inference approaches in terms of both speed and reconstruction accuracy. |
| Researcher Affiliation | Academia | Jonathan H. Huggins* (JHUGGINS@MIT.EDU), Karthik Narasimhan* (KARTHIKN@MIT.EDU), Ardavan Saeedi* (ARDAVANS@MIT.EDU), Vikash K. Mansinghka (VKM@MIT.EDU); Computer Science and Artificial Intelligence Laboratory, MIT |
| Pseudocode | No | The paper describes algorithms in numbered step-by-step prose within sections titled 'Algorithm' (e.g., Section 3.3, 4.1), but does not present them in a structured pseudocode block format. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | For our experiments, we use a real-world dataset collected from a phase III clinical trial of a drug for MS (Mandel, 2010). We use data from the MIMIC database (Goldberger et al., 2000; Moody & Mark, 1996). |
| Dataset Splits | No | The paper mentions holding out data for testing but does not explicitly specify a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions implementation languages like 'Java' and 'Python' but does not provide specific version numbers for them or any other software dependencies. |
| Experiment Setup | Yes | We set the hyperparameters ξ, ξ_λ, and µ_λ equal to 1, 1, and 0.5, respectively (Section 5.1). The hyperparameters γ and ξ_1 are set to 5, while ζ, ξ, and ξ_2 are set to 0.005 (Section 5.2). |
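To make the quoted settings easier to reuse, here is a minimal sketch of how they might be collected into configuration dicts for an experiment script. The variable names (`xi`, `xi_lambda`, `mu_lambda`, `gamma`, `zeta`, `xi_1`, `xi_2`) simply transliterate the paper's symbols; the dict structure itself is an assumption, not code from the authors.

```python
# Hyperparameters quoted from Section 5.1 (MS clinical trial experiment).
section_5_1_config = {
    "xi": 1.0,         # ξ
    "xi_lambda": 1.0,  # ξ_λ
    "mu_lambda": 0.5,  # µ_λ
}

# Hyperparameters quoted from Section 5.2 (MIMIC experiment).
section_5_2_config = {
    "gamma": 5.0,   # γ
    "xi_1": 5.0,    # ξ_1
    "zeta": 0.005,  # ζ
    "xi": 0.005,    # ξ
    "xi_2": 0.005,  # ξ_2
}
```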