Learning Hybrid Models with Guarded Transitions

Authors: Pedro Santana, Spencer Lane, Eric Timmons, Brian Williams, Carlos Forster

AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments indicate that guarded PHA models can yield significant performance improvements when used by hybrid state estimators, particularly when diagnosing the true discrete mode of the system, without any noticeable impact on their real-time performance.
Researcher Affiliation | Academia | Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory (MERS), 32 Vassar St., Room 32-224, Cambridge, MA 02139, {psantana,slane,etimmons,williams}@mit.edu; Instituto Tecnológico de Aeronáutica, Pç. Mal. Eduardo Gomes, 50, Vl. das Acácias, 12228-900, São José dos Campos, SP, Brazil, forster@ita.br
Pseudocode | No | The paper describes the E-step and M-step of the EM algorithm in detail with mathematical equations, but it does not present them in a structured pseudocode or algorithm block format (a structural sketch follows the table).
Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the described methodology is available.
Open Datasets | No | The paper describes using data from a 'pedagogical switched RC circuit example' and 'two different target-tracking domains' (a lawnmower pattern and random Markovian transitions), but it provides no concrete access information (links, DOIs, repositories, or formal author/year citations) for these datasets, nor does it identify them as well-known public datasets.
Dataset Splits | No | The paper mentions running IMM '32 times, discarding the best and worst results,' but it does not specify any training, validation, or test dataset splits or cross-validation schemes (a sketch of this trimming step follows the table).
Hardware Specification | No | The paper does not mention any specific hardware used for running the experiments, such as GPU/CPU models, memory, or cloud computing instances with specifications.
Software Dependencies | No | The paper states, 'Our algorithm was implemented in Python and multi-class SVMs were trained using wrappers for LIBSVM (Chang and Lin 2011) in Scikit-learn (Pedregosa et al. 2011).' However, it does not provide version numbers for Python, LIBSVM, or Scikit-learn, which full reproducibility would require (a version-logging sketch follows the table).
Experiment Setup | Yes | 'We initialized EM with 40% misclassified modes and learned a PHA model using 1000 data points. The minimum and maximum output voltages were set to Vmin = 3.0 V and Vmax = 4.0 V, respectively. The input voltage was Vin = 10 V. Our SVMs were trained using linear feature vectors and were allowed slack with very high penalty for misclassified points. ... For each one of these domains, we ran IMM 32 times, discarding the best and worst results...'
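Since the paper presents its E-step and M-step only as equations, the following is a minimal structural sketch of how such an EM loop might be organized; it is not the authors' pseudocode. It assumes the E-step reassigns each data point to the mode the current guard prefers and the M-step refits a multi-class linear SVM as the guard classifier. The 40% label corruption, the 1000 data points, and the very high slack penalty are taken from the experiment setup quoted above; the synthetic 1-D data, the seed, and all function structure are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's training data: 1000 points whose true
# discrete mode is determined by a linear guard on a scalar feature.
X = rng.uniform(3.0, 4.0, size=(1000, 1))   # feature vectors
true_modes = (X[:, 0] > 3.5).astype(int)    # ground-truth discrete modes

# Initialize EM with 40% misclassified modes, as in the paper's setup.
modes = true_modes.copy()
flip = rng.choice(len(modes), size=int(0.4 * len(modes)), replace=False)
modes[flip] = 1 - modes[flip]

for iteration in range(20):
    # M-step (sketch): refit the guard as a linear SVM with a very high
    # penalty on slack, matching "linear feature vectors ... very high penalty".
    # The C value itself is an assumption; the paper gives no number.
    guard = SVC(kernel="linear", C=1e6)
    guard.fit(X, modes)

    # E-step (sketch): reassign each point to the mode the current guard prefers.
    new_modes = guard.predict(X)
    if np.array_equal(new_modes, modes):    # converged: labels stable
        break
    modes = new_modes

print(f"converged after {iteration + 1} iterations; "
      f"mode accuracy = {np.mean(modes == true_modes):.3f}")
```

In the paper's actual method, the E-step is carried out by a hybrid state estimator over both the continuous state and the discrete modes; the self-training loop above only mirrors the overall E/M structure described in the text.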
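The evaluation protocol quoted above averages over 32 IMM runs after discarding the single best and single worst result. A replication could implement that aggregation as below; the assumption that each run reduces to one scalar score is ours, since the quoted text does not fix the metric.

```python
import numpy as np

def trimmed_mean(scores):
    """Average run scores after dropping the single best and single worst,
    mirroring the paper's 'ran IMM 32 times, discarding the best and worst'."""
    s = np.sort(np.asarray(scores, dtype=float))
    return s[1:-1].mean()

# Hypothetical example: 32 scores from repeated runs of the estimator.
rng = np.random.default_rng(1)
scores = rng.normal(loc=0.9, scale=0.05, size=32)
print(f"trimmed mean over {len(scores)} runs: {trimmed_mean(scores):.4f}")
```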
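Finally, since no versions are pinned for Python, LIBSVM, or Scikit-learn, a replication attempt should at least record its own environment. A minimal way to do so (our suggestion, not something the paper provides):

```python
import sys
import sklearn

# Record the interpreter and library versions used for a replication run,
# since the paper itself does not pin any of them.
print("Python      :", sys.version.split()[0])
print("scikit-learn:", sklearn.__version__)
```

scikit-learn's built-in sklearn.show_versions() prints a fuller report, including platform and build details, and may be preferable for archiving alongside results.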