Interpretable Sequence Classification via Discrete Optimization

Authors: Maayan Shvo, Andrew C. Li, Rodrigo Toro Icarte, Sheila A. McIlraith

AAAI 2021, pp. 9647-9656 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments over a suite of goal recognition and behaviour classification datasets show our learned automata-based classifiers to have comparable test performance to LSTM-based classifiers, with the added advantage of being interpretable.
Researcher Affiliation | Academia | 1 Department of Computer Science, University of Toronto, Toronto, Canada; 2 Vector Institute, Toronto, Canada; 3 Schwartz Reisman Institute for Technology and Society, Toronto, Canada
Pseudocode | No | The paper describes the MILP model verbally and references a technical appendix for more details, but no pseudocode or algorithm block is present in the main text.
Open Source Code | Yes | The code for DISC is available online: https://github.com/andrewli77/DISC
Open Datasets | Yes | We considered three goal recognition domains: Crystal Island (Ha et al. 2011; Min et al. 2016)... ALFRED (Shridhar et al. 2020)... MIT Activity Recognition (MIT-AR) (Tapia, Intille, and Larson 2004)... StarCraft (Kantharaju, Ontañón, and Geib 2019)... and on two real-world malware datasets... (Bernardi et al. 2019).
Dataset Splits | Yes | The probabilities on the right-hand side are estimated using a held-out validation set.
Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies or versions (e.g., programming languages, libraries, or solvers with version numbers) are listed in the paper.
Experiment Setup | No | For detailed discussion of the data we use in experiments, background on Linear Temporal Logic, examples of learned automata classifiers, and details of our experimental setup, the reader is directed to the technical appendix associated with this work (Shvo et al. 2020).
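The Research Type row above quotes the paper's comparison of learned automata-based classifiers against LSTM baselines. As a rough illustration of what an automaton-based sequence classifier looks like at inference time, here is a minimal Python sketch; the alphabet, transition table, and class labels are invented for illustration, and this does not reproduce DISC's learned models or its MILP learning procedure.

```python
# Illustrative only: a hand-written DFA used as a sequence classifier.
# DISC learns such automata from data via a MILP model (not shown here);
# the symbols, transitions, and labels below are hypothetical.

from typing import Dict, List, Tuple

class DFAClassifier:
    def __init__(self,
                 transitions: Dict[Tuple[int, str], int],
                 state_to_class: Dict[int, str],
                 initial_state: int = 0):
        self.transitions = transitions          # (state, symbol) -> next state
        self.state_to_class = state_to_class    # reached state -> predicted class
        self.initial_state = initial_state

    def predict(self, sequence: List[str]) -> str:
        """Run the sequence through the automaton and classify by the state reached."""
        state = self.initial_state
        for symbol in sequence:
            # Unseen (state, symbol) pairs keep the current state in this sketch.
            state = self.transitions.get((state, symbol), state)
        return self.state_to_class[state]

# A toy two-class goal recognition example.
clf = DFAClassifier(
    transitions={(0, "open_door"): 1, (1, "pick_up_key"): 2, (1, "walk_away"): 0},
    state_to_class={0: "goal_B", 1: "goal_B", 2: "goal_A"},
)
print(clf.predict(["open_door", "pick_up_key"]))  # -> "goal_A"
print(clf.predict(["open_door", "walk_away"]))    # -> "goal_B"
```

Because the classifier is just a small transition table over readable symbols, its decisions can be traced state by state, which is the interpretability advantage the quoted result refers to.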