Instructable Intelligent Personal Agent

Authors: Amos Azaria, Jayant Krishnamurthy, Tom Mitchell

AAAI 2016

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A user study involving email tasks demonstrates that users voluntarily teach LIA new commands, and that these taught commands significantly reduce task completion time. |
| Researcher Affiliation | Collaboration | (1) Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213; (2) Allen Institute for Artificial Intelligence, Seattle, WA 98103 |
| Pseudocode | Yes | Pseudocode for lexicon induction is provided as Algorithm 1 (see the sketch after this table). |
| Open Source Code | No | The paper provides no statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | No | The paper states that "LIA's semantic parser has over 300 lexicon entries, 14 unary rules, and was trained using 150 training examples", but it does not state that this dataset is publicly available or provide a link or citation to it. |
| Dataset Splits | No | The paper mentions training data for the semantic parser ("trained using 150 training examples") and a user study, but it does not specify explicit training, validation, and test splits with percentages or counts needed to reproduce the model (see the illustrative split sketch after this table). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper describes LIA's software components (e.g., the CCG semantic parser and the back-end), but it does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | No | The "Experimental Setup" section details the user study methodology (tasks, subjects, questionnaires) but does not include hyperparameters or system-level training settings for the underlying machine learning model (e.g., learning rate or batch size for the semantic parser). |
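The paper presents lexicon induction only as pseudocode (Algorithm 1). As a reading aid, here is a minimal sketch of how a GENLEX-style induction step for a CCG semantic parser might look: candidate lexicon entries are generated by pairing word spans with fragments of the target logical form, then filtered to those that participate in a complete parse. All names here (`LexEntry`, `candidate_categories`, the `parse` callback) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of GENLEX-style lexicon induction for a CCG
# semantic parser, loosely in the spirit of the paper's Algorithm 1.
# This is NOT the authors' code; all names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, so entries can live in sets
class LexEntry:
    phrase: str        # surface word span, e.g. "forward this email"
    category: str      # CCG syntactic category, e.g. "S/NP"
    logical_form: str  # associated lambda-calculus fragment


def candidate_categories(logical_form):
    """Enumerate (category, sub-logical-form) pairs a word span could
    plausibly carry. A real system would decompose the target logical
    form; this stub simply pairs the whole form with category S."""
    return [("S", logical_form)]


def induce_lexicon(utterance, logical_form, parse):
    """Pair every word span of the utterance with every candidate
    category, then keep only entries used in a complete parse that
    derives the target logical form. `parse(tokens, entries)` is an
    assumed external parser returning objects with `.entries` and
    `.logical_form` attributes."""
    tokens = utterance.split()
    candidates = set()
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens) + 1):
            span = " ".join(tokens[i:j])
            for cat, lf in candidate_categories(logical_form):
                candidates.add(LexEntry(span, cat, lf))
    # Filter: an entry survives only if some parse built from the
    # candidate set uses it and yields the intended logical form.
    return {e for e in candidates
            if any(e in p.entries and p.logical_form == logical_form
                   for p in parse(tokens, candidates))}
```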
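Since the paper reports no dataset splits, the sketch below only illustrates the kind of split specification that would be needed to reproduce training of the semantic parser on its 150 examples. The 80/10/10 fractions and the fixed seed are assumptions for illustration, not values from the paper.

```python
# Hypothetical illustration of a split specification the paper omits:
# partitioning the 150 reported training examples. Fractions and seed
# are assumed, not taken from the paper.
import random


def split_examples(examples, seed=0, train_frac=0.8, val_frac=0.1):
    """Shuffle and partition examples into train/val/test subsets."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(train_frac * n)
    n_val = int(val_frac * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])


# e.g. 150 examples -> 120 train / 15 validation / 15 test
train, val, test = split_examples(list(range(150)))
```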