A Virtual Assistant to Help Dysphagia Patients Eat Safely at Home

Authors: Michael Freed, Brian Burns, Aaron Heller, Daniel Sanchez, Sharon Beaumont-Bowman

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have developed an early prototype for an intelligent assistant that monitors adherence and provides feedback to the patient, and tested monitoring precision with healthy subjects for one strategy called a chin tuck. Results indicate that adaptations of current-generation machine vision and personal assistant technologies can effectively monitor chin tuck adherence, and suggest the feasibility of a more general assistant that encourages adherence to a range of safe eating strategies. Pilot data (n=5) showed an RMS estimation error of 3.6 degrees (a worked sketch of the RMS computation follows the table).
Researcher Affiliation | Collaboration | SRI International, Menlo Park, California; Brooklyn College, Brooklyn, New York
Pseudocode | No | The paper describes the prototype's logic in prose but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is open-sourced or publicly available.
Open Datasets | No | The paper mentions 'Pilot data (n=5)' but does not provide access information (link, DOI, repository, or citation to a public dataset) for this data or any other dataset used.
Dataset Splits | No | The paper mentions 'Pilot data (n=5)' but does not specify training, validation, or test splits. The healthy subjects served only as test participants; no explicit percentages, counts, or split methods are given.
Hardware Specification | No | The paper mentions 'running on a standard, camera-equipped laptop' and 'widely available consumer electronics hardware' but does not provide specific hardware details such as CPU/GPU models, memory, or other specifications needed for reproducibility.
Software Dependencies | No | The paper references algorithms (e.g., Viola and Jones [2004]; Sagonas et al. [2013]) but does not list specific software dependencies or libraries with version numbers that would be needed to replicate the experiment.
Experiment Setup | No | The paper describes the functional behavior of the prototype (e.g., it 'detects a face within an estimated threshold distance' and 'monitors for a large, downward rotation'), but it does not provide specific experimental setup details such as hyperparameter values, model initialization, or detailed training configurations required for reproduction (illustrative sketches of these detection steps follow the table).
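For reference, the reported 3.6-degree figure is a root-mean-square (RMS) error over paired estimated and ground-truth angles. A minimal sketch of that computation is below; the angle values are hypothetical, since the paper's pilot measurements are not published:

    import numpy as np

    # Hypothetical per-trial chin-tuck angles in degrees; the paper's
    # pilot data (n=5) are not published, so these values are purely
    # illustrative.
    estimated    = np.array([32.1, 28.4, 41.0, 35.5, 30.2])
    ground_truth = np.array([35.0, 30.0, 38.0, 33.0, 27.0])

    # RMS estimation error: square root of the mean squared difference.
    rms_error = np.sqrt(np.mean((estimated - ground_truth) ** 2))
    print(f"RMS estimation error: {rms_error:.1f} degrees")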
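The paper cites Viola and Jones [2004] for face detection but names no implementation. Below is a minimal sketch of the described "face within an estimated threshold distance" check, assuming OpenCV's Haar-cascade detector (the standard Viola-Jones implementation) and a pinhole-camera distance heuristic; the face width, focal length, and range threshold are illustrative assumptions, not the authors' configuration:

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    FACE_WIDTH_CM = 14.0     # assumed average real face width
    FOCAL_LENGTH_PX = 600.0  # assumed webcam focal length in pixels
    MAX_DISTANCE_CM = 90.0   # hypothetical "eating range" threshold

    def face_within_range(gray_frame):
        """Return True if a detected face is closer than the threshold.

        Distance follows the pinhole relation:
        distance = focal_length * real_width / pixel_width.
        """
        faces = cascade.detectMultiScale(
            gray_frame, scaleFactor=1.1, minNeighbors=5)
        for (_x, _y, w, _h) in faces:
            distance_cm = FOCAL_LENGTH_PX * FACE_WIDTH_CM / w
            if distance_cm <= MAX_DISTANCE_CM:
                return True
        return False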
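The paper also states that the prototype "monitors for a large, downward rotation" of the head. A common way to obtain head pitch from a single webcam is to fit a generic 3D face model to 2D facial landmarks (e.g., from a detector trained on the 300-W data of Sagonas et al. [2013]) with OpenCV's solvePnP; the model points, the intrinsics approximation, and the 30-degree threshold below are assumptions, not the authors' method:

    import numpy as np
    import cv2

    # Generic 3D reference points for six facial landmarks (a textbook
    # approximation in millimetres, not a model from the paper).
    MODEL_POINTS = np.array([
        (0.0,    0.0,    0.0),     # nose tip
        (0.0,   -330.0, -65.0),    # chin
        (-225.0, 170.0, -135.0),   # left eye, outer corner
        (225.0,  170.0, -135.0),   # right eye, outer corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0,  -150.0, -125.0),  # right mouth corner
    ], dtype=np.float64)

    CHIN_TUCK_DELTA_DEG = 30.0  # hypothetical "large downward rotation"

    def head_pitch_degrees(image_points, frame_w, frame_h):
        """Estimate head pitch from six 2D landmarks (pixels), ordered
        like MODEL_POINTS, e.g. from a 300-W-trained landmark detector."""
        # Rough intrinsics for an uncalibrated webcam: focal length taken
        # as the frame width, principal point at the frame centre.
        cam = np.array([[frame_w, 0,       frame_w / 2.0],
                        [0,       frame_w, frame_h / 2.0],
                        [0,       0,       1.0]], dtype=np.float64)
        ok, rvec, _tvec = cv2.solvePnP(
            MODEL_POINTS, np.asarray(image_points, dtype=np.float64),
            cam, None)
        if not ok:
            return None
        rot, _ = cv2.Rodrigues(rvec)
        angles, *_ = cv2.RQDecomp3x3(rot)  # Euler angles in degrees
        return angles[0]  # rotation about the camera x-axis (nodding)

    def is_chin_tuck(current_pitch, baseline_pitch):
        """Flag a chin tuck once pitch departs from the upright baseline
        by more than the threshold; abs() sidesteps the Euler sign
        convention at the cost of also firing on a large upward tilt."""
        if current_pitch is None or baseline_pitch is None:
            return False
        return abs(current_pitch - baseline_pitch) >= CHIN_TUCK_DELTA_DEG

Comparing against a per-user upright baseline, rather than an absolute pitch value, would absorb camera tilt and person-to-person differences; whether the prototype does this is not stated in the paper.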