Demo: Assisting Visually Impaired People Navigate Indoors

Authors: J. Pablo Muñoz, Bing Li, Xuejian Rong, Jizhong Xiao, Yingli Tian, Aries Arditi

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this demo, we show a system that uses state-of-the-art technology to assist visually impaired people navigate indoors. Our system takes advantage of spatial representations from CAD files or floor plan images to extract valuable information that can later be used to improve navigation and human-computer interaction. Using depth information, our system is capable of detecting obstacles and guiding the user to avoid them. (A depth-based obstacle-detection sketch in this spirit follows the table.)
Researcher Affiliation | Collaboration | J. Pablo Muñoz (City University of New York, NY; jmunoz2@gradcenter.cuny.edu); Bing Li, Xuejian Rong, Jizhong Xiao, Yingli Tian (City College of New York, NY; {bli, jxiao, xrong, ytian}@ccny.cuny.edu); Aries Arditi (Visibility Metrics, Chappaqua, NY; arditi@visibilitymetrics.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it provides mathematical equations but no step-by-step algorithm descriptions.
Open Source Code | No | "Videos of our system can be found at: http://www.assistiverobot.org/ain/" The paper links to project videos but does not state that source code for the methodology is available, nor does it link to a code repository.
Open Datasets | No | The paper refers to using data from "CAD files, or floor plan images" and from Google's Project Tango handheld device, which produces an Area Description File (ADF). However, it does not provide concrete access information (link, DOI, or formal citation) for any publicly available dataset.
Dataset Splits | No | The paper does not specify dataset split percentages or sample counts for training, validation, or testing.
Hardware Specification | Yes | "Our system uses the localization capabilities of Google's Project Tango handheld device."
Software Dependencies | No | The paper mentions using "well-known Artificial Intelligence algorithms, e.g., A*" and the voice-recognition, gesture-detection, and text-to-speech capabilities of handheld devices, but it does not provide version numbers for any software dependency. (An illustrative A* sketch follows the table.)
Experiment Setup | No | The paper describes the functional components and logic of the system but does not provide specific experimental setup details such as hyperparameter values or training configurations.
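
The paper releases no code, so the abstract's depth-based obstacle detection can only be illustrated, not reproduced. Below is a minimal Python sketch of one plausible reading: threshold a metric depth frame and steer toward the clearest third of the image. The threshold values, function names, and the left/centre/right heuristic are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def detect_obstacles(depth_m, near_m=1.5, invalid=0.0):
    """Flag pixels closer than `near_m` metres as obstacles.

    `depth_m` is an HxW array of metric depths; zeros mark
    invalid readings, as is common for consumer depth sensors.
    """
    valid = depth_m != invalid
    return valid & (depth_m < near_m)

def steering_hint(obstacle_mask):
    """Compare obstacle density in the left/centre/right thirds
    of the frame and suggest the clearest direction."""
    thirds = np.array_split(obstacle_mask, 3, axis=1)
    density = [t.mean() for t in thirds]
    if density[1] < 0.05:  # centre of the frame is essentially clear
        return "forward"
    return "left" if density[0] < density[2] else "right"

# Toy frame: a wall 1 m away covering the right half of a 3 m scene.
frame = np.full((120, 160), 3.0)
frame[:, 80:] = 1.0
print(steering_hint(detect_obstacles(frame)))  # -> "left"
```

A real system would smooth the mask over several frames before changing the spoken guidance; this sketch decides per frame for brevity.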
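
The paper names A* as its path-planning algorithm but gives no implementation or dependencies. The sketch below is plain grid-based A* with an admissible Manhattan heuristic over a 4-connected occupancy grid; the grid encoding and helper names are illustrative and not drawn from the paper.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid.

    `grid[r][c]` is True where a cell is traversable. Returns the
    path as a list of (row, col) cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]        # (f = g + h, g, cell)
    came_from, g = {start: None}, {start: 0}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cur == goal:                      # reconstruct path backwards
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]]):
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt))
    return None

# Toy floor plan: '.' is free space, '#' is a wall.
plan = ["....",
        ".##.",
        ".#..",
        "...."]
grid = [[ch == "." for ch in row] for row in plan]
print(astar(grid, (0, 0), (2, 3)))
```

In the system described by the paper, such a grid would presumably be derived from the CAD file or floor plan image, but the paper does not specify that conversion.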