Further Results on Predicting Cognitive Abilities for Adaptive Visualizations

Authors: Cristina Conati, Sébastien Lallé, Md. Abed Rahman, Dereck Toker

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also evaluate how quality of eye tracking data impacts prediction accuracy. There are promising results on the value of eye tracking data for predicting a variety of user states and abilities in user modeling (e.g., [Bednarik et al. 2013; Kardan and Conati 2012; Jaques et al. 2014; Ooms et al. 2014; Gingerich and Conati 2015; Lallé et al. 2016]).
Researcher Affiliation | Academia | Cristina Conati, Sébastien Lallé, Md. Abed Rahman, Dereck Toker, Department of Computer Science, The University of British Columbia, Vancouver, B.C., Canada. {conati, lalles, abed90, dtoker}@cs.ubc.ca
Pseudocode | No | The paper describes the classification experiments and algorithms used but does not provide any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper mentions using 'EMDAT (https://github.com/ATUAV/EMDAT)', a third-party eye tracking data analysis toolkit, but it does not provide source code for its own described methodology or implementations.
Open Datasets | Yes | The data used in this paper was collected during a user study (mentioned in the related work and fully described in [Lallé et al. 2017]) that investigated the impact of individual differences on user experience and gaze behavior with Metro Quest (MQ).
Dataset Splits | Yes | The binary labels were generated by dividing participants into High and Low groups for each characteristic (e.g., High and Low perceptual speed), based on a median split on the test scores from the study. We compared against a majority-class baseline two classification algorithms available in the CARET package [Kuhn 2008] in R: Boosted logistic regression (LB); and Random forest (RF). Classifier performance is measured by their accuracy (proportion of correct predictions). We focus on these algorithms because in previous work they produced good results for predicting various user states during visualization processing, e.g., [Steichen et al. 2014; Lallé 2016]. For all combinations of user characteristics (5), window lengths (10) and validity thresholds (3), LB, RF and the appropriate baselines were trained and evaluated in 10-fold cross validation over users, namely at each fold users in the test set do not appear in the training set.
Hardware Specification | No | The paper mentions the 'Tobii T120' eye tracker used for data collection, but it does not specify the hardware (e.g., CPU, GPU models, memory) used for running the classification experiments or model training.
Software Dependencies | No | The paper mentions using 'EMDAT' and the 'CARET package [Kuhn 2008] in R' for analysis and classification, but it does not provide specific version numbers for these software components or the R environment.
Experiment Setup | Yes | For all combinations of user characteristics (5), window lengths (10) and validity thresholds (3), LB, RF and the appropriate baselines were trained and evaluated in 10-fold cross validation over users, namely at each fold users in the test set do not appear in the training set. The process was repeated 25 times (runs) to strengthen the stability and reproducibility of the results.
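The splitting scheme quoted in the Dataset Splits row combines two steps: a median split on test scores to produce High/Low labels, and 10-fold cross-validation over users so that no user appears in both the training and test sets. A minimal sketch of both steps, assuming plain Python with illustrative function names (`median_split_labels`, `user_folds`) that are not from the paper:

```python
import statistics

def median_split_labels(scores):
    """Binarize per-user test scores into High/Low groups via a median split,
    as the paper does for each user characteristic."""
    med = statistics.median(scores.values())
    return {user: ("High" if s > med else "Low") for user, s in scores.items()}

def user_folds(users, k=10):
    """Partition users into k folds for user-level cross-validation:
    at each fold, test-set users never appear in the training set."""
    users = sorted(users)
    test_sets = [users[i::k] for i in range(k)]
    return [(sorted(set(users) - set(test)), test) for test in test_sets]
```

In use, each fold would train a classifier (LB or RF in the paper) only on gaze-data windows from the training users and evaluate accuracy on the held-out users' windows.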
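The Experiment Setup row describes a full factorial sweep: 5 user characteristics × 10 window lengths × 3 validity thresholds, with each cell evaluated over 25 repeated runs. That protocol can be sketched as a nested grid; the factor values below are placeholders (the paper's exact levels are not reproduced here), and `evaluate` stands in for one cross-validated train/test cycle:

```python
import itertools
import statistics

# Placeholder factor levels: 5 characteristics, 10 window lengths,
# 3 validity thresholds. The actual names and values come from the study.
CHARACTERISTICS = [f"characteristic_{i}" for i in range(5)]
WINDOW_LENGTHS = list(range(1, 11))
VALIDITY_THRESHOLDS = [0.5, 0.7, 0.9]

def sweep(evaluate, n_runs=25):
    """Evaluate every (characteristic, window, threshold) cell n_runs times
    and average accuracy per cell, mirroring the 25-run repetition used to
    strengthen stability of the results."""
    results = {}
    for cell in itertools.product(CHARACTERISTICS, WINDOW_LENGTHS,
                                  VALIDITY_THRESHOLDS):
        accuracies = [evaluate(*cell, seed=run) for run in range(n_runs)]
        results[cell] = statistics.mean(accuracies)
    return results
```

Each `evaluate` call would itself wrap the 10-fold user-level cross-validation, so the full protocol is 5 × 10 × 3 × 25 × 10 train/test cycles per classifier.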