Scanpath Complexity: Modeling Reading Effort Using Gaze Information

Authors: Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, Pushpak Bhattacharyya

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate the effectiveness of our scanpath complexity measure by showing that its correlation with different measures of lexical and syntactic complexity, as well as standard readability metrics, is better than popular baseline measures based on fixation alone. Our experiment setup is detailed in Section 5. Section 6 is devoted to detailed evaluation of scanpath complexity." |
| Researcher Affiliation | Collaboration | Indian Institute of Technology Bombay, India; IITB-Monash Research Academy, India; IBM Research, India. {abhijitmishra, diptesh, pb}@cse.iitb.ac.in; {senagar3, kuntadey}@in.ibm.com |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper states: "The dataset can be freely downloaded7 for academic use," with footnote 7 pointing to http://www.cfilt.iitb.ac.in/cognitive-nlp. This refers to the dataset, not to source code for the methodology. No other statements regarding source-code availability were found. |
| Open Datasets | Yes | "We, hence, create an eye-movement dataset which we briefly describe below. ... The dataset can be freely downloaded7 for academic use." (Footnote 7: http://www.cfilt.iitb.ac.in/cognitive-nlp) |
| Dataset Splits | Yes | "We also perform a 10-fold cross validation ... to check how effective our complete set of gaze attributes are as opposed to basic fixational and saccadic attributes alone." |
| Hardware Specification | No | The paper mentions an "SR-Research Eyelink-1000 Plus eyetracker" for data collection, but does not provide any details about the computational hardware (e.g., CPU or GPU models, memory) used to run the experiments or models. |
| Software Dependencies | No | The paper states: "Scanpath attributes are calculated using Python NUMPY and SCIPY libraries," and "textual properties are computed using Python NLTK API (Bird 2006), Stanford Core NLP tool (Manning et al. 2014) and tools facilitated by authors of referred papers." However, no version numbers are provided for Python, NumPy, SciPy, NLTK, or Stanford CoreNLP. |
| Experiment Setup | Yes | In Section 5.2, "Choice of NLL Model Parameters," the paper states: "we fix the value of μr and μp to be 8 and 13 respectively. The shape parameters σp1, σp2, σr1 and σr2 (equation 4) are empirically set to 22, 18, 3, 13 respectively by trial and error, plotting the distribution. Probability of regression (1 − ψ) is kept as 0.08..." It also notes: "Scanpath attributes are also normalized for computational suitability ... by scaling them down to a range of [0,1]." |
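The paper reports that scanpath attributes are "scaled down to a range of [0,1]" but does not specify the method. A minimal sketch assuming ordinary min-max scaling (the function name and the min-max choice are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def scale_to_unit_interval(values):
    """Min-max scale a 1-D array of attribute values to [0, 1].

    Assumption: the paper only says attributes are scaled to [0, 1];
    min-max scaling is one common way to achieve that.
    """
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    if hi == lo:
        # Constant attribute: map everything to 0 to avoid division by zero.
        return np.zeros_like(values)
    return (values - lo) / (hi - lo)

# Example: fixation durations (ms) scaled to [0, 1]
print(scale_to_unit_interval([2, 4, 6]))  # → [0.  0.5 1. ]
```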
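Equation 4 of the paper is not reproduced in this report, so the exact form of the NLL model is unknown. The reported parameterization (a mean plus two shape parameters each for progressions and regressions, and a regression probability 1 − ψ = 0.08) is consistent with a two-piece (asymmetric) Gaussian over saccade distance; the sketch below is a guess under that assumption, using only the parameter values quoted above, and should not be read as the paper's actual equation:

```python
import numpy as np

def two_piece_gaussian(d, mu, sigma_left, sigma_right):
    """Unnormalized two-piece Gaussian: different spread on each side of mu."""
    sigma = np.where(np.asarray(d) < mu, sigma_left, sigma_right)
    return np.exp(-0.5 * ((d - mu) / sigma) ** 2)

def saccade_likelihood(d, progression=True):
    """Likelihood of a saccade of distance d (assumed form, see lead-in).

    Parameter values are those reported in Section 5.2 of the paper:
    mu_p = 13, (sigma_p1, sigma_p2) = (22, 18) for progressions;
    mu_r = 8,  (sigma_r1, sigma_r2) = (3, 13)  for regressions;
    P(regression) = 1 - psi = 0.08.
    """
    psi = 0.92
    if progression:
        return psi * two_piece_gaussian(d, mu=13, sigma_left=22, sigma_right=18)
    return (1 - psi) * two_piece_gaussian(d, mu=8, sigma_left=3, sigma_right=13)

def saccade_nll(d, progression=True):
    """Negative log-likelihood: higher for less typical saccade distances."""
    return -np.log(saccade_likelihood(d, progression))
```

Under this sketch, a progression of the modal distance (d = 13) has likelihood ψ = 0.92, and the NLL grows as the saccade distance departs from the mode.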