Detecting Student Emotions in Computer-Enabled Classrooms
Authors: Nigel Bosch, Sidney K. D’Mello, Ryan S. Baker, Jaclyn Ocumpaugh, Valerie Shute, Matthew Ventura, Lubin Wang, Weinan Zhao
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use computer vision, learning analytics, and machine learning to detect students' affect in the real-world environment of a school computer lab that contained as many as thirty students at a time. Students moved around, gestured, and talked to each other, making the task quite difficult. Despite these challenges, we were moderately successful at detecting boredom, confusion, delight, frustration, and engaged concentration in a manner that generalized across students, time, and demographics. Our model was applicable 98% of the time despite operating on noisy real-world data. |
| Researcher Affiliation | Academia | Nigel Bosch, Sidney K. D'Mello, University of Notre Dame, Notre Dame, IN, pbosch1@nd.edu, sdmello@nd.edu; Ryan S. Baker, Jaclyn Ocumpaugh, Teachers College, Columbia University, New York, NY, rsb2162@tc.columbia.edu, jo2424@tc.columbia.edu; Valerie Shute, Matthew Ventura, Lubin Wang, Weinan Zhao, Florida State University, Tallahassee, FL, {vshute, mventura, lw10e}@fsu.edu, weinan.zhao@gmail.com |
| Pseudocode | No | The paper describes its methods but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code or a link to a repository for the methodology described. |
| Open Datasets | No | The paper describes collecting its own dataset from student interactions with Physics Playground and observations using BROMP, but does not provide concrete access information (link, DOI, repository) for this dataset to be publicly available. |
| Dataset Splits | Yes | Models were cross-validated at the student level. Data from 66% of randomly-chosen students were used to train each classifier and the remaining students' data were used to test its performance. Each model was trained and tested over 150 iterations. |
| Hardware Specification | No | The paper mentions 'inexpensive webcams' used for data collection but does not specify any hardware details (like CPU, GPU models, or memory) used for running the experiments or training the models. |
| Software Dependencies | No | The paper mentions using 'FACET' and 'WEKA' for model building, but does not provide specific version numbers for these software components or any other libraries. |
| Experiment Setup | Yes | Table 1 provides details such as 'No. Features' and 'Window Size (secs)' for each classification model. The text also mentions optimizing parameters like window size and features, using specific classifiers (C4.5 trees and Bayesian classifiers), and applying RELIEF-F for feature selection. |
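The student-level cross-validation scheme described above (66% of students randomly selected for training, the rest for testing, repeated over 150 iterations) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper's actual classifiers (C4.5 trees and Bayesian classifiers built in WEKA, with RELIEF-F feature selection) are stood in for by a caller-supplied `evaluate` function, and the `(student_id, instance)` tuple layout is an assumption for the sketch.

```python
import random
from statistics import mean

def student_level_cv(instances, evaluate, n_iterations=150,
                     train_fraction=0.66, seed=0):
    """Student-level cross-validation: whole students are held out,
    so no student contributes data to both train and test sets.

    instances: iterable of (student_id, instance) tuples.
    evaluate:  callable(train_instances, test_instances) -> score.
    Returns the mean score over all iterations.
    """
    rng = random.Random(seed)
    students = sorted({sid for sid, _ in instances})
    scores = []
    for _ in range(n_iterations):
        shuffled = students[:]
        rng.shuffle(shuffled)
        # Split by student, not by instance, to avoid leakage.
        n_train = int(len(shuffled) * train_fraction)
        train_ids = set(shuffled[:n_train])
        train = [x for sid, x in instances if sid in train_ids]
        test = [x for sid, x in instances if sid not in train_ids]
        scores.append(evaluate(train, test))
    return mean(scores)
```

Splitting by student rather than by instance is the key design choice: it tests whether the affect detectors generalize to unseen students, which is the claim the paper evaluates.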