Natural Language Processing for Enhancing Teaching and Learning

Authors: Diane Litman

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As shown in Figure 2, research in applying natural language processing to education typically follows an iterative lifecycle. Technological innovation is first motivated by and later addresses societal need. Technological innovation similarly is first informed by and later contributes to educationally-relevant theories and data. Starting at the upper right of the figure, a research problem in the area of NLP for educational applications is usually inspired by a real-world student or teacher need. For example, given the enormous student/instructor ratio in MOOCs, it is difficult for an instructor to read all the posts in a MOOC's discussion forums; can NLP instead identify the posts that require an instructor's intervention? Next, progressing to the bottom of the figure, constraints on solutions to the problem are formulated by taking into account relevant theory or data-driven findings from the literature. For example, even before MOOCs, there was a pedagogical literature regarding instructor intervention. Finally, progressing to the upper left of the figure, an NLP-based technology is designed, implemented, and evaluated. Based on an error analysis, the cycle likely iterates. A series of experimental evaluations demonstrated that our technologies for adapting to student uncertainty over and above answer correctness (Forbes-Riley and Litman 2011), as well as further adapting to student disengagement over and above uncertainty (Forbes-Riley and Litman 2012), could improve student learning and other measures of tutorial dialogue system performance. (An illustrative sketch of a post-triage classifier appears after the table.)
Researcher Affiliation | Academia | Diane Litman, Department of Computer Science & Learning Research and Development Center & Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15260
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodologies it describes, nor does it state that the code will be made available.
Open Datasets | Yes | Recent shared academic tasks have included student response analysis [6] (Dzikovska et al. 2013), grammatical error detection [7] (Ng et al. 2014), and prediction of MOOC attrition from discussion forums [8] (Rosé and Siemens 2014). There have also been highly visible competitions sponsored by the Hewlett Foundation in the areas of essay [9] and short-answer response [10] scoring. (Footnotes: [6] https://www.cs.york.ac.uk/semeval-2013/task7/; [7] http://www.comp.nus.edu.sg/~nlp/conll14st.html; [8] http://emnlp2014.org/workshops/MOOC/call.html; [9] https://www.kaggle.com/c/asap-aes; [10] https://www.kaggle.com/c/asap-sas) (See the scoring-metric sketch after the table.)
Dataset Splits | No | The paper mentions "train", "validation", and "test" only in the context of general machine learning concepts and evaluations by others (e.g., "NLP tools have been trained on professionally written texts"; "The development of pedagogically-oriented dialogue systems has thus generated many interesting research challenges"), but it does not provide the specific train/validation/test dataset splits needed to reproduce any experiment described in this paper. (A generic sketch of a reproducible split appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | The paper discusses various NLP tools and systems in general terms but does not provide the specific software dependencies (e.g., library names with version numbers) needed to replicate any experiment.
Experiment Setup | No | The paper discusses general approaches and challenges in NLP for education but does not provide specific experimental setup details, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or system-level training settings.
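
The forum-triage question raised in the Research Type row (can NLP identify which MOOC posts need an instructor's attention?) is, at its simplest, a binary text-classification problem. The following is a minimal sketch under that framing, not the approach described in the paper; the TF-IDF plus logistic regression pipeline and the toy posts/labels are illustrative assumptions.

```python
# Minimal sketch: flag MOOC forum posts that may need instructor intervention.
# This is NOT the method from the paper; the pipeline and toy data are
# illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: post text paired with a binary "needs intervention" label.
posts = [
    "I still don't understand the grading rubric for assignment 2, please help.",
    "Great lecture this week, thanks!",
    "The submission link for quiz 3 appears to be broken.",
    "Looking forward to the next module.",
]
needs_intervention = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, needs_intervention)

# Rank new posts by the predicted probability that an instructor should respond.
new_posts = ["Is anyone else unable to open the week 4 video?"]
print(model.predict_proba(new_posts)[:, 1])
```

In practice such a classifier would be trained on thousands of labeled posts and would draw on richer features (e.g., thread context and timing), but the sketch shows the basic shape of the task.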
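The Hewlett Foundation competitions cited in the Open Datasets row (the Kaggle ASAP essay and short-answer scoring tasks) evaluated submissions with quadratic weighted kappa (QWK), an agreement measure between human and machine scores. The sketch below is a from-scratch NumPy implementation of that metric; the example score vectors are invented for illustration.

```python
# Minimal sketch: quadratic weighted kappa (QWK), the agreement metric used for
# the Kaggle ASAP scoring competitions. Example scores are made up.
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Agreement between two integer score vectors, penalizing large disagreements."""
    rater_a = np.asarray(rater_a, dtype=int)
    rater_b = np.asarray(rater_b, dtype=int)
    n = max_rating - min_rating + 1

    # Observed score-pair counts.
    observed = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        observed[a - min_rating, b - min_rating] += 1

    # Expected counts if the two raters scored independently.
    hist_a = np.bincount(rater_a - min_rating, minlength=n)
    hist_b = np.bincount(rater_b - min_rating, minlength=n)
    expected = np.outer(hist_a, hist_b) / len(rater_a)

    # Quadratic disagreement weights.
    i, j = np.indices((n, n))
    weights = ((i - j) ** 2) / ((n - 1) ** 2)

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

human_scores = [1, 2, 3, 4, 3]
model_scores = [1, 2, 2, 4, 3]
print(quadratic_weighted_kappa(human_scores, model_scores, min_rating=1, max_rating=4))
```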
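For the Dataset Splits row: what a reproducible experiment typically records is the exact train/validation/test partition, or at least the split proportions and a fixed random seed. A generic sketch (not drawn from the paper) is below, assuming a simple 80/10/10 stratified split over hypothetical labeled examples.

```python
# Generic sketch of a reproducible train/validation/test split (not from the paper).
from sklearn.model_selection import train_test_split

examples = [f"example {i}" for i in range(100)]
labels = [i % 2 for i in range(100)]

# 80/20 split, then the held-out 20% is split in half, all with a fixed seed
# so the partition can be reproduced exactly.
train_x, rest_x, train_y, rest_y = train_test_split(
    examples, labels, test_size=0.2, random_state=42, stratify=labels)
val_x, test_x, val_y, test_y = train_test_split(
    rest_x, rest_y, test_size=0.5, random_state=42, stratify=rest_y)

print(len(train_x), len(val_x), len(test_x))  # 80 10 10
```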