Deep Knowledge Tracing

Authors: Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, Jascha Sohl-Dickstein

NeurIPS 2015

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
We test the ability to predict student performance on three datasets: simulated data, Khan Academy data, and the Assistments benchmark dataset. On each dataset we measure area under the curve (AUC). For the non-simulated data we evaluate our results using 5-fold cross validation, and in all cases hyper-parameters are learned on training data. We compare the results of Deep Knowledge Tracing to standard BKT and, when possible, to optimal variations of BKT. (A minimal sketch of this evaluation protocol follows the table.)

Researcher Affiliation | Collaboration
Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas Guibas, Jascha Sohl-Dickstein. Stanford University, Khan Academy, Google. {piech,jbassen}@cs.stanford.edu, jascha@stanford.edu

Pseudocode | No
The paper provides mathematical equations for RNNs and LSTMs but does not include any clearly labeled pseudocode or algorithm blocks. (The paper's core equations are reproduced after the table.)

Open Source Code | Yes
To facilitate research in DKTs we have published our code and relevant preprocessed data. https://github.com/chrispiech/DeepKnowledgeTracing

Open Datasets | Yes
Benchmark Dataset: In order to understand how our model compared to other models we evaluated models on the Assistments 2009-2010 skill builder public benchmark dataset. Assistments is an online tutor that simultaneously teaches and assesses students in grade school mathematics. It is, to the best of our knowledge, the largest publicly available knowledge tracing dataset [11]. https://sites.google.com/site/assistmentsdata/home/assistment-2009-2010-data

Dataset Splits | Yes
For the non-simulated data we evaluate our results using 5-fold cross validation and in all cases hyper-parameters are learned on training data.

Hardware Specification | No
The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.

Software Dependencies | No
The paper does not provide version numbers for the software dependencies or libraries used in the experiments. It mentions RNNs and LSTMs but names no specific frameworks or versions.

Experiment Setup | Yes
For all models in this paper we consistently used hidden dimensionality of 200 and a mini-batch size of 100. To prevent overfitting during training, dropout was applied to h_t when computing the readout y_t, but not when computing the next hidden state h_{t+1}. (A minimal sketch of this setup follows the table.)
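
As a reference for the evaluation protocol quoted under Research Type and Dataset Splits, here is a minimal sketch of 5-fold cross-validation scored by AUC. The classifier and the synthetic data below are placeholders, not the paper's DKT model or datasets.

```python
# Sketch of the evaluation protocol: 5-fold cross-validation, scored by AUC.
# Logistic regression and random data stand in for the DKT model and datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                        # stand-in features, one row per response
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # stand-in correct/incorrect labels

aucs = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]    # predicted P(correct)
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean AUC over 5 folds: {np.mean(aucs):.3f}")
```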
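On the Pseudocode entry: the equations the paper gives in place of pseudocode are, for the vanilla RNN variant, the standard recurrent updates (the LSTM variant swaps in the usual gated updates). Here x_t is the one-hot encoding of the exercise tag and correctness at time t, h_t the hidden state, and y_t the vector of predicted probabilities of answering each exercise correctly:

$$h_t = \tanh\left(W_{hx} x_t + W_{hh} h_{t-1} + b_h\right), \qquad y_t = \sigma\left(W_{yh} h_t + b_y\right)$$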
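Finally, a minimal sketch of the Experiment Setup quote, assuming PyTorch (not necessarily the framework the authors used; this is an illustrative re-implementation, not their released code). The hidden size of 200 and mini-batch of 100 come from the quote; the input/output sizes and the 0.5 dropout rate are assumptions.

```python
# Sketch of the quoted setup: hidden dimensionality 200, mini-batch size 100,
# and dropout applied to h_t only when computing the readout y_t -- the state
# passed to the next time step is never dropped out. Input/output sizes and
# the dropout rate are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class DKTSketch(nn.Module):
    def __init__(self, n_inputs, n_skills, hidden_size=200, p_drop=0.5):
        super().__init__()
        self.rnn = nn.LSTM(n_inputs, hidden_size, batch_first=True)
        self.dropout = nn.Dropout(p_drop)   # readout path only
        self.readout = nn.Linear(hidden_size, n_skills)

    def forward(self, x):
        h, _ = self.rnn(x)                  # h_t for every time step
        return torch.sigmoid(self.readout(self.dropout(h)))  # y_t

model = DKTSketch(n_inputs=248, n_skills=124)  # e.g. one-hot {exercise, correctness} pairs
batch = torch.zeros(100, 20, 248)              # mini-batch of 100 sequences, 20 steps each
probs = model(batch)                           # (100, 20, 124) predicted P(correct)
```

Applying dropout to the LSTM's output sequence, rather than inside the recurrence, satisfies the quoted constraint by construction: the dropped-out h_t feeds only the readout y_t, while the hidden state the recurrence passes to step t+1 is untouched.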