Understanding Dropouts in MOOCs

Authors: Wenzheng Feng, Jie Tang, Tracy Xiao Liu

AAAI 2019, pp. 517-524 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two large datasets show that the proposed method achieves better performance than several state-of-the-art methods. We conduct various experiments to evaluate the effectiveness of CFIN on two datasets: KDDCUP and XuetangX.
Researcher Affiliation | Academia | Department of Computer Science and Technology, Tsinghua University; Department of Economics, School of Economics and Management, Tsinghua University. fwz17@mails.tsinghua.edu.cn, jietang@tsinghua.edu.cn, liuxiao@sem.tsinghua.edu.cn. *The other authors include Shuhuai Zhang from PBC School of Finance of Tsinghua University and Jian Guan from XuetangX.
Pseudocode | No | The paper includes diagrams (e.g., Figure 5 for the CFIN architecture) and mathematical formulations, but it does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All datasets and codes used in this paper are publicly available at http://www.moocdata.cn.
Open Datasets | Yes | The analysis in this work is performed on two datasets from XuetangX. The first dataset contains 39 IPM courses and their enrolled students. It was also used for KDDCUP 2015 (https://biendata.com/competition/kddcup2015). All datasets and codes used in this paper are publicly available at http://www.moocdata.cn.
Dataset Splits | Yes | When training the models, we tune the parameters based on 5-fold cross validation (CV) with grid search, and use the best group of parameters in all experiments.
Hardware Specification | No | The paper does not specify any hardware (e.g., CPU or GPU models, memory) used for the experiments; it only notes that "we implement CFIN with TensorFlow".
Software Dependencies | No | The paper mentions TensorFlow and the Adam optimizer, but does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | For the KDDCUP dataset, the history period and prediction period are set to 30 days and 10 days respectively by the competition organizers. For the XuetangX dataset, the history period is set to 35 days and the prediction period to 10 days, i.e., Dh = 35, Dp = 10. We apply L2 regularization on the weight matrices, adopt the Rectified Linear Unit (ReLU) as the activation function, and normalize all features before feeding them into CFIN. We tune the parameters based on 5-fold cross validation (CV) with grid search.
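
For readers who want to mirror the evaluation protocol quoted in the Dataset Splits row, the following is a minimal sketch of 5-fold cross validation with grid search. The paper does not name the library, the estimator, or the parameter grid it searched over, so scikit-learn, the logistic-regression stand-in, and the grid below are illustrative assumptions rather than the authors' setup.

```python
# Hypothetical sketch: 5-fold CV + grid search as quoted in the "Dataset Splits" row.
# The estimator, grid, and data below are placeholders, not the paper's configuration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Placeholder enrollment features and dropout labels (the real data is at
# http://www.moocdata.cn).
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)

# Stand-in classifier; CFIN itself is a TensorFlow model and is not reproduced here.
param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                      cv=5, scoring="roc_auc")
search.fit(X, y)

print("best params:", search.best_params_)
print("best 5-fold AUC:", search.best_score_)
```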
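
The Experiment Setup row also fixes a few training details: feature normalization, ReLU activations, L2 regularization on the weight matrices, and the Adam optimizer in TensorFlow. Below is a small TensorFlow sketch of that configuration only; it is not the CFIN architecture, and the layer sizes, L2 strength, and learning rate are placeholders the paper does not report in the quoted text.

```python
# Hypothetical sketch of the quoted training configuration (not CFIN itself):
# normalized inputs, ReLU, L2-regularized weights, Adam optimizer.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 20   # placeholder feature dimension
L2_WEIGHT = 1e-4    # assumed L2 strength; not reported in the quoted setup

# Placeholder data standing in for the extracted enrollment features.
X = np.random.rand(1000, NUM_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

# "All features are normalized before being fed into CFIN": z-score normalization
# with statistics computed from the training data.
norm = tf.keras.layers.Normalization()
norm.adapt(X)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    norm,
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(L2_WEIGHT)),
    tf.keras.layers.Dense(32, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(L2_WEIGHT)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # dropout probability
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
```

Since the paper pins no versions, any recent TensorFlow 2.x release should run this sketch; the hyperparameters above would still need to be tuned with the 5-fold CV grid search described earlier.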