Set-to-Sequence Ranking-Based Concept-Aware Learning Path Recommendation

Authors: Xianyu Chen, Jian Shen, Wei Xia, Jiarui Jin, Yakun Song, Weinan Zhang, Weiwen Liu, Menghui Zhu, Ruiming Tang, Kai Dong, Dingyin Xia, Yong Yu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on two real-world public datasets and one industrial dataset, and the experimental results demonstrate the superiority and effectiveness of SRC."
Researcher Affiliation | Collaboration | "Xianyu Chen1, Jian Shen1, Wei Xia2, Jiarui Jin1, Yakun Song1, Weinan Zhang1, Weiwen Liu2, Menghui Zhu1, Ruiming Tang2, Kai Dong3, Dingyin Xia3, Yong Yu1*; 1 Shanghai Jiao Tong University, 2 Huawei Noah's Ark Lab, 3 Huawei Technologies Co., Ltd. {xianyujun,rocky,jinjiarui97,ereboas,wnzhang,zerozmi7}@sjtu.edu.cn, yuyong@apex.sjtu.edu.cn, {xiawei24,liuweiwen8,tangruiming,dongkai4,xiadingyin}@huawei.com"
Pseudocode | Yes | "Algorithm 1: SRC" (a generic set-to-sequence sketch follows the table)
Open Source Code | Yes | "Code now is available at https://gitee.com/mindspore/models/tree/master/research/recommend/SRC."
Open Datasets | Yes | "Our experiments are performed on two real-world public datasets: ASSIST09 [1] (Feng, Heffernan, and Koedinger 2009) and Junyi [2] (Chang, Hsu, and Chen 2015)." [1] https://sites.google.com/site/assistmentsdata/home/2009-2010-assistment-data [2] https://www.kaggle.com/datasets/junyiacademy/learning-activity-public-dataset-by-junyi-academy
Dataset Splits | No | The paper mentions training a KT model on static data and evaluating the main model, but it does not specify explicit training, validation, and test splits (e.g., percentages or sample counts) for the datasets used in its experiments. (A hypothetical split sketch follows the table.)
Hardware Specification | Yes | "All the models are trained under the same hardware settings with 16-Core AMD Ryzen 9 5950X (2.194 GHz), 62.78 GB RAM, and NVIDIA GeForce RTX 3080 cards."
Software Dependencies | No | The paper mentions 'MindSpore (MindSpore 2022), CANN (Compute Architecture for Neural Networks), and Ascend AI Processor' but does not provide specific version numbers for these or other relevant software dependencies used in the experiments.
Experiment Setup | Yes | "The learning rate is decreased from 1×10⁻³ to 1×10⁻⁵ during the training process. The batch size is set as 128. The weight for the L2 regularization term is 4×10⁻⁵. The dropout rate is set as 0.5. The dimension of embedding vectors is set as 64." (A hedged configuration sketch follows the table.)
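This report does not reproduce Algorithm 1 itself. For orientation only, the sketch below shows a generic pointer-style set-to-sequence decoder of the kind the paper's title suggests: encode the candidate concept set, then greedily emit a path by re-scoring the remaining concepts at each step. Every class, layer, and dimension here is a hypothetical stand-in, not the paper's SRC architecture.

```python
import torch
import torch.nn as nn

class SetToSeqRanker(nn.Module):
    """Generic pointer-style set-to-sequence ranker.

    A hypothetical illustration only; this is NOT the paper's
    Algorithm 1, whose details are not quoted in this report.
    """

    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)  # stand-in for a set encoder
        self.query = nn.GRUCell(dim, dim)   # recurrent decoder state

    def forward(self, concepts: torch.Tensor) -> list[int]:
        # concepts: (num_concepts, dim) embeddings of the candidate set
        enc = self.encoder(concepts)
        state = enc.mean(dim=0)              # permutation-invariant init
        remaining = list(range(enc.size(0)))
        path = []
        while remaining:
            scores = enc[remaining] @ state  # score the remaining concepts
            pick = remaining[int(scores.argmax())]
            path.append(pick)
            state = self.query(enc[pick], state)  # update decoder state
            remaining.remove(pick)
        return path                          # a ranked learning path
```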
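Because the paper reports no explicit splits, anyone reproducing it must choose their own. The sketch below is one hypothetical choice, a user-level 80/10/10 split; the ratios, the seed, and the user_id column name are assumptions, not values from the paper.

```python
import numpy as np
import pandas as pd

def split_by_user(df: pd.DataFrame, seed: int = 42):
    """Hypothetical 80/10/10 user-level split; none of these
    ratios come from the paper, which reports no explicit splits."""
    rng = np.random.default_rng(seed)
    users = df["user_id"].unique()  # assumed column name
    rng.shuffle(users)
    n = len(users)
    train_u = set(users[: int(0.8 * n)])
    valid_u = set(users[int(0.8 * n): int(0.9 * n)])
    test_u = set(users[int(0.9 * n):])
    return (df[df["user_id"].isin(train_u)],
            df[df["user_id"].isin(valid_u)],
            df[df["user_id"].isin(test_u)])
```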
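The quoted hyperparameters map directly onto a standard training configuration. A minimal PyTorch sketch follows (the released code targets MindSpore); the optimizer choice, the cosine shape of the decay, and the epoch count are assumptions, since the paper states only the start and end learning rates.

```python
import torch
import torch.nn as nn

# Values quoted from the paper's experiment setup:
EMBED_DIM = 64        # embedding dimension
BATCH_SIZE = 128      # batch size (would be passed to a DataLoader)
DROPOUT = 0.5         # dropout rate
L2_WEIGHT = 4e-5      # weight of the L2 regularization term
LR_START, LR_END = 1e-3, 1e-5  # learning rate decreased from 1e-3 to 1e-5
NUM_EPOCHS = 50       # assumption: the paper does not state the epoch count

# Stand-in module; the real model would be the paper's SRC architecture.
model = nn.Sequential(
    nn.Embedding(1000, EMBED_DIM),  # 1000 is a placeholder vocabulary size
    nn.Dropout(DROPOUT),
)

# Adam is an assumption; weight_decay realizes the L2 regularization term.
optimizer = torch.optim.Adam(
    model.parameters(), lr=LR_START, weight_decay=L2_WEIGHT)

# Cosine annealing reaches LR_END at the final epoch; the paper does not
# specify the decay schedule's shape, so this is one plausible choice.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=NUM_EPOCHS, eta_min=LR_END)
```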