Fully Adaptive Framework: Neural Computerized Adaptive Testing for Online Education

Authors: Yan Zhuang, Qi Liu, Zhenya Huang, Zhi Li, Shuanghong Shen, Haiping Ma
Pages: 4734-4742

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real-world datasets demonstrate the effectiveness and robustness of NCAT compared with several state-of-the-art methods.
Researcher Affiliation | Academia | (1) Anhui Province Key Laboratory of Big Data Analysis and Application, School of Data Science & School of Computer Science and Technology, University of Science and Technology of China; (2) Anhui University
Pseudocode | No | The paper describes the methodology using text and diagrams but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/bigdata-ustc/NCAT
Open Datasets | Yes | We use three real-world educational datasets, namely ASSIST, EXAM, and NIPS-EDU. ... NIPS-EDU (Wang et al. 2020b) refers to the dataset in the NeurIPS 2020 Education Challenge. ... the datasets can be found at https://github.com/bigdata-ustc/EduData
Dataset Splits | Yes | We perform 5-fold cross-validation for all datasets; for each fold, we use 60%-20%-20% of students for training, validation, and testing, respectively. Furthermore, we partition the questions responded to by each student into the support set (D_i^s, 70%) and the query set (D_i^u, 30%).
Hardware Specification | Yes | All methods are developed and trained on a Tesla K20m GPU.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries or programming languages used for implementation.
Experiment Setup | Yes | We set the embedding size d = 128 and the learning rate in the RL algorithm to 0.001. The temperature parameter ν in Eq.(6) is set to 2^(-0.1t), which is slowly reduced during the test. The capacity of the replay buffer for Q-learning is set to 10000 in experiments. The exploration factor ε decays from 1 to 0 during training.
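The split protocol and hyperparameters quoted above can be sketched as a minimal Python script. This is an illustration, not the NCAT codebase: all function and variable names here are made up, and the exponential form of the temperature schedule (read as ν = 2^(-0.1t)) is an assumption reconstructed from the garbled extraction.

```python
import random

def split_students(student_ids, seed=0):
    """60% train / 20% validation / 20% test split over students (per fold)."""
    ids = list(student_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def split_support_query(question_log, seed=0):
    """70% support set (D_i^s) / 30% query set (D_i^u) of one student's responses."""
    log = list(question_log)
    random.Random(seed).shuffle(log)
    n_support = int(0.7 * len(log))
    return log[:n_support], log[n_support:]

# Hyperparameter values quoted from the paper's setup section.
CONFIG = {
    "embedding_size": 128,        # d = 128
    "rl_learning_rate": 1e-3,     # learning rate of the RL algorithm
    "replay_buffer_capacity": 10_000,  # Q-learning replay buffer
}

def temperature(t):
    """Assumed schedule nu = 2^(-0.1 t): slowly reduced as the test proceeds."""
    return 2.0 ** (-0.1 * t)

def epsilon(step, total_steps):
    """Linear decay of the exploration factor from 1 to 0 over training."""
    return max(0.0, 1.0 - step / total_steps)
```

For example, with 100 students `split_students` yields 60/20/20 students per fold, and a 10-question log splits into a 7-question support set and a 3-question query set.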