Towards Accurate and Fair Cognitive Diagnosis via Monotonic Data Augmentation
Authors: Zheng Zhang, Wei Song, Qi Liu, Qingyang Mao, Yiyan Wang, Weibo Gao, Zhenya Huang, Shijin Wang, Enhong Chen
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments on real-world datasets showcase the efficacy of our framework in addressing the data sparsity issue with accurate and fair CD results. |
| Researcher Affiliation | Collaboration | 1: University of Science and Technology of China; 2: State Key Laboratory of Cognitive Intelligence; 3: Beijing Normal University. {zhangzheng,sw2,maoqy0503,weibogao}@mail.ustc.edu.cn; {qiliuql,huangzhy,cheneh}@ustc.edu.cn; wangyiyan@mail.bnu.edu.cn; sjwang3@iflytek.com |
| Pseudocode | No | The paper describes the proposed framework and method verbally and mathematically but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code is released at https://github.com/Mercidaiha/CMCD. |
| Open Datasets | Yes | The ASSIST dataset ("ASSISTments 2009-2010 skill builder") is an open dataset collected by the ASSISTments online tutoring systems [12]. |
| Dataset Splits | No | Regarding the dataset division, we allocate 80% of each student's response log for training and the remaining 20% for testing. There is no explicit mention of a separate validation split percentage or size within the experimental setup details. |
| Hardware Specification | Yes | We implement all models with PyTorch and conduct all experiments on four 2.0GHz Intel Xeon E5-2620 CPUs and a Tesla K20m GPU. |
| Software Dependencies | No | We implement all models with PyTorch. No specific version number for PyTorch or other software dependencies is provided. |
| Experiment Setup | Yes | For all models, we set the learning rate to 0.001 and the dropout rate to 0.2. We apply Adam as the optimization algorithm to update the model parameters. |
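The quoted experiment setup (learning rate 0.001, dropout rate 0.2, Adam optimizer) can be sketched in PyTorch as below. The two-layer model is a hypothetical placeholder, not the paper's CMCD architecture; only the hyperparameter values come from the reported setup.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; the paper's actual CD model differs.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),  # dropout rate 0.2, as reported
    nn.Linear(32, 1),
)

# Adam with learning rate 0.001, as reported.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```

Since no PyTorch version is stated in the paper, any recent release should accept this configuration unchanged.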