Enhancing Cognitive Diagnosis Using Un-interacted Exercises: A Collaboration-Aware Mixed Sampling Approach
Authors: Haiping Ma, Changqian Wang, Hengshu Zhu, Shangshang Yang, Xiaoming Zhang, Xingyi Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness and interpretability of our framework through comprehensive experiments on real-world datasets. |
| Researcher Affiliation | Collaboration | 1 Department of Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology, Anhui University, China; 2 Career Science Lab, BOSS Zhipin, China; 3 School of Artificial Intelligence, Anhui University, China; 4 School of Computer Science and Technology, Anhui University, China |
| Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | https://github.com/WangCQ206/IntelligentEducation/tree/main/CMES |
| Open Datasets | Yes | We conduct experiments on two real-world datasets ASSISTments (Feng, Heffernan, and Koedinger 2009) and Math, which both provide student-exercise interaction records and the exercise-knowledge concept relational matrix. |
| Dataset Splits | Yes | We apply a 70% : 10% : 20% training/validation/test split for each student's response logs in the two datasets. |
| Hardware Specification | Yes | All models are implemented in Pytorch, and all experiments are conducted on Linux servers with Tesla V100. |
| Software Dependencies | No | The paper mentions 'Pytorch' as the implementation framework and 'Linux servers' but does not provide specific version numbers for these or any other key software components, such as 'Pytorch 1.9' or a specific Linux distribution version. |
| Experiment Setup | Yes | We first initialized all the parameters in the networks with Xavier (Glorot and Bengio 2010) initialization and used the Adam (Kingma and Ba 2014) optimizer with a fixed batch size of 256 during the training process. For the multi-dimensional models... we set the dimensions of latent features for both students and exercises to be equal to the number of knowledge concepts, i.e., 123 for ASSISTments and 61 for MATH datasets. Based on the parameter tuning, we set n to 20 for ASSISTments and 5 for Math respectively; we set W to 50 and 20 for ASSISTments and Math respectively. |
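The paper does not include splitting code, but the per-student 70% : 10% : 20% partition quoted above is straightforward to reproduce. A minimal sketch follows; the function name, the shuffling step, and the seed are assumptions, not details from the paper:

```python
import random

def split_logs(logs, ratios=(0.7, 0.1, 0.2), seed=0):
    """Split one student's response logs into train/val/test subsets.

    `logs` is any sequence of interaction records for a single student;
    the shuffle and fixed seed are assumptions for reproducibility.
    """
    logs = list(logs)
    random.Random(seed).shuffle(logs)
    n = len(logs)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    # Remaining records (roughly ratios[2] of the total) go to the test set.
    return (logs[:n_train],
            logs[n_train:n_train + n_val],
            logs[n_train + n_val:])

# Hypothetical usage: 100 response logs for one student -> 70/10/20 records.
train, val, test = split_logs(range(100))
```

Applying this per student (rather than over the pooled log set) matches the quoted description, since each student's response logs are split independently.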