DGCD: An Adaptive Denoising GNN for Group-level Cognitive Diagnosis

Authors: Haiping Ma, Siyu Song, Chuan Qin, Xiaoshan Yu, Limiao Zhang, Xingyi Zhang, Hengshu Zhu

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments conducted on four real-world educational datasets clearly demonstrate the effectiveness of our proposed DGCD model."
Researcher Affiliation | Collaboration | Haiping Ma (1), Siyu Song (1,2), Chuan Qin (2,3), Xiaoshan Yu (4), Limiao Zhang (1), Xingyi Zhang (5) and Hengshu Zhu (2). (1) Information Materials and Intelligent Sensing Laboratory of Anhui Province, Institutes of Physical Science and Information Technology, Anhui University, Hefei, Anhui, China; (2) Career Science Lab, Boss Zhipin, Beijing, China; (3) PBC School of Finance, Tsinghua University, Beijing, China; (4) School of Artificial Intelligence, Anhui University, Hefei, Anhui, China; (5) School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/BIMK/IntelligentEducation/tree/main/DGCD.
Open Datasets | Yes | "We conduct the experiments on four public education benchmarks, including ASSIST12 [Feng et al., 2009], NIPS Edu [Wang et al., 2020b], SLPbio [Lu et al., 2021], and SLPmath [Lu et al., 2021]."
Dataset Splits | No | "Each dataset of group-exercise responses is divided randomly into two subsets: 80% for training and 20% for testing." The paper mentions hyper-parameter optimization but does not explicitly describe a separate validation split.
Hardware Specification | No | The paper does not provide hardware details such as the GPU model, CPU model, or memory used for the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., "PyTorch 1.9", "Python 3.8") needed to replicate the experiments.
Experiment Setup | Yes | "In our DGCD, we set the dimension d of the vector to be the number of knowledge concepts. The number of GNN layers in the representation learning module is set to 2. The number of diagnostic layers is set to 3." The hyper-parameter λkl was searched over [1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7], and t was optimized over [0.57, 0.67, 0.77, 0.87].
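The split and tuning protocol quoted in the table can be sketched as below. The 80%/20% split and the two hyper-parameter grids are taken from the paper's own description; everything else (the `split_responses` helper, the `evaluate` callback standing in for one DGCD training/evaluation run, and the random seed) is an illustrative assumption, not code from the authors' repository.

```python
import random
from itertools import product

def split_responses(responses, train_frac=0.8, seed=42):
    """Randomly split group-exercise response logs into train/test
    subsets (80%/20% in the paper; no validation split is described).
    `seed` is an assumption added for reproducibility of the sketch."""
    rng = random.Random(seed)
    shuffled = list(responses)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hyper-parameter grids quoted from the paper's experiment setup
LAMBDA_KL_GRID = [1, 1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7]
T_GRID = [0.57, 0.67, 0.77, 0.87]

def grid_search(evaluate):
    """Exhaustive search over (lambda_kl, t) pairs. `evaluate` is a
    hypothetical callback that trains/evaluates one DGCD configuration
    and returns a scalar score (e.g. AUC); higher is better."""
    best = None
    for lam, t in product(LAMBDA_KL_GRID, T_GRID):
        score = evaluate(lam, t)
        if best is None or score > best[0]:
            best = (score, lam, t)
    return best  # (best_score, best_lambda_kl, best_t)
```

An exhaustive search here costs 8 × 4 = 32 training runs, which matches the small grid sizes the paper reports.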