Disentangling Cognitive Diagnosis with Limited Exercise Labels

Authors: Xiangzhi Chen, Le Wu, Fei Liu, Lei Chen, Kun Zhang, Richang Hong, Meng Wang

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on widely used benchmarks demonstrate the superiority of our proposed model." |
| Researcher Affiliation | Academia | Hefei University of Technology; Tsinghua University; Institute of Dataspace, Hefei Comprehensive National Science Center; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | Yes | "Algorithm 1: Filling Missing Q-matrix for Interpretable Baselines" |
| Open Source Code | Yes | "Our code is available at https://github.com/kervias/DCD" |
| Open Datasets | Yes | "Our experiments are conducted on three real-world datasets, i.e., Matmat (https://github.com/adaptive-learning/matmat-web), Junyi [4] and NIPS2020EC [42], all of which contain knowledge concepts of the tree structure." |
| Dataset Splits | Yes | "We adopt five-fold cross-validation to avoid randomness." |
| Hardware Specification | Yes | "We train our model with Python 3.9 and PyTorch 1.12.1 on an NVIDIA RTX A5000." |
| Software Dependencies | Yes | "We train our model with Python 3.9 and PyTorch 1.12.1 on an NVIDIA RTX A5000." |
| Experiment Setup | Yes | "We set different hyperparameters to balance each loss function. The final objective function can be summarized as: arg min_{Θ = [ϕ_u, ϕ_v^d, ϕ_v^r]} L = L_m + α L_l + L_ul + Σ_{z ∈ {z_u, z_v^d, z_v^r}} Σ_i β_i L_d^i(z) + Σ_{z ∈ {z_u, z_v^d, z_v^r}} L_p(z), where Θ = [ϕ_u, ϕ_v^d, ϕ_v^r] is the parameter set of the whole model, α is the hyperparameter for alignment of labeled exercises, and β_i denotes the weight of the disentanglement term for the i-th level of the knowledge-concept tree. We set a Gaussian prior N(0, 1) for each latent factor in μ_u and μ_v^d, and a Bernoulli(0.2) prior for each latent factor in μ_v^r. The default margin is 0.5." |
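The Experiment Setup row quotes an objective that sums a main loss, a weighted alignment loss for labeled exercises, an unlabeled-exercise loss, per-level disentanglement terms weighted by β_i, and prior-regularization terms. As a minimal sketch (not the authors' implementation, which lives at https://github.com/kervias/DCD), the combination can be expressed as a plain function; the term names follow the equation, and the standard-normal KL helper is only one common way to enforce the N(0, 1) prior mentioned in the setup, not necessarily the exact form DCD uses:

```python
import math


def total_loss(L_m, L_l, L_ul, L_d_per_level, L_p_terms, alpha, betas):
    """Combine the loss terms of the reported objective:
    L = L_m + alpha * L_l + L_ul + sum_i beta_i * L_d^i + sum L_p.
    All arguments here are scalar loss values already computed elsewhere.
    """
    disentangle = sum(b * l for b, l in zip(betas, L_d_per_level))
    prior_reg = sum(L_p_terms)
    return L_m + alpha * L_l + L_ul + disentangle + prior_reg


def gaussian_kl_to_std_normal(mu, log_var):
    """Closed-form KL(N(mu, sigma^2) || N(0, 1)) for one latent factor,
    a common choice for the Gaussian-prior regularizer; whether DCD uses
    exactly this form is an assumption."""
    return 0.5 * (mu * mu + math.exp(log_var) - log_var - 1.0)
```

With α = 0.5 and β = (0.1, 0.2), `total_loss(1.0, 2.0, 0.5, [1.0, 2.0], [0.3], alpha=0.5, betas=[0.1, 0.2])` evaluates to 3.3, and `gaussian_kl_to_std_normal(0.0, 0.0)` is 0, as expected when the posterior already matches the prior.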
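The Dataset Splits row states that five-fold cross-validation is used. A pure-Python sketch of building such splits follows; the function name and the index-based partitioning are illustrative, not taken from the authors' code:

```python
import random


def five_fold_splits(n_interactions, seed=0):
    """Shuffle interaction indices and partition them into 5 disjoint
    folds; each fold serves once as the test set while the remaining
    four form the training set, mirroring five-fold cross-validation."""
    idx = list(range(n_interactions))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]
    splits = []
    for k in range(5):
        test = folds[k]
        train = [i for j in range(5) if j != k for i in folds[j]]
        splits.append((train, test))
    return splits
```

Every interaction appears in exactly one test fold, so averaging metrics over the five runs removes the randomness of a single split.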