BOBCAT: Bilevel Optimization-Based Computerized Adaptive Testing
Authors: Aritra Ghosh, Andrew Lan
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on five real-world student response datasets, we show that BOBCAT outperforms existing CAT methods (sometimes significantly) at reducing test length. |
| Researcher Affiliation | Academia | Aritra Ghosh and Andrew Lan, University of Massachusetts Amherst, {arighosh,andrewlan}@cs.umass.edu |
| Pseudocode | Yes | Algorithm 1 BOBCAT training process (a simplified sketch of the bilevel training loop appears after the table) |
| Open Source Code | Yes | Our implementation will be publicly available at https://github.com/arghosh/BOBCAT. |
| Open Datasets | Yes | We use five publicly available benchmark datasets: EdNet, Junyi, Eedi-1, Eedi-2, and ASSISTments. ... EdNet: https://github.com/riiid/ednet; Junyi: https://www.kaggle.com/junyiacademy/learning-activitypublic-dataset-by-junyi-academy; Eedi: https://eedi.com/projects/neurips-education-challenge; ASSISTments: https://sites.google.com/site/assistmentsdata/home/assistment2009-2010-data |
| Dataset Splits | Yes | We perform 5-fold cross validation for all datasets; for each fold, we use 60%-20%-20% students for training, validation, and testing, respectively. (A minimal student-level split sketch appears after the table.) |
| Hardware Specification | Yes | We implement all methods in PyTorch and run our experiments in a NVIDIA Titan X/1080Ti GPU. |
| Software Dependencies | No | The paper mentions PyTorch but does not specify a version number for it or other software dependencies. |
| Experiment Setup | Yes | For BiNN, we use a two-layer, fully-connected network (with 256 hidden nodes, ReLU nonlinearity, 20% dropout rate, and a final sigmoid output layer)... We use another fully-connected network (with two hidden layers, 256 hidden nodes, Tanh nonlinearity, and a final softmax output layer) as the question selection algorithm. (A PyTorch sketch of these two networks appears after the table.) |
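
The student-level splits quoted in the Dataset Splits row could be reproduced roughly as follows. This is a minimal sketch under assumptions: the use of scikit-learn's `KFold`, the variable names, and the seeding are illustrative and are not taken from the released code.

```python
# Hypothetical illustration of the 5-fold, student-level 60%-20%-20% split
# described in the paper; scikit-learn usage and all names are assumptions.
import numpy as np
from sklearn.model_selection import KFold

def student_folds(student_ids, seed=0):
    student_ids = np.asarray(student_ids)
    rng = np.random.default_rng(seed)
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    for train_val_idx, test_idx in kf.split(student_ids):
        # Each fold holds out ~20% of students as the test set; of the
        # remaining ~80%, a further 20% of all students become validation,
        # leaving ~60% for training.
        train_val_idx = rng.permutation(train_val_idx)
        n_val = len(student_ids) // 5
        val_idx, train_idx = train_val_idx[:n_val], train_val_idx[n_val:]
        yield student_ids[train_idx], student_ids[val_idx], student_ids[test_idx]
```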
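
The architecture details quoted in the Experiment Setup row translate fairly directly into PyTorch. The sketch below is only an illustration: the widths (256), nonlinearities (ReLU/Tanh), dropout rate (20%), and output activations (sigmoid/softmax) come from the paper, while the input/output dimension `num_questions` and the exact layer ordering are assumptions.

```python
# Minimal PyTorch sketch of the two networks described in the experiment setup.
import torch.nn as nn

def make_response_network(num_questions: int) -> nn.Sequential:
    """Two-layer fully-connected response model (BiNN): 256 hidden units,
    ReLU, 20% dropout, sigmoid outputs giving per-question correctness
    probabilities."""
    return nn.Sequential(
        nn.Linear(num_questions, 256),
        nn.ReLU(),
        nn.Dropout(p=0.2),
        nn.Linear(256, num_questions),
        nn.Sigmoid(),
    )

def make_selection_network(num_questions: int) -> nn.Sequential:
    """Question selection network: two hidden layers of 256 units with Tanh,
    softmax over candidate questions."""
    return nn.Sequential(
        nn.Linear(num_questions, 256),
        nn.Tanh(),
        nn.Linear(256, 256),
        nn.Tanh(),
        nn.Linear(256, num_questions),
        nn.Softmax(dim=-1),
    )
```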
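
Algorithm 1 itself is not reproduced in this report. The sketch below only illustrates the overall bilevel idea suggested by the title and abstract: adaptively select a small number of questions for each training student, then backpropagate an outer loss computed on that student's held-out ("meta") questions. The batch keys, the greedy argmax selection, and the omission of the inner-level student-specific adaptation and of the gradient estimator for the discrete selection step are all simplifications, not the authors' algorithm.

```python
# Simplified illustration of a bilevel-style outer training step; NOT the
# paper's Algorithm 1. Batch keys, greedy selection, and the missing
# inner-level adaptation / gradient estimator are assumptions and omissions.
import torch
import torch.nn.functional as F

def outer_step(response_net, selection_net, optimizer, batch, num_steps):
    responses = batch["responses"].float()      # (students, questions), entries in {0, 1}
    meta_mask = batch["meta_mask"].bool()       # held-out questions scored by the outer loss
    signed = 2.0 * responses - 1.0              # map {0,1} -> {-1,+1}; 0 then means "not asked"
    asked = torch.zeros_like(responses)         # indicator of questions asked so far
    for _ in range(num_steps):
        probs = selection_net(signed * asked)   # distribution over candidate questions
        picked = probs.argmax(dim=-1)           # greedy pick; a gradient estimator would be
                                                # needed here to actually train selection_net
        asked = asked.scatter(1, picked.unsqueeze(1), 1.0)
    preds = response_net(signed * asked)        # predict every response from the asked subset
    loss = F.binary_cross_entropy(preds[meta_mask], responses[meta_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```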