Collective Biobjective Optimization Algorithm for Parallel Test Paper Generation

Authors: Minh Luan Nguyen, Siu Cheung Hui, Alvis C. M. Fong

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiment results have shown that CBO has drastically outperformed the current techniques in terms of paper quality and runtime efficiency.
Researcher Affiliation | Collaboration | Minh Luan Nguyen, Data Analytics Department, Institute for Infocomm Research, Singapore; Siu Cheung Hui, School of Computer Engineering, Nanyang Technological University, Singapore; Alvis C. M. Fong, School of Computing Science, University of Glasgow, Scotland, UK
Pseudocode | Yes | Algorithm 1: Infeasible Allocation Detection; Algorithm 2: Total Quality Maximization
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | As there is no benchmark data available, we generated 4 large-sized synthetic datasets, namely D1, D2, D3 and D4, for performance evaluation.
Dataset Splits | No | The paper evaluates the algorithm on generated synthetic datasets for test paper generation, but it does not specify traditional training/validation/test splits, as would be relevant for training a machine learning model.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU model, memory) used to run the experiments.
Software Dependencies | No | The paper does not list the software dependencies, with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | We vary the parameters in order to have different test criteria in the test specifications. The number of topics is specified between 2 and 40. The total time is set between 20 and 240 minutes, and it is also set proportional to the number of selected topics for each specification. The average difficulty degree is specified randomly between 3 and 9. We have conducted the experiments according to 5 different values of k, i.e., k = 1, 5, 10, 15, 20.
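
For illustration, the sketch below generates random test specifications consistent with the ranges quoted in the Experiment Setup row. It is a minimal sketch, not code from the paper: the helper name generate_specification, the uniform sampling of the difficulty degree, and the time_per_topic constant used to make the total time proportional to the number of topics are all assumptions, since the excerpt does not state how the values are drawn.

import random

def generate_specification(topics_range=(2, 40),
                           time_per_topic=6,        # assumed constant; the paper only says "proportional"
                           time_bounds=(20, 240),
                           difficulty_range=(3, 9)):
    """Generate one random test specification following the reported ranges (hypothetical helper)."""
    num_topics = random.randint(*topics_range)
    # Total time proportional to the number of selected topics,
    # clamped to the reported 20-240 minute interval.
    total_time = min(max(num_topics * time_per_topic, time_bounds[0]), time_bounds[1])
    # Average difficulty degree drawn between 3 and 9 (uniform sampling is an assumption).
    avg_difficulty = random.uniform(*difficulty_range)
    return {
        "num_topics": num_topics,
        "total_time_minutes": total_time,
        "avg_difficulty": round(avg_difficulty, 1),
    }

if __name__ == "__main__":
    # One random specification for each reported value of k (number of parallel papers).
    for k in (1, 5, 10, 15, 20):
        spec = generate_specification()
        print(f"k={k:2d} parallel papers -> spec: {spec}")

Running the script prints one random specification per reported value of k; in the paper each specification would then drive the generation of k parallel test papers.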