Algorithmic Exam Generation
Authors: Omer Geiger, Shaul Markovitch
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we present an empirical evaluation of the complete MOEG framework over a procedural domain. |
| Researcher Affiliation | Academia | Department of Computer Science, Technion – Israel Institute of Technology, 32000 Haifa, Israel |
| Pseudocode | Yes | Figure 1: Pseudo-code for action landmark approximation method |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | For this algebraic domain we devised a question generating algorithm. ... A set of 160 questions used for evaluation, Q, was produced by applying the described procedure 10 times with each (w, d) value pair in {0, 1, 2, 3}2. The paper does not provide concrete access information for this dataset. |
| Dataset Splits | Yes | We use an oracle sample for evaluation. Two things are important with regard to this oracle sample: first, it is taken independently from the utility sample, and second, it is considerably larger. ... Default values, used unless stated otherwise, are sample size = 400, ... A sample of size 1000 was used for the oracle. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers. |
| Experiment Setup | Yes | Default values, used unless stated otherwise, are sample size = 400, ke = 10, ϵp = 0.15, ϵw = 0.5. A sample of size 1000 was used for the oracle. ... parameters: Dlim = 40, SOLlim = 100, Tlim = 300 sec. |
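The figures quoted above are internally consistent: applying the question-generating procedure 10 times for each (w, d) pair in {0, 1, 2, 3}² yields 16 pairs and hence 160 evaluation questions. A minimal sketch checking that arithmetic and collecting the stated defaults (the dictionary keys are illustrative names chosen here, not identifiers from any released code, since the paper releases none):

```python
from itertools import product

# Default experiment parameters as quoted from the paper.
# Key names are illustrative; the paper provides no code or config files.
DEFAULTS = {
    "sample_size": 400,   # utility sample size
    "oracle_size": 1000,  # independent, larger oracle sample
    "k_e": 10,
    "eps_p": 0.15,
    "eps_w": 0.5,
    "D_lim": 40,
    "SOL_lim": 100,
    "T_lim_sec": 300,
}

# Evaluation set: 10 repetitions per (w, d) pair in {0, 1, 2, 3}^2.
pairs = list(product(range(4), repeat=2))
num_questions = 10 * len(pairs)
print(len(pairs), num_questions)  # 16 160
```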