Questimator: Generating Knowledge Assessments for Arbitrary Topics
Authors: Qi Guo, Chinmay Kulkarni, Aniket Kittur, Jeffrey P. Bigham, Emma Brunskill
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In a study with 833 participants from Mechanical Turk, we found that participants' scores on Questimator-generated quizzes correlated well with their scores on existing online quizzes on topics ranging from philosophy to economics. Questimator also generated questions with discriminatory power comparable to that of existing online quizzes. (A sketch of this correlation and discrimination analysis follows the table.) |
| Researcher Affiliation | Academia | Qi Guo, Chinmay Kulkarni, Aniket Kittur, Jeffrey P. Bigham, and Emma Brunskill School of Computer Science, Carnegie Mellon University qiguo@andrew.cmu.edu, {chinmayk, nkittur, jbigham, ebrunskill}@cs.cmu.edu |
| Pseudocode | No | The paper describes the system's steps and processes in textual format (e.g., Section 3.1, 3.2, 3.3, 3.4) but does not include any explicitly labeled pseudocode or algorithm blocks with structured formatting. |
| Open Source Code | No | The paper mentions: 'We maintain an updated corpus of quizzes generated from the most popular Wikipedia articles at https://crowdtutor.info.' This link points to generated quizzes, not to the source code for Questimator itself; no explicit statement of a source-code release is found. |
| Open Datasets | No | The paper states: 'Quizzes were drawn from MOOCs (Coursera/edX), US university/school board websites, and textbooks by major publishers (e.g., McGraw Hill).' However, it does not provide specific links, DOIs, or formal citations with authors/year for these existing online quizzes or the data derived from them, so the data cannot be concretely accessed. |
| Dataset Splits | No | The paper describes how each evaluation quiz was created by combining 10 expert questions and 10 Questimator questions, and how the expert questions were sampled. However, it does not provide explicit training/validation/test splits for any machine learning model, nor for the evaluation data itself beyond the composition of the quizzes given to participants. (A sketch of this quiz composition follows the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instances used for running Questimator or its experiments. |
| Software Dependencies | No | The paper mentions tools like 'Word2Vec', 'TGrep', 'skip-thought vectors', and 'IRT model', along with their respective citations. However, it does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The number of questions to return is configurable (10 by default). For n_d distractors (n_d = 3 by default), Questimator selects, as an intermediate step, m ≤ n_d distractor topics (m = 3 by default) from which to generate the distractor phrases (a sketch of this selection step follows the table). The evaluation was a within-subjects experiment with participants drawn from Amazon Mechanical Turk; in all, 833 workers participated. All participants were paid a $1 base for their participation and could earn up to $4 in bonuses based on their test scores. |
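
The Experiment Setup row describes an intermediate step in which Questimator selects m ≤ n_d distractor topics before generating distractor phrases, and the Software Dependencies row notes that the system uses Word2Vec. A plausible reading is that candidate topics are ranked by embedding similarity to the question topic. The sketch below illustrates that idea with plain NumPy cosine similarity over toy vectors; the function name, the made-up embeddings, and the similarity criterion are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_distractor_topics(topic_vec, candidates, m=3):
    """Return the m candidate topics most similar to the question topic.

    `candidates` maps topic name -> embedding vector. Cosine similarity
    stands in for whatever relatedness measure Questimator actually uses.
    """
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    ranked = sorted(candidates, key=lambda t: cosine(topic_vec, candidates[t]),
                    reverse=True)
    return ranked[:m]

# Toy 4-dimensional "embeddings"; real Word2Vec vectors would be ~300-d.
rng = np.random.default_rng(0)
topics = {name: rng.normal(size=4)
          for name in ["supply and demand", "inflation",
                       "monetary policy", "photosynthesis"]}
question_vec = topics["inflation"] + rng.normal(scale=0.1, size=4)

print(select_distractor_topics(question_vec, topics, m=3))
```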
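
The Dataset Splits row notes that each evaluation quiz combined 10 sampled expert questions with 10 Questimator questions. A minimal sketch of that composition, assuming the merged questions are shuffled (the paper's question ordering is not specified here):

```python
import random

def build_quiz(expert_pool, questimator_questions, n_expert=10):
    """Sample expert questions, merge with generated ones, and shuffle.

    Shuffling is an assumption for illustration; only the 10 + 10
    composition is stated in the paper's evaluation description.
    """
    quiz = random.sample(expert_pool, n_expert) + list(questimator_questions)
    random.shuffle(quiz)
    return quiz

expert_pool = [f"expert question {i}" for i in range(25)]
questimator_qs = [f"generated question {i}" for i in range(10)]
print(len(build_quiz(expert_pool, questimator_qs)))  # 20 questions per quiz
```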
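
The Research Type row reports two analyses: a correlation between participants' scores on Questimator-generated quizzes and on existing online quizzes, and a comparison of discriminatory power, for which the Software Dependencies row mentions an IRT model. The sketch below shows both in standard form, using hypothetical score arrays and a two-parameter logistic (2PL) item characteristic curve whose slope `a` is the discrimination parameter; none of the numbers come from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant scores on the two quiz halves; the
# paper's raw data are not released, so these arrays are illustrative.
questimator_scores = np.array([7, 5, 9, 4, 8, 6, 10, 3, 7, 6])
existing_scores = np.array([8, 4, 9, 5, 7, 6, 9, 2, 8, 5])

# Score correlation, the form of the paper's headline claim.
r, p = stats.pearsonr(questimator_scores, existing_scores)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

def p_correct(theta, a, b):
    """2PL item characteristic curve: P(correct | ability theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A higher-discrimination item (a = 1.5) separates low and high
# abilities more sharply than a flatter one (a = 0.5).
for a in (0.5, 1.5):
    print(a, p_correct(np.array([-1.0, 0.0, 1.0]), a, b=0.0))
```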