ATUCAPTS: Automated Tests that a User Cannot Pass Twice Simultaneously

Authors: Garrett Andersen, Vincent Conitzer

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We propose a specific class of ATUCAPTS and present the results of a human subjects study to validate that they satisfy the two properties above.
Researcher Affiliation | Academia | Garrett Andersen and Vincent Conitzer, Department of Computer Science, Duke University, Durham, NC, USA, {garrett, conitzer}@cs.duke.edu
Pseudocode | No | The paper describes the detailed approach for the test, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions "We thank Eric Hu for writing code for the software used in the human subjects experiment", but it does not include an unambiguous statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes a human subjects study in which 25 subjects were recruited and participated directly in an experiment, rather than one that uses a pre-existing publicly available dataset. No concrete access information (link, DOI, repository, or formal citation) for a dataset is provided.
Dataset Splits | No | The paper describes a human subjects study with two experimental phases (single test vs. two simultaneous tests) but does not provide training/test/validation splits, sample counts, or cross-validation details relevant to data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions that software was written for the human subjects experiment but does not provide any specific ancillary software details, such as library or solver names with version numbers.
Experiment Setup | Yes | Various parameters need to be set to get to a working instantiation of our test, namely the following. 1. The number of boxes. We set this to 6. 2. The speed of the boxes... 3. The pattern of motion of the boxes... 4. The time allocated to each query... 5. Feedback to the user during the test.
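As a rough illustration of the setup parameters quoted in the last row, here is a minimal Python sketch of how such a configuration might be recorded. The names (ATUCAPTSConfig, num_boxes, and the other fields) are hypothetical and not taken from the paper; only the number of boxes (6) is given in the quoted excerpt, so the remaining fields are left as placeholders rather than filled in.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ATUCAPTSConfig:
    """Hypothetical container for the test parameters listed in the paper.

    Only num_boxes has a concrete value in the quoted excerpt (6); the
    other parameters are mentioned but not specified there, so they are
    modeled as optional placeholders.
    """
    num_boxes: int = 6                           # "The number of boxes. We set this to 6."
    box_speed: Optional[float] = None            # "The speed of the boxes..." (value not given)
    motion_pattern: Optional[str] = None         # "The pattern of motion of the boxes..." (not given)
    query_time_seconds: Optional[float] = None   # "The time allocated to each query..." (not given)
    feedback_to_user: Optional[str] = None       # "Feedback to the user during the test." (not given)


# Example: an instance with only the documented value set.
config = ATUCAPTSConfig()
print(config)
```

This is only a sketch under the assumptions above; a faithful reimplementation would need the unspecified values from the authors or their experimental software.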