Dempster-Shafer Theoretic Learning of Indirect Speech Act Comprehension Norms
Authors: Ruchen Wen, Mohammed Aun Siddiqui, Tom Williams | pp. 10410–10417
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evidence: section headings "Experiment 1: Generating Norms", "Experiment 2: Detecting Norms", and "Experimental Results". |
| Researcher Affiliation | Academia | Ruchen Wen, Mohammed Aun Siddiqui, Tom Williams; MIRRORLab, Department of Computer Science, Colorado School of Mines; {rwen, twilliams}@mines.edu |
| Pseudocode | No | The paper describes the norm learning algorithm used but does not provide pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes data collected through human subject experiments (Amazon Mechanical Turk and college campus participants) but does not provide concrete access information (link, DOI, repository, or citation) for this collected dataset. |
| Dataset Splits | No | The paper describes categorizing collected data into different 'norm frames' with varying numbers of data instances for learning uncertainty intervals, but it does not specify explicit train/validation/test dataset splits with percentages or sample counts for model evaluation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components such as the psiTurk framework and Sarathy, Scheutz, and Malle's (2017) rule learning algorithm, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | No | The paper describes the setup for human subject experiments and data collection, but it does not provide specific experimental setup details such as hyperparameters, optimizer settings, or system-level training configurations for the learning algorithm. |
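Since the paper releases neither code nor pseudocode, the following is only a rough illustrative sketch (all names hypothetical, not the authors' implementation) of the Dempster-Shafer machinery the table rows allude to: combining two mass functions with Dempster's rule and reading off a [Bel, Pl] uncertainty interval for a candidate norm.

```python
def combine(m1, m2):
    """Dempster's rule of combination for two mass functions,
    each a dict mapping frozenset (focal element) -> mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to the empty set
    k = 1.0 - conflict  # normalize out the conflicting mass
    return {s: v / k for s, v in combined.items()}

def belief(m, hypothesis):
    """Bel(H): total mass committed to subsets of H."""
    return sum(v for s, v in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Pl(H): total mass not contradicting H."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Hypothetical example: two independent pieces of evidence about
# whether a norm ("acknowledge") holds, over a two-element frame.
theta = frozenset({"acknowledge", "ignore"})  # frame of discernment
m1 = {frozenset({"acknowledge"}): 0.6, theta: 0.4}
m2 = {frozenset({"acknowledge"}): 0.5, theta: 0.5}
m = combine(m1, m2)
interval = (belief(m, frozenset({"acknowledge"})),
            plausibility(m, frozenset({"acknowledge"})))
# interval is the [Bel, Pl] uncertainty interval for the norm
```

Here the width of the interval (Pl − Bel) shrinks as more evidence is combined, which is the sense in which varying numbers of data instances per norm frame yield uncertainty intervals of varying tightness.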