Moral Decision-Making by Analogy: Generalizations versus Exemplars

Authors: Joseph Blass, Kenneth Forbus

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compared the performance of four conditions: (1) MAC/FAC over SAGE generalizations, (2) MAC/FAC over the union of generalizations and cases, (3) MAC/FAC over cases alone, and (4) the best SME match. For brevity, we refer to these as M+G, M+GC, M+C, and Best SME, respectively. Best SME serves as a baseline: since it is exhaustive, it should always provide the most accurate match. The training and test sets were drawn from eight trolley-like problems from Waldmann and Dieterich's (2007) study, which were converted from simplified text to formal representations using EA NLU (Tomai 2009) and slightly modified by hand (to indicate, for example, that when a trolley hits a bus, the bus passengers die).
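The four conditions differ only in which retrieval pool the probe is matched against, and (per the Dataset Splits row below) testing is leave-one-out style. A minimal sketch of that setup, assuming hypothetical helper names (`retrieval_pool`, `leave_one_out`, `evaluate`) that do not come from the paper:

```python
def retrieval_pool(condition, cases, generalizations):
    """Memory contents for each experimental condition named in the paper."""
    pools = {
        "M+G": list(generalizations),                  # SAGE generalizations only
        "M+C": list(cases),                            # ungeneralized cases only
        "M+GC": list(generalizations) + list(cases),   # union of both
        "Best SME": list(cases),                       # exhaustive SME over all cases
    }
    return pools[condition]

def leave_one_out(cases, evaluate):
    """Test each case against a training set built from all the other cases."""
    results = []
    for i, probe in enumerate(cases):
        training = cases[:i] + cases[i + 1:]
        results.append(evaluate(probe, training))
    return results
```

With the eight trolley-like problems, `leave_one_out` would run eight trials, each holding out one problem as the probe.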
Researcher Affiliation | Academia | Joseph A. Blass, Kenneth D. Forbus, Qualitative Reasoning Group, Northwestern University, 2133 Sheridan Road, Evanston, IL 60208 USA. Contact: joeblass@u.northwestern.edu
Pseudocode | No | The paper describes the computational models (SME, MAC/FAC, SAGE) and their processes conceptually, but it does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper notes that 'Additional information can be found in the online supplemental material' at a given URL (http://www.qrg.northwestern.edu/papers/aaa15/moraldm-extras.html), but it does not explicitly state that source code for the described methodology is available there, nor does it link to a code repository.
Open Datasets | Yes | The training and test sets were drawn from eight trolley-like problems from Waldmann and Dieterich's (2007) study, which were converted from simplified text to formal representations using EA NLU (Tomai 2009).
Dataset Splits | No | The paper describes how training sets were constructed (subsets of the other cases) and that tests were performed on single cases, but it does not specify a distinct validation set with percentages, counts, or a detailed splitting methodology for model tuning or selection. The authors use leave-one-out-style testing without a separate validation split.
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions computational models such as SME, MAC/FAC, SAGE, and EA NLU, but it does not provide version numbers for these software components, nor for any programming languages or libraries used in the implementation.
Experiment Setup | Yes | If the similarity score from the top reminding is above the similarity threshold, the case is assimilated into the retrieved generalization. ... In creating SME mappings between cases and generalizations, only facts above a predetermined probability cutoff are used. ... The model performs MAC/FAC over these generalizations (M+G), over the ungeneralized cases (M+C), and over the union of generalizations and cases (M+GC), using the test case as a probe. After retrieval it performs a consistency check on the reminding: if a candidate inference hypothesizes something known to be false, then the mapping is rejected and the model moves to the next best.
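The three mechanisms quoted in this row (threshold-gated assimilation, the probability cutoff on facts, and consistency-checked retrieval) can be sketched as follows. This is a toy approximation, not the authors' code: the threshold and cutoff values are assumed (the paper does not report them), generalizations are modeled as flat lists of cases rather than SAGE's probabilistic structures, and the helpers `similarity`, `score`, `inferences`, and `known_false` stand in for SME/MAC/FAC machinery not shown here.

```python
SIMILARITY_THRESHOLD = 0.8   # assumed value; the paper does not report it
PROBABILITY_CUTOFF = 0.6     # assumed value for the "predetermined probability cutoff"

def assimilate(case, generalizations, similarity):
    """Fold a case into its most similar generalization if above threshold,
    else start a new generalization (a simplified SAGE-style step)."""
    best = max(generalizations, key=lambda g: similarity(case, g), default=None)
    if best is not None and similarity(case, best) >= SIMILARITY_THRESHOLD:
        best.append(case)                 # merge into the retrieved generalization
    else:
        generalizations.append([case])    # new singleton generalization
    return generalizations

def usable_facts(facts, probability):
    """Keep only generalization facts above the probability cutoff for mapping."""
    return [f for f in facts if probability[f] >= PROBABILITY_CUTOFF]

def retrieve_consistent(probe, memory, score, inferences, known_false):
    """Return the best reminding whose candidate inferences pass the
    consistency check; reject any that hypothesize something known false."""
    for item in sorted(memory, key=lambda m: score(probe, m), reverse=True):
        if not any(known_false(i) for i in inferences(probe, item)):
            return item                   # first consistent reminding wins
    return None                           # every reminding was inconsistent
```

In the paper's terms, `memory` would be the M+G, M+C, or M+GC pool, and the loop mirrors "the model moves to the next best" after a rejected mapping.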