Goal Operations for Cognitive Systems

Authors: Michael Cox, Dustin Dannenhauer, Sravya Kondrakunta

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (Computational Experiments) | We evaluated MIDCA’s goal transformation in a modified blocks world domain. ... Empirical Results: We collected data from 260 instances of MIDCA, varying resources and number of goals. Figure 3 shows the results of MIDCA using a goal transformation strategy, whereas Figure 4 shows the results with fixed, static goals (i.e., no goal change). (See the goal-transformation sketch after the table.)
Researcher Affiliation | Academia | Michael T. Cox, Wright State Research Institute, Beavercreek, OH 45431, michael.cox@wright.edu; Dustin Dannenhauer, Lehigh University, Bethlehem, PA 18015, dtd212@lehigh.edu; Sravya Kondrakunta, Wright State University, Dayton, OH 45435, kondrakunta.2@wright.edu
Pseudocode | Yes | Table 1 gives pseudocode for the Beta and Choose operations. Although the goal structure is an ordered set, it is handled as a sequence whose union and difference operators behave like their set counterparts; Reverse maintains its order, while Choose inverts it. (See the ordered-goal-set sketch after the table.)
Open Source Code | No | No concrete access to source code for the methodology was provided.
Open Datasets | No | The paper mentions 'a modified blocks world domain' but does not provide concrete access information for a publicly available or open dataset.
Dataset Splits | No | No specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning was provided.
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used to run the experiments were provided.
Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers) were provided.
Experiment Setup | No | The paper describes the experimental domain and the varied parameters ('number of resources' and 'number of goals') but does not provide specific experimental setup details such as hyperparameters, optimizer settings, or training configurations. (See the experiment-driver sketch after the table.)
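The Research Type row refers to evaluating MIDCA's goal transformation against fixed, static goals in a modified blocks world. The paper's actual transformation mechanism and domain encoding are not reproduced here; the Python sketch below only illustrates the general shape of such a strategy, relaxing a goal to a weaker variant when resources run short. The goal encoding, the predicate names (stable-on, on), the relaxation table, and the resource costs are all assumptions made for illustration, not the paper's definitions.

```python
from typing import Dict, List, Tuple

# Hypothetical goal representation: (predicate, arguments), e.g. ("stable-on", ("A", "B")).
Goal = Tuple[str, Tuple[str, ...]]

# Assumed relaxation table: maps a resource-hungry predicate to a weaker one
# that needs no extra resources.
RELAXATIONS: Dict[str, str] = {"stable-on": "on"}

# Assumed resource cost per predicate (units of some consumable resource).
COSTS: Dict[str, int] = {"stable-on": 1, "on": 0}


def transform_goals(goals: List[Goal], resources: int) -> List[Goal]:
    """Keep each goal as given while resources last; otherwise relax it.

    This mirrors the idea of a goal-transformation strategy (as opposed to
    fixed, static goals), not the paper's exact algorithm.
    """
    transformed: List[Goal] = []
    remaining = resources
    for predicate, args in goals:
        cost = COSTS.get(predicate, 0)
        if cost <= remaining:
            remaining -= cost
            transformed.append((predicate, args))                # enough resources: keep
        elif predicate in RELAXATIONS:
            transformed.append((RELAXATIONS[predicate], args))   # swap in the weaker goal
        else:
            transformed.append((predicate, args))                # nothing weaker; keep as-is
    return transformed


if __name__ == "__main__":
    goals = [("stable-on", ("A", "B")), ("stable-on", ("B", "C"))]
    print(transform_goals(goals, resources=1))
    # -> [('stable-on', ('A', 'B')), ('on', ('B', 'C'))]
```

Under a static-goal policy the second goal would simply stay stable-on and go unmet for lack of resources, which appears to be the contrast the two experimental conditions (Figures 3 and 4) measure.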
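The Pseudocode row cites Table 1 (Beta and Choose), which is not reproduced here. As a loose reading of the quoted description only, the sketch below implements an ordered goal collection whose union and difference behave like their set counterparts while preserving insertion order, plus reverse and choose helpers. The class name OrderedGoalSet, the method names, and the exact semantics of reverse and choose are guesses, not the paper's definitions.

```python
from typing import Iterable, List, Optional


class OrderedGoalSet:
    """Ordered collection of goals with set-like union and difference.

    Union and difference behave like their set counterparts but keep the
    order in which goals were first added (duplicates are dropped).
    """

    def __init__(self, goals: Iterable[str] = ()) -> None:
        self._goals: List[str] = []
        for g in goals:
            self.add(g)

    def add(self, goal: str) -> None:
        if goal not in self._goals:
            self._goals.append(goal)

    def union(self, other: "OrderedGoalSet") -> "OrderedGoalSet":
        # Set-like union: this set's goals first, then any new ones from other.
        return OrderedGoalSet(list(self._goals) + list(other._goals))

    def difference(self, other: "OrderedGoalSet") -> "OrderedGoalSet":
        # Set-like difference that keeps the surviving goals in order.
        return OrderedGoalSet(g for g in self._goals if g not in other._goals)

    def reverse(self) -> List[str]:
        # One reading of "Reverse maintains the order": return the goals as a
        # sequence in their stored order.
        return list(self._goals)

    def choose(self) -> Optional[str]:
        # One reading of "choose inverts it": select from the inverted
        # sequence, i.e. the most recently added goal.
        return self._goals[-1] if self._goals else None

    def __iter__(self):
        return iter(self._goals)
```

For example, OrderedGoalSet(['on(A,B)', 'on(B,C)']).choose() returns 'on(B,C)' under this reading, and difference removes goals without disturbing the order of those that remain.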
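The Research Type and Experiment Setup rows indicate 260 MIDCA instances in which the number of resources and the number of goals were varied, with one condition using goal transformation and the other using fixed goals. Since neither the code nor the exact parameter grid is available, the sketch below only shows a plausible driver loop for such a sweep; run_instance is a hypothetical stand-in for launching MIDCA, and the 13 x 10 grid is chosen solely so that the two conditions total 260 runs.

```python
import itertools

# Hypothetical stand-in for a single MIDCA run; the real system and its
# performance measure are not available here, so this stub only records
# the configuration it was given.
def run_instance(num_resources: int, num_goals: int, transform_goals: bool) -> dict:
    # A real driver would build a modified-blocks-world problem from these
    # parameters, launch MIDCA, and return its performance score.
    return {"resources": num_resources,
            "goals": num_goals,
            "goal_transformation": transform_goals,
            "score": None}  # placeholder: no real result is computed here


def sweep(resource_levels, goal_counts, transform_goals: bool) -> list:
    """Run one instance per (resources, goals) pair and collect the records."""
    return [run_instance(r, g, transform_goals)
            for r, g in itertools.product(resource_levels, goal_counts)]


if __name__ == "__main__":
    # Assumed 13 x 10 grid, chosen only so the two conditions total 260 runs;
    # the paper does not report its exact parameter values.
    resource_levels = range(0, 13)   # 13 resource levels
    goal_counts = range(1, 11)       # 10 goal counts
    transformed = sweep(resource_levels, goal_counts, transform_goals=True)
    static = sweep(resource_levels, goal_counts, transform_goals=False)
    print("total instances:", len(transformed) + len(static))  # 260
```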