Enabling Fast Instruction-Based Modification of Learned Robot Skills
Authors: Tyler Frasca, Bradley Oosterveld, Meia Chita-Tegmark, Matthias Scheutz
AAAI 2021 (pp. 6075-6083) | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A thorough evaluation of the implemented framework demonstrates the algorithms, integrated in a cognitive robotic architecture, operating on different fully autonomous robots in various HRI case studies. An additional online HRI user study verifies that subjects prefer the proposed approach for quickly modifying robot knowledge. |
| Researcher Affiliation | Collaboration | 1 Human-Robot Interaction Laboratory, Tufts University, 200 Boston Ave, Medford, MA 02155 2 Thinking Robots, Inc., 12 Channel St STE 202, Boston, MA 02210 |
| Pseudocode | No | The paper defines formal modification operators using mathematical equations (e.g., O_ins(α, γ, k) := α[α_k/γ; α_k], which replaces step α_k with the sequence γ; α_k, i.e., inserts γ before step k), but these are presented as formulas within the text, not as structured pseudocode or an algorithm block. A sketch of this operator appears after the table. |
| Open Source Code | No | The paper does not provide an unambiguous statement about releasing code or a link to a source-code repository for the described methodology. |
| Open Datasets | No | The paper describes generating '50 randomly generated action scripts' and '450 automatically modified action scripts' for evaluation, but it does not specify that this data is publicly available or provide concrete access information (link, DOI, citation) for a dataset. |
| Dataset Splits | No | The paper does not specify exact train/validation/test dataset splits, sample counts, or reference predefined splits for any dataset used in the experiments. The randomly generated scripts are used for correctness verification, not formal data splitting. |
| Hardware Specification | No | The paper mentions using 'a PR2 and a Nao robot', a 'Willow Garage PR2 robot', and a 'Kinova Gen 3 robotic arm' for its case studies and user study, but it does not specify the compute hardware (CPU/GPU models or memory) used to run the experiments or simulations. |
| Software Dependencies | No | The paper states: 'We implemented the framework in the DIARC architecture (Scheutz et al. 2019)' but does not provide specific version numbers for DIARC or any other ancillary software components needed for replication. |
| Experiment Setup | Yes | We evaluated the correctness of the framework implementation on 50 randomly generated action scripts as follows. For each randomly generated action script, we randomly generated 1 to 4 parameters and 1 to 4 conditions of randomly selected type {Π, Ω, Σ, Φ}, using the following parameter types and arguments: Thing = {Agent, Object}; Agent = {andy, tyler, dempster, laura}; Object = {ball, mug, table, cup}; Direction = {right, left, up, down}; and the following conditions: touching(Thing, Thing), on(Object, Object), holding(Agent, Object), see(Agent, Object). We recursively created new action steps for the script, starting with the following set of initial actions and type signatures: graspObj(Agent, Object), moveObj(Agent, Object, Direction), pickUp(Agent, Object), placeOn(Agent, Object, Object), releaseObj(Agent, Object), and then added each newly created action to the set of actions to choose from (call it Actions), so that subsequently generated actions would choose from the initial primitive actions as well as more complex ones. For each action in Actions, we then created nine random modifications as follows: we either operated on one of the steps α_k within the action, or on one of its four condition sets {Π_α, Ω_α, Σ_α, Φ_α}. A sketch of this generation procedure appears after the table. |
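The insertion operator quoted in the Pseudocode row can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the paper's implementation (no code was released): the `ActionScript` class, the step representation, and the function name `o_ins` are hypothetical stand-ins, and the paper's scripts additionally carry parameters and four condition sets, which are omitted here.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class ActionScript:
    """Hypothetical stand-in for an action script: a name plus an
    ordered tuple of steps (here, each step is just an action name)."""
    name: str
    steps: tuple


def o_ins(alpha: ActionScript, gamma: str, k: int) -> ActionScript:
    """O_ins(alpha, gamma, k) := alpha[alpha_k / gamma; alpha_k]:
    replace step alpha_k with the sequence (gamma; alpha_k),
    i.e. insert the new step gamma before step k."""
    steps = list(alpha.steps)
    steps[k:k + 1] = [gamma, steps[k]]
    return replace(alpha, steps=tuple(steps))


# Insert releaseObj before the second step of a two-step script.
script = ActionScript("stackMug", ("graspObj", "placeOn"))
print(o_ins(script, "releaseObj", 1).steps)
# -> ('graspObj', 'releaseObj', 'placeOn')
```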
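The Experiment Setup row describes how the 50 random scripts were built; the sketch below approximates that procedure. Everything beyond the quoted types, arguments, conditions, and initial actions is an assumption: the data layout, the function names `sample_arg` and `generate_script`, and the 1-to-4 step count per script are hypothetical, and the four condition types are kept as opaque labels {Pi, Omega, Sigma, Phi} rather than being assigned semantics.

```python
import random

# Parameter types and arguments, as quoted from the paper's setup.
ARGS = {
    "Agent": ["andy", "tyler", "dempster", "laura"],
    "Object": ["ball", "mug", "table", "cup"],
    "Direction": ["right", "left", "up", "down"],
}

# Condition predicates and their type signatures.
CONDITIONS = [
    ("touching", ("Thing", "Thing")),
    ("on", ("Object", "Object")),
    ("holding", ("Agent", "Object")),
    ("see", ("Agent", "Object")),
]

# Initial primitive actions and type signatures; generated scripts are
# added to this pool so later scripts can draw on earlier ones.
ACTIONS = {
    "graspObj": ("Agent", "Object"),
    "moveObj": ("Agent", "Object", "Direction"),
    "pickUp": ("Agent", "Object"),
    "placeOn": ("Agent", "Object", "Object"),
    "releaseObj": ("Agent", "Object"),
}


def sample_arg(t):
    # Thing is the union of the Agent and Object types.
    if t == "Thing":
        t = random.choice(["Agent", "Object"])
    return random.choice(ARGS[t])


def generate_script(name):
    """Build one random script: 1-4 parameters, 1-4 conditions of a
    randomly selected type in {Pi, Omega, Sigma, Phi}, and (an assumed)
    1-4 steps drawn from the current action pool."""
    params = [sample_arg(random.choice(list(ARGS)))
              for _ in range(random.randint(1, 4))]
    conditions = []
    for _ in range(random.randint(1, 4)):
        pred, sig = random.choice(CONDITIONS)
        ctype = random.choice(["Pi", "Omega", "Sigma", "Phi"])
        conditions.append((ctype, pred, tuple(sample_arg(t) for t in sig)))
    steps = []
    for _ in range(random.randint(1, 4)):
        act = random.choice(list(ACTIONS))
        steps.append((act, tuple(sample_arg(t) for t in ACTIONS[act])))
    ACTIONS[name] = ()  # new script joins the pool (untyped here)
    return {"name": name, "params": params,
            "conditions": conditions, "steps": steps}


scripts = [generate_script(f"script{i}") for i in range(50)]
```

From each script generated this way, the paper then derives nine random modifications by operating either on a step α_k or on one of the four condition sets, as quoted in the table above.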