Language to Action: Towards Interactive Task Learning with Physical Agents
Authors: Joyce Y. Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, Guangyue Xu
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results have shown that both approaches significantly improve argument grounding performance. ... our empirical results have shown that the web data can be used to complement a small number of seed examples ... Our empirical results have shown that the hypothesis space representation of grounded semantics significantly outperforms the single hypothesis representation. ... Our empirical studies have shown that, as expected, the time taken for teaching is significantly higher in the one-step-at-a-time setting. ... Our results have shown that the policy learned from RL leads to not only more efficient interaction but also better models for the grounded verb semantics. |
| Researcher Affiliation | Collaboration | Joyce Y. Chai¹, Qiaozi Gao¹, Lanbo She², Shaohua Yang¹, Sari Saba-Sadiya¹, Guangyue Xu¹. ¹Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824; ²Microsoft Cloud & AI, Redmond, WA 98052 |
| Pseudocode | No | The paper describes procedures and processes in prose but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for its described methodology is publicly available. |
| Open Datasets | Yes | We used an existing dataset [Misra et al., 2015] to simulate different levels of noise of the environment and simulate interaction with a human teacher through question answering. |
| Dataset Splits | No | The paper mentions using a dataset for "model training" and refers to "a large amount of image data (effect) which is annotated with corresponding causes". However, it does not specify training/validation/test splits, exact percentages, or sample counts, nor does it point to predefined splits from the cited works (see the illustrative sketch after this table). |
| Hardware Specification | No | The paper mentions a "Baxter robot" as the physical agent being taught tasks, and notes that Figure 1c was "produced by YOLO". However, it does not specify any hardware details like CPU, GPU models, memory, or computational resources used for running the experiments or training the models. |
| Software Dependencies | No | The paper mentions "YOLO [Redmon and Farhadi, 2017]" as a tool used, but it does not provide specific version numbers for YOLO or any other software dependencies, libraries, or frameworks used in their experiments. |
| Experiment Setup | No | The paper describes various conceptual approaches and learning frameworks. However, it does not provide specific details about the experimental setup such as hyperparameter values (e.g., learning rates, batch sizes, epochs), optimizer settings, or other system-level training configurations. |
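For context on what the "Dataset Splits" and "Experiment Setup" rows look for, here is a minimal, hypothetical sketch of a seeded split plus a machine-readable configuration record, the kind of specification whose absence is noted above. Every name and value here (dataset size, split ratios, hyperparameters) is an illustrative assumption, not a detail taken from the paper.

```python
# Hypothetical sketch: an explicit, seeded dataset split and a logged
# experiment configuration. All values below are placeholders, not
# settings reported in Chai et al. (2018).
import json
import random

SEED = 42                                        # fixed seed for reproducibility
SPLIT = {"train": 0.8, "dev": 0.1, "test": 0.1}  # assumed ratios

def split_indices(n_examples: int, seed: int = SEED) -> dict:
    """Deterministically partition example indices into train/dev/test."""
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    n_train = int(SPLIT["train"] * n_examples)
    n_dev = int(SPLIT["dev"] * n_examples)
    return {
        "train": idx[:n_train],
        "dev": idx[n_train:n_train + n_dev],
        "test": idx[n_train + n_dev:],
    }

if __name__ == "__main__":
    splits = split_indices(n_examples=1000)
    # Recording the exact configuration alongside results is what the
    # "Experiment Setup" row checks for; these values are placeholders.
    record = {
        "seed": SEED,
        "split_sizes": {k: len(v) for k, v in splits.items()},
        "hyperparameters": {"learning_rate": 1e-3, "batch_size": 32},
    }
    print(json.dumps(record, indent=2))
```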