Natural Language Acquisition and Grounding for Embodied Robotic Systems
Authors: Muhannad Alomari, Paul Duckworth, David Hogg, Anthony Cohn
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We evaluate the performance of our system using two datasets, a synthetic dataset and a simple real-world setup." and "Table 2: Results of learning the n-grams visual representations from two different datasets." |
| Researcher Affiliation | Academia | "Muhannad Alomari, Paul Duckworth, David C. Hogg and Anthony G. Cohn, School of Computing, University of Leeds, Leeds, UK (scmara, p.duckworth, d.c.hogg, a.g.cohn)@leeds.ac.uk" |
| Pseudocode | No | The paper describes the learning framework in high-level steps but provides no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper links to datasets and mentions a software library developed by the authors (QSRlib), but does not state that the code implementing the methodology described in this paper is publicly available. |
| Open Datasets | Yes | "The extended version is published at Alomari et al. (2016) http://doi.org/10.5518/32" and "We collected a dataset (http://doi.org/10.5518/110) consisting of 160 videos in which volunteers controlled the robot's arms, and manipulated real objects." |
| Dataset Splits | No | "For the synthetic world... We kept 200 videos and 1343 commands as our testing dataset." and "A further 40 new videos along with 40 new commands were collected and used as a test set which include new objects which were not present in the training set." The paper states test set sizes explicitly but does not specify complete training/validation/test splits, and no validation set is mentioned. |
| Hardware Specification | No | "For the real-world setup, we used a Baxter robot as our test platform and attached a Microsoft Kinect2 sensor to its chest, as shown in Figure 1." This describes the robot and sensor used for data collection, not the computational hardware used to run the experiments or train the models. No specific CPU, GPU, or memory details are provided. |
| Software Dependencies | No | The paper mentions software such as the tabletop object detector (ROS Wiki) and a particle filter, but does not provide version numbers for any software dependencies. |
| Experiment Setup | No | The paper describes the learning process and model components but does not report specific experimental settings such as hyperparameters (e.g., learning rate, batch size, number of epochs) or optimizer configuration. |