Dual Track Multimodal Automatic Learning through Human-Robot Interaction
Authors: Shuqiang Jiang, Weiqing Min, Xue Li, Huayang Wang, Jian Sun, Jiaqi Zhou
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted the experiments through human-machine interaction and the experimental results validated the effectiveness of our proposed system. |
| Researcher Affiliation | Academia | (1) Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, China; (2) University of Chinese Academy of Sciences, China; (3) Shandong University of Science and Technology, China |
| Pseudocode | No | The paper includes mathematical equations and descriptive text, but no distinct pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper describes collecting its own data during interaction ('We use 13 kinds of hand-held objects') and training models on it ('capture 60 images per class as the training data'), but it does not provide access information (link, DOI, specific citation) for this dataset or state that it is publicly available. |
| Dataset Splits | No | The paper describes training and testing procedures, including 'capture 60 images per class as the training data' and 'take 6 images as the training data', and mentions a 'test dataset', but it does not specify a separate validation set or detailed train/test/validation splits required for reproduction. |
| Hardware Specification | Yes | Our system is run on a personal computer with an Intel Core processor (4 CPUs, 3.1 GHz). We select the Kinect 1.0 device to capture objects and an NVIDIA GeForce GTX 770 GPU to extract deep features. A Logitech HD 720P camera is used to capture the face. |
| Software Dependencies | No | The paper mentions specific tools like VIPLFaceNet, EasyCCG, AlexNet, and VGG features but does not provide version numbers for these or any other software dependencies needed for reproduction. |
| Experiment Setup | Yes | For class-incremental learning, we randomly select 3 classes as the source concepts to train a 3-class source model. ... For each time, we capture 60 images per class as the training data for each incremental learning. ... For data-incremental learning, ... For each time, we take 6 images as the training data for each incremental learning. (A sketch of this two-track schedule follows the table.) |
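
The quoted setup is concrete enough to mock up the data flow of the two incremental tracks. The sketch below is a minimal simulation in Python, assuming illustrative class names and a hypothetical step count for the data-incremental track; it is not the authors' code, only a restatement of the reported split sizes (3 source classes out of 13, 60 images per class-incremental step, 6 images per data-incremental step).

```python
import random

NUM_CLASSES = 13        # "13 kinds of hand-held objects"
SOURCE_CLASSES = 3      # 3 randomly chosen classes train the source model
IMAGES_PER_STEP = 60    # class-incremental: images captured per class per step
DATA_INCREMENT = 6      # data-incremental: images taken per step

# Class names are illustrative placeholders, not the paper's object labels.
classes = [f"object_{i:02d}" for i in range(NUM_CLASSES)]

# --- Class-incremental track ---
# Randomly pick 3 source concepts, train a 3-class source model, then add
# the remaining 10 classes one at a time with 60 fresh images each.
random.shuffle(classes)
known = classes[:SOURCE_CLASSES]
print(f"source model trained on: {known}")
for new_class in classes[SOURCE_CLASSES:]:
    known.append(new_class)
    print(f"class-incremental: +{new_class} ({IMAGES_PER_STEP} images), "
          f"model now covers {len(known)} classes")

# --- Data-incremental track ---
# The class set is fixed; each step contributes 6 more training images.
# The number of steps (5) is an assumption for illustration only.
seen = 0
for step in range(1, 6):
    seen += DATA_INCREMENT
    print(f"data-incremental step {step}: {seen} images accumulated")
```

Swapping the print statements for actual model-update calls would reproduce the reported schedule; the split sizes are the only values taken from the paper, and everything else is placeholder scaffolding.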