Synthesizing Robotic Handwriting Motion by Learning from Human Demonstrations

Authors: Hang Yin, Patrícia Alves-Oliveira, Francisco S. Melo, Aude Billard, Ana Paiva

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper begins by situating our work amongst related literatures in Section 2. The proposed approaches are then developed in detail in Section 3. We discuss the learning and synthesis results in Section 4, and in particular a Turing-like test to validate the human-likeness of generated motion in Section 5.
Researcher Affiliation | Academia | (1) GAIPS, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa; (2) Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne; (3) Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisboa, Portugal and INESC-ID
Pseudocode | Yes | The paper includes Algorithm 1, "Random Sub Space Partitioning", which partitions the dataset through feature bagging (a generic sketch of the feature-bagging idea is given below the table).
Open Source Code | No | The paper does not provide any explicit statements about making its source code publicly available, nor does it provide links to a code repository.
Open Datasets | Yes | The dataset employed was the UJI Pen Characters repository [Llorens et al., 2008], which contains online handwriting samples collected from 60 adult subjects.
Dataset Splits | No | The paper uses the UJI Pen Characters repository but does not explicitly provide details about specific training, validation, or test splits, or the methodology for such splits (a hypothetical subject-wise split is sketched below the table).
Hardware Specification | No | The paper mentions that training was performed 'on a modern laptop' but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions the use of 'off-the-shelf packages such as [Pedregosa et al., 2011]' (scikit-learn) but does not provide specific version numbers for any software dependencies.
Experiment Setup | No | The paper describes the algorithms and overall framework, but it does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, number of epochs, optimizer settings) or explicit configuration parameters for reproducibility.
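
For context on the Pseudocode row, the sketch below illustrates the generic idea of feature bagging / random subspace partitioning: the feature dimensions of a dataset are shuffled and split into random subsets, each defining a lower-dimensional view of the data. This is a minimal sketch only; the function name, the disjoint-subset strategy, and the NumPy-based implementation are assumptions for illustration, and it does not reproduce the paper's Algorithm 1.

```python
import numpy as np

def random_subspace_partition(X, n_subsets, seed=None):
    """Split the feature dimensions of X into random, disjoint subsets.

    Generic illustration of feature bagging / random-subspace partitioning;
    this is not a reproduction of the paper's Algorithm 1, whose
    subset-selection details are not given in the table above.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    shuffled = rng.permutation(n_features)       # shuffle feature indices
    bags = np.array_split(shuffled, n_subsets)   # disjoint index groups
    # Each entry pairs a feature-index subset with its column view of X.
    return [(idx, X[:, idx]) for idx in bags]

# Example: 100 samples with 12 features, split into 3 random subspaces.
X = np.random.default_rng(0).normal(size=(100, 12))
for idx, X_sub in random_subspace_partition(X, n_subsets=3, seed=0):
    print(sorted(idx.tolist()), X_sub.shape)
```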
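
For the Dataset Splits row, the snippet below shows one common way such a split could be done for writer-labelled data like UJI Pen Characters: holding out entire subjects so that no writer appears in both training and test sets. The protocol, the 20% test fraction, and the synthetic subject labels are all assumptions for illustration; the paper does not report how, or whether, the data were split.

```python
import numpy as np

def subject_wise_split(subject_ids, test_fraction=0.2, seed=0):
    """Hold out whole writers for testing so no subject appears in both sets.

    Hypothetical protocol for illustration only; the paper does not report
    how (or whether) the UJI Pen Characters data were split.
    """
    rng = np.random.default_rng(seed)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    n_test = max(1, int(round(test_fraction * len(subjects))))
    test_subjects = subjects[:n_test]
    test_mask = np.isin(subject_ids, test_subjects)
    return ~test_mask, test_mask  # boolean masks over samples: train, test

# Example with synthetic labels: 3 samples from each of 60 writers.
sample_subjects = np.repeat(np.arange(60), 3)
train_mask, test_mask = subject_wise_split(sample_subjects, test_fraction=0.2)
print(train_mask.sum(), test_mask.sum())  # 144 training vs. 36 test samples
```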