Kinetic Imaginations: Exploring the Possibilities of Combining AI and Dance
Authors: Alexander Berman, Valencia James
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results and discussions presented in the paper emanate from experiments performed in the interdisciplinary project AI am, which brings together a dancer, researchers, and creative technologists. |
| Researcher Affiliation | Academia | Alexander Berman, Valencia James. The paper does not explicitly state institutional affiliations (university names, company names, or email domains) for the authors. Given that it was presented at an academic conference (IJCAI), an academic affiliation is highly likely, but none is stated in the text. |
| Pseudocode | Yes | 1. Select a random observed pose in the map as the departure point p 2. Generate a set of destination candidates {qi}, where each candidate is a random observed pose plus a random vector of magnitude N (the novelty parameter) 3. Choose the destination q as the candidate among {qi} whose distance to p has the smallest difference from the preferred distance E (the extension parameter) 4. Choose some intermediate points between p and q, where each intermediate point lies between a point on the straight line between p and q and its nearest observation in the map (closer to the nearest observation for lower N values) 5. Smooth the resulting path using spline interpolation 6. Create the next trajectory by treating the current destination as a departure and repeating steps 2-5 |
| Open Source Code | No | A video showing an example of a generated motion sequence can be found at http://tiny.cc/kinetic-imag. The paper provides a link to a video demonstration but does not provide access to the source code for the methodology described. |
| Open Datasets | No | For the purposes of our experiments, a database was created by recording about 5 minutes of material. The moves were performed by the team’s dancer and were derived from poses that relate to certain spatial points, and the movement that arises from moving back and forth between these poses. The paper describes creating a custom dataset but does not provide any access information (link, DOI, citation) to make it publicly available. |
| Dataset Splits | No | For the purposes of our experiments, a database was created by recording about 5 minutes of material. The paper describes the creation of a dataset and its use for statistical modeling but does not specify any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used for running its experiments. |
| Software Dependencies | No | Secondly, the dimensionality of the data is reduced by nonlinear kernel principal component analysis (KPCA) [Schölkopf et al., 1998]. The paper mentions using KPCA but does not list specific software names with version numbers for its implementation or other ancillary software dependencies. |
| Experiment Setup | Yes | The method allows movements to be generated and rendered graphically in real time in the form of a dancing avatar. By adjoining paths so that the endpoint of one trajectory becomes the departure for the subsequent one, a continuous stream of movement is obtained. ... The example stems from a pose map of 7 dimensions, a value which was reached experimentally. ... The avatar’s degree of novelty, extension and speed were manipulated and the effects of the parameter variations on the generated movement observed. |
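The six-step path-generation procedure quoted in the Pseudocode row can be sketched concretely. The following is a minimal Python sketch, not the authors' implementation: the sample observations, the candidate count, the intermediate-point count, the pull-toward-observations weighting, and all parameter values are assumptions; only the 7-dimensional pose space comes from the paper, and a simple moving average stands in for the paper's spline interpolation to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's pose map: observed poses in a
# 7-dimensional reduced space (the dimensionality reported in the paper).
observations = rng.standard_normal((200, 7))

def next_trajectory(p, observations, novelty, extension,
                    n_candidates=20, n_intermediate=8, rng=rng):
    """One iteration of the path-generation loop (steps 2-5).

    novelty (N) controls how far destinations may stray from observed
    poses; extension (E) is the preferred travel distance.
    """
    # Step 2: candidates = random observed pose + random vector of magnitude N.
    idx = rng.integers(len(observations), size=n_candidates)
    directions = rng.standard_normal((n_candidates, observations.shape[1]))
    directions *= novelty / np.linalg.norm(directions, axis=1, keepdims=True)
    candidates = observations[idx] + directions

    # Step 3: choose the candidate whose distance to p is closest to E.
    dists = np.linalg.norm(candidates - p, axis=1)
    q = candidates[np.argmin(np.abs(dists - extension))]

    # Step 4: intermediate points between the straight line p->q and each
    # line point's nearest observation (pulled closer to the observation
    # for low novelty; this weighting is an assumption).
    ts = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]
    line = p + np.outer(ts, q - p)
    nearest = observations[
        np.argmin(np.linalg.norm(
            observations[None, :, :] - line[:, None, :], axis=2), axis=1)]
    pull = 1.0 / (1.0 + novelty)  # -> 1 as N -> 0
    intermediate = (1 - pull) * line + pull * nearest

    # Step 5 (substitute): the paper smooths with spline interpolation;
    # a moving average over interior points stands in here.
    path = np.vstack([p, intermediate, q])
    kernel = np.ones(3) / 3.0
    smoothed = path.copy()
    smoothed[1:-1] = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, path)
    return smoothed, q

# Step 1: depart from a random observed pose.
p = observations[rng.integers(len(observations))]
# Step 6: chain trajectories by making each destination the next departure,
# yielding a continuous stream of movement.
stream = []
for _ in range(3):
    path, p = next_trajectory(p, observations, novelty=0.5, extension=2.0)
    stream.append(path)
```

Raising `novelty` pushes destinations away from observed poses and weakens the pull toward the pose map, which matches the paper's description of the novelty parameter's role in the generated movement.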