Generating Music Medleys via Playing Music Puzzle Games
Authors: Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our result shows that the resulting model, dubbed as the similarity embedding network (SEN), performs better than competing models across different games, including music jigsaw puzzle, music sequencing, and music medley. Example results can be found at our project website, https://remyhuang.github.io/DJnet." Supporting evidence: Table 2 (pairwise and global accuracy on the n = 3 fixed-length jigsaw puzzle); Table 3 (accuracy on music jigsaw puzzles with fixed-length vs. downbeat-informed segmentation and different numbers of fragments); Table 4 (SEN accuracy on three kinds of puzzle game for two segmentation methods); Table 5 (results for ablated versions of SEN on the different music puzzle games). |
| Researcher Affiliation | Academia | Research Center for IT innovation, Academia Sinica, Taiwan Graduate Institute of Networking and Multimedia, National Taiwan University, Taiwan |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | Example results can be found at our project website, https://remyhuang.github.io/DJnet. (This link hosts example results; the paper does not state that it provides source code for the methodology.) |
| Open Datasets | Yes | For music sequencing, we use the popular music subset of the RWC database (Goto et al. 2002), which contains 100 complete songs with manually labeled section boundaries (Goto 2006). |
| Dataset Splits | Yes | Moreover, we randomly pick 6,000 songs as validation set, 6,000 songs for testing, and the remaining 19,377 clips for training. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | To this end, we use the implementation of a state-of-the-art recurrent neural network available in the Python library madmom (Böck et al. 2016). (No version is specified for madmom or Python.) |
| Experiment Setup | Yes | All networks use rectified linear unit (ReLU) as the activation function everywhere. Lastly, all the models are trained using stochastic gradient descent with momentum 0.9, with batch size set to 16. |
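The dataset-split row above quotes a random partition into 6,000 validation songs, 6,000 test songs, and 19,377 training clips (31,377 items total). A minimal sketch of such a random split is shown below; the function name, the seed, and the use of clip indices are assumptions for illustration, not the authors' code.

```python
import random

def split_dataset(clip_ids, n_val=6000, n_test=6000, seed=0):
    """Randomly partition clip_ids into train/validation/test subsets."""
    ids = list(clip_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    val = ids[:n_val]
    test = ids[n_val:n_val + n_test]
    train = ids[n_val + n_test:]  # everything left over is the training set
    return train, val, test

train, val, test = split_dataset(range(31377))
print(len(train), len(val), len(test))  # 19377 6000 6000
```

Fixing the shuffle seed keeps the split reproducible across runs, which matters when comparing models trained at different times.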
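The experiment-setup row quotes two concrete choices: ReLU activations and SGD with momentum 0.9. A minimal NumPy sketch of both update rules is below; this is a generic illustration of the quoted configuration, not the authors' SEN implementation, and the learning rate is an assumed value the paper does not state.

```python
import numpy as np

def relu(x):
    """Rectified linear unit: elementwise max(0, x)."""
    return np.maximum(0.0, x)

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD update with classical momentum (momentum = 0.9 as in the paper)."""
    velocity = momentum * velocity - lr * grad  # accumulate a decaying gradient history
    return w + velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
grad = np.array([0.5, -0.5])
w, v = sgd_momentum_step(w, grad, v)
print(w)  # [ 0.995 -1.995]
```

With zero initial velocity the first step reduces to plain SGD; on later steps the momentum term smooths the trajectory across minibatches (batch size 16 in the paper).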