Learning to Uncover Deep Musical Structure

Authors: Phillip Kirlin, David Jensen

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This represents the first large-scale data-driven computational approach to hierarchical music analysis. We use the largest set of encoded music analyses in existence to develop a probabilistic model of the rules of music analysis, deploy this model in an algorithm that can identify the most likely analysis for a piece of music, and evaluate the system using multiple metrics, including a study with human experts comparing the algorithmic output head-to-head against published analyses from textbooks. We now evaluate the quality of our probabilistic model of music analysis by studying the analyses that the PARSEMOP algorithms produce for the music in the SCHENKER41 corpus.
Researcher Affiliation | Academia | Phillip B. Kirlin, Department of Mathematics and Computer Science, Rhodes College, Memphis, Tennessee 38112; David D. Jensen, School of Computer Science, University of Massachusetts Amherst, Amherst, Massachusetts 01003
Pseudocode | No | The paper describes the PARSEMOP algorithm conceptually and its relation to the CYK algorithm, but it does not include a pseudocode block or a clearly labeled algorithm section.
Open Source Code | No | The paper does not contain any statement about making the source code for the described methodology publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | In particular, we use the SCHENKER41 data set (Kirlin 2014a), which contains 41 excerpts of music and a corresponding Schenkerian analysis for each excerpt.
Dataset Splits | Yes | Due to the difficulty of finding additional musical excerpts with corresponding analyses to use as a testing data set, coupled with the small size of the corpus (41 musical excerpts), we used a leave-one-out cross-validation approach for training and testing in these experiments.
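The leave-one-out protocol quoted above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it assumes the 41 SCHENKER41 excerpts are simply indexed 0 through 40, with each excerpt held out once as the test item while the model trains on the other 40.

```python
def leave_one_out_splits(n_items):
    """Yield (train_indices, held_out_index) pairs: each item is the
    test set exactly once; all remaining items form the training set."""
    for held_out in range(n_items):
        train = [i for i in range(n_items) if i != held_out]
        yield train, held_out

# 41 excerpts -> 41 folds, each training on 40 excerpts
splits = list(leave_one_out_splits(41))
assert len(splits) == 41
assert all(len(train) == 40 for train, _ in splits)
```

With so small a corpus, leave-one-out maximizes the training data available per fold at the cost of running 41 separate train/test cycles.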
Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments (e.g., CPU, GPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions the use of 'random forests' and cites related work (Breiman 2001; Provost and Domingos 2003), and it specifies parameters such as '1,000 trees with a maximum depth of four', but it does not list any specific software dependencies with version numbers.
Experiment Setup | Yes | Random forests can be customized by controlling the number of trees in each forest, how many features are used per tree, and each tree's maximum depth. We use forests containing 1,000 trees with a maximum depth of four. We used Breiman's original idea of choosing a random selection of m = int(log2 M + 1) features to construct each individual tree in the forest, where M is the total number of features available to us. In our case, M = 16, so m = 5.
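The quoted setup fully determines the forest hyperparameters, which makes the arithmetic easy to check: with M = 16 features, Breiman's rule gives m = int(log2 16 + 1) = 5. The sketch below recomputes this and collects the paper's settings in a parameter dict; the dict keys are illustrative names, since the paper does not name a library or its API.

```python
import math

M = 16                         # total number of features available (from the paper)
m = int(math.log2(M) + 1)      # Breiman's rule: m = int(log2 M + 1) = 5 here

# Hypothetical configuration record mirroring the reported settings;
# the key names are our own, not from the paper.
forest_params = {
    "n_trees": 1000,           # forest size
    "max_depth": 4,            # per-tree depth limit
    "features_per_tree": m,    # random feature subset per tree
}
```

Note that log2(16) = 4 exactly, so no floor effects come into play; for a non-power-of-two M, int() truncates toward zero.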