A Machine Learning Approach to Musically Meaningful Homogeneous Style Classification

Authors: William Herlands, Ricky Der, Yoel Greenberg, Simon Levin

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present a supervised machine learning system which addresses the difficulty of differentiating between stylistically homogeneous composers using foundational elements of music, their complexity and interaction. Our work expands on previous style classification studies by developing more complex features as well as introducing a new class of musical features which focus on local irregularities within musical scores. We demonstrate the discriminative power of the system as applied to Haydn's and Mozart's string quartets. Our results yield interpretable musicological conclusions about Haydn's and Mozart's stylistic differences while distinguishing between the composers with higher accuracy than previous studies in this domain.
Researcher Affiliation | Academia | William Herlands, Electrical Engineering, Princeton University, Princeton, NJ 08544, herlands@princeton.edu; Ricky Der, Dept. of Mathematics, University of Pennsylvania, Philadelphia, PA 19104, rickyder@sas.upenn.edu; Yoel Greenberg, Dept. of Music, Bar-Ilan University, Ramat Gan, Israel, yoel.greenberg@biu.ac.il; Simon Levin, Ecology & Evolutionary Biology, Princeton University, Princeton, NJ 08544, slevin@princeton.edu
Pseudocode | No | The paper describes its methods in narrative text and includes mathematical formulas, but it does not present any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | No | Data were provided by Music21 (Cuthbert and Ariza 2010). Due to the limited number of Haydn's and Mozart's scores as well as limitations in the Music21 database, we used 49 of Haydn's and all 23 of Mozart's string quartets. Additionally, we included 2 Mozart flute quartets to increase our sample size. (An illustrative corpus-loading sketch follows the table below.)
Dataset Splits | Yes | The dataset was balanced by under-sampling Haydn to choose 25 quartets for each run. Training and testing sets were formed by 80/20 cross-validation, and averaging the results over all runs provided the reported figures. Feature selection was performed by correlating feature values (generally real-valued) with the class label over the training data. Using the training data, an inner cross-validation loop further subdivided the data into smaller sets; on these validation sets, five standard classifiers were iteratively trained and tested on the reduced feature set in order to select an optimal classifier. (Sketches of the feature selection and nested cross-validation appear after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or processing units) used to run the experiments.
Software Dependencies | No | Experiments used the scikit-learn package (Pedregosa et al. 2011).
Experiment Setup | No | The paper describes the classification methodology and the types of classifiers used, as well as the feature selection process and cross-validation strategy, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed system-level training settings.
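
The Open Datasets entry points to the Music21 corpus. As a rough illustration only, not the authors' code: the paper does not enumerate the exact 49 Haydn quartets, 23 Mozart string quartets, and 2 Mozart flute quartets, so the composer queries below are stand-ins, and the sketch assumes music21's corpus.search(query, field) interface.

```python
# Illustrative only: pulling scores from the music21 corpus named in the table.
from music21 import corpus  # Music21 (Cuthbert and Ariza 2010)

def load_scores(composer_query):
    """Parse every corpus entry whose composer field matches the query.

    The quartet subset actually used in the paper is not listed on this page,
    so the query is a placeholder.
    """
    bundle = corpus.search(composer_query, 'composer')
    return [entry.parse() for entry in bundle]

haydn_scores = load_scores('haydn')
mozart_scores = load_scores('mozart')
print(len(haydn_scores), 'Haydn entries;', len(mozart_scores), 'Mozart entries')
```

Each parsed Score exposes parts, measures, and notes, from which the foundational musical elements the abstract mentions could be computed.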
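The Dataset Splits entry says features were selected by correlating (generally real-valued) feature values with the class label over the training data. A minimal sketch of that step, assuming Pearson correlation and a hypothetical top_k cutoff that the paper does not specify:

```python
# Correlation-based feature selection: rank features by |Pearson correlation|
# with the binary class label, computed on the training data only.
import numpy as np

def select_features_by_correlation(X_train, y_train, top_k=20):
    """Return indices of the top_k features most correlated with the label."""
    y = np.asarray(y_train, dtype=float)
    y_centered = y - y.mean()
    scores = []
    for j in range(X_train.shape[1]):
        x = X_train[:, j].astype(float)
        x_centered = x - x.mean()
        denom = np.linalg.norm(x_centered) * np.linalg.norm(y_centered)
        corr = 0.0 if denom == 0 else float(x_centered @ y_centered) / denom
        scores.append(abs(corr))
    # Highest absolute correlation first; keep the top_k indices.
    return np.argsort(scores)[::-1][:top_k]
```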
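The same entry describes under-sampling Haydn to 25 quartets per run, forming 80/20 cross-validation splits, and selecting among five standard classifiers with an inner cross-validation loop. The sketch below strings those steps together with scikit-learn, the package named under Software Dependencies, and reuses select_features_by_correlation from the previous sketch. The five candidate classifiers, the 5-fold realization of the 80/20 split, and the 3-fold inner loop are assumptions; the paper (as summarized here) does not name them.

```python
# Sketch of the evaluation protocol, under the assumptions stated above.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Placeholder candidate set: the paper says "five standard classifiers"
# without naming them on this page.
CANDIDATES = [LogisticRegression(max_iter=1000), SVC(), KNeighborsClassifier(),
              DecisionTreeClassifier(), GaussianNB()]

def run_once(X, y, rng):
    # Balance the classes: under-sample Haydn (label 0) to 25 quartets.
    haydn_idx = rng.permutation(np.where(y == 0)[0])[:25]
    mozart_idx = np.where(y == 1)[0]
    idx = np.concatenate([haydn_idx, mozart_idx])
    Xb, yb = X[idx], y[idx]

    accs = []
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # 80/20 splits
    for train, test in outer.split(Xb, yb):
        # Feature selection on the training fold only.
        keep = select_features_by_correlation(Xb[train], yb[train])
        Xtr, Xte = Xb[train][:, keep], Xb[test][:, keep]
        # Inner loop: pick the classifier with the best inner-CV accuracy.
        best = max(CANDIDATES,
                   key=lambda clf: cross_val_score(clf, Xtr, yb[train], cv=3).mean())
        best.fit(Xtr, yb[train])
        accs.append(best.score(Xte, yb[test]))
    return float(np.mean(accs))

# Example usage: acc = run_once(X, y, np.random.default_rng(0))
```

Selecting both the feature subset and the classifier inside each training fold keeps the held-out 20% untouched, which matches the nested protocol the Dataset Splits entry describes.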