Deep Representation-Decoupling Neural Networks for Monaural Music Mixture Separation

Authors: Zhuo Li, Hongwei Wang, Miao Zhao, Wenjie Li, Minyi Guo

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on a real-world dataset, we prove that DRDNN outperforms state-of-the-art baselines in the task of monaural music mixture separation and reconstruction.
Researcher Affiliation | Academia | 1 The Hong Kong Polytechnic University, {cszhuoli, csmiaozhao, cswjli}@comp.polyu.edu.hk; 2 Shanghai Jiao Tong University, wanghongwei55@gmail.com, myguo@sjtu.edu.cn
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | We use the public Demixing Secrets Dataset 100 (DSD100) to evaluate our DRDNN model. This dataset is publicly released for the purpose of helping researchers and scholars to evaluate their source separation methods from music recordings. (Footnote 1: https://sisec.inria.fr/home/2016-professionally-produced-music-recordings/)
Dataset Splits | No | The paper states 'The dataset is partitioned into training set and test set with 9 : 1 ratio.' It does not mention a separate validation set. (A minimal split sketch appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions various models and transformations (e.g., STFT, CNN, LSTM, MLP) but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | In the experiment, we select the Blackman-Harris window to perform STFT, which has large side lobes compared to other common windows. The window size is set to 1024 samples. For each window, we use the 1024-point fast Fourier transform (1024-point FFT), and the hop size H is set to 256. In spectrogram segmentation, the whole spectrogram is split into batches, and the length of each batch is 50 time frames, with 30% overlap between two consecutive batches. (A preprocessing sketch follows this table.)
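
The Experiment Setup row describes the STFT front end and the batch segmentation in prose only; the paper ships no code. Below is a minimal sketch of that preprocessing, assuming librosa/NumPy and a mono mixture file. The function names (`mixture_spectrogram`, `segment_spectrogram`) and the example path are illustrative, not taken from the paper.

```python
import numpy as np
import librosa

# Settings quoted in the paper: Blackman-Harris window, 1024-sample window,
# 1024-point FFT, hop size 256, batches of 50 frames with 30% overlap.
N_FFT = 1024
WIN_LENGTH = 1024
HOP_LENGTH = 256
BATCH_FRAMES = 50
OVERLAP = 0.3

def mixture_spectrogram(path):
    """Load a mono mixture and return its magnitude spectrogram (freq bins x time frames)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    stft = librosa.stft(y, n_fft=N_FFT, hop_length=HOP_LENGTH,
                        win_length=WIN_LENGTH, window="blackmanharris")
    return np.abs(stft), sr

def segment_spectrogram(mag, batch_frames=BATCH_FRAMES, overlap=OVERLAP):
    """Split the spectrogram along time into fixed-length batches with the stated overlap."""
    step = max(1, int(round(batch_frames * (1.0 - overlap))))  # 50 * 0.7 = 35 frames
    starts = range(0, mag.shape[1] - batch_frames + 1, step)
    batches = [mag[:, s:s + batch_frames] for s in starts]
    return np.stack(batches) if batches else np.empty((0, mag.shape[0], batch_frames))

# Hypothetical usage:
# mag, sr = mixture_spectrogram("DSD100/Mixtures/Dev/001/mixture.wav")
# batches = segment_spectrogram(mag)   # shape: (num_batches, 513, 50)
```

With a 1024-point FFT the spectrogram has 513 frequency bins, and a 30% overlap on 50-frame batches corresponds to a hop of 35 frames between consecutive batches.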
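
For the Dataset Splits row, the paper reports only a 9:1 train/test partition of DSD100 and no validation set. A minimal sketch of such a split, assuming a track-level partition and a fixed random seed (both assumptions not stated in the paper):

```python
import random

def split_tracks(track_ids, train_ratio=0.9, seed=0):
    """Partition track identifiers into train/test sets at a 9:1 ratio.
    The seed and the track-level granularity are assumptions; the paper
    does not describe how the split was drawn, nor a validation set."""
    rng = random.Random(seed)
    ids = list(track_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# Example: the 100 DSD100 tracks -> 90 for training, 10 for testing
# train_ids, test_ids = split_tracks(range(100))
```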