Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Independency Adversarial Learning for Cross-Modal Sound Separation
Authors: Zhenkai Lin, Yanli Ji, Yang Yang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments certify that our approach outperforms existing approaches in supervised and unsupervised scenarios. ... Experiments Datasets and Evaluation Metrics ... Comparison with SOTA Approaches ... Ablation Study |
| Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, UESTC, China; ²Shenzhen Institute for Advanced Study, UESTC, China; ³Institute of Electronic and Information Engineering of UESTC in Guangdong, China |
| Pseudocode | No | The paper describes the approach and its components but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a statement or link releasing source code for the described method. |
| Open Datasets | Yes | We evaluate our proposed approach on three datasets, MUSIC, VGGSound, and Audio Set. MUSIC (Zhao et al. 2018) ... VGGSound (Chen et al. 2020) ... Audio Set (Gemmeke et al. 2017) |
| Dataset Splits | No | The paper mentions datasets such as MUSIC-Solo and Synthetic-Duet used for evaluation but does not specify the exact training/validation/test split percentages or sample counts needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or other system specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions the 'mir_eval library (Raffel et al. 2014)' but does not provide version numbers for this or any other software dependency. |
| Experiment Setup | No | The paper describes the training process and loss functions (L_D, L_G, L_sep) but does not provide specific hyperparameter values such as learning rate, batch size, or number of epochs, nor specific optimizer settings. |