Unsupervised Noise Adaptive Speech Enhancement by Discriminator-Constrained Optimal Transport

Authors: Hsin-Yi Lin, Huan-Hsin Tseng, Xugang Lu, Yu Tsao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on two SE tasks demonstrate that by extending the classical OT formulation, our proposed DOTN outperforms previous adversarial domain adaptation frameworks in a purely unsupervised manner. Our approach was applied to two standardized SE tasks, namely Voice Bank-DEMAND and TIMIT, and achieved superior adaptation performance in terms of both Perceptual Evaluation of Speech Quality (PESQ) and Short-Time Objective Intelligibility (STOI) scores.
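As background for the "classical OT formulation" that DOTN extends, the sketch below solves a tiny entropic-regularized optimal transport problem with Sinkhorn iterations. This is generic textbook OT, not the authors' discriminator-constrained variant; the function name, toy cost matrix, and parameter values are illustrative assumptions.

```python
import math

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Entropic-regularized OT via Sinkhorn iterations (illustrative sketch).

    cost: n x m cost matrix (list of lists)
    a, b: source and target marginals (lists summing to 1)
    eps:  entropic regularization strength
    Returns the transport plan P with row sums ~ a and column sums ~ b.
    """
    n, m = len(a), len(b)
    # Gibbs kernel K = exp(-cost / eps)
    K = [[math.exp(-cost[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(n_iters):
        # Alternating scaling: u <- a / (K v), then v <- b / (K^T u)
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan P_ij = u_i * K_ij * v_j
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Toy example: two 2-point distributions; matching identical points is free.
cost = [[0.0, 1.0], [1.0, 0.0]]
plan = sinkhorn(cost, [0.5, 0.5], [0.5, 0.5])
```

In the paper's setting the cost would be defined between source- and target-domain speech features, with the discriminator constraining the transport; here the plan simply concentrates mass on the cheap diagonal pairings.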
Researcher Affiliation | Academia | Hsin-Yi Lin: Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado, Boulder, CO 80309, USA, and NOAA Physical Sciences Laboratory, Boulder, CO 80305, USA (hylin@colorado.edu). Huan-Hsin Tseng: Research Center for Information Technology Innovation, Academia Sinica, Taiwan (htseng@citi.sinica.edu.tw). Xugang Lu: National Institute of Information and Communications Technology (NICT), Japan (xugang.lu@nict.go.jp). Yu Tsao: Research Center for Information Technology Innovation, Academia Sinica, Taiwan (yu.tsao@citi.sinica.edu.tw).
Pseudocode | Yes | Algorithm 1 presents DOTN, the proposed algorithm, in pseudocode.
Open Source Code | Yes | The implementations are summarized in the supplementary material and the code is available on GitHub: https://github.com/hsinyilin19/Discriminator-Constrained-Optimal-Transport-Network
Open Datasets | Yes | We evaluated our method in SE on two datasets: the Voice Bank corpus [43] and TIMIT [44].
Dataset Splits | No | The paper describes source-domain and target-domain data used for training/adaptation and evaluation, but it does not specify a separate validation set or explicit train/validation/test percentages beyond describing the composition of the source and target domains.
Hardware Specification | No | The paper does not report the hardware used to run the experiments, such as specific GPU or CPU models or memory capacity.
Software Dependencies | No | The paper states that the implementations are summarized in the supplementary material and that code is available, but it does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | No | Algorithm 1 lists parameters such as 'c', 'm', 'nf', 'nh', and 'ns', which are relevant to the experimental setup, but concrete values for these hyperparameters are not given in the main text; the paper instead states that 'The implementations are summarized in the supplementary material.'