Neural Separation of Observed and Unobserved Distributions

Authors: Tavi Halperin, Ariel Ephrat, Yedid Hoshen

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Extensive experiments on audio and image separation tasks show that our method outperforms current methods that use the same level of supervision, and often achieves similar performance to full supervision.'
Researcher Affiliation | Collaboration | 1 Department of Computer Science, The Hebrew University of Jerusalem, Jerusalem, Israel; 2 Google Research; 3 Facebook AI Research.
Pseudocode | Yes | 'Algorithm 1 Neural Egg Separation (NES)' (a hedged sketch of this loop appears after the table)
Open Source Code | No | The paper does not state that source code for the methodology is provided, nor does it include a link to a code repository.
Open Datasets | Yes | 'We split the MNIST dataset (LeCun & Cortes, 2010) [...] Handbags (Zhu et al., 2016) and Shoes (Yu & Grauman, 2014) datasets [...] Oxford-BBC Lip Reading in the Wild (LRW) Dataset (Chung & Zisserman, 2016) [...] ESC-50 (Piczak, 2015) [...] MUSDB18 Dataset (Rafii et al., 2017)'
Dataset Splits | No | The paper describes training and test sets; for MNIST: 'We use 12k B training images as the B training set, while for each of the other 12k B training images, we randomly sample a X image and additively combine the images to create the Y training set. We evaluate the performance of our method on 5000 Y images similarly created from the test set of X and B.' It does not, however, define a validation set or a comprehensive train/validation/test split methodology for all experiments. (The second sketch after this table illustrates the mixture construction.)
Hardware Specification | No | The paper does not report the hardware used for the experiments (e.g., GPU or CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper does not specify software dependencies with version numbers.
Experiment Setup | Yes | 'For optimization, we use SGD using ADAM update with a learning rate of 0.001. In total we perform N = 10 iterations, each consisting of optimization of T and estimation of x_t. P = 25 epochs are used for each optimization of Eq. 3.'
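
Taken together, the Pseudocode and Experiment Setup rows pin down the shape of the NES training loop: N = 10 outer iterations, each optimizing the separation network T for P = 25 epochs with Adam at learning rate 0.001, followed by re-estimation of the X components. Below is a minimal PyTorch sketch of that loop; the L1 loss standing in for Eq. 3, the 0.5·y initialization of the X estimates, and the full-batch updates are all assumptions, not details confirmed by the table above.

```python
import torch
from torch import nn

def neural_egg_separation(T, y_train, b_train, n_iters=10, epochs=25, lr=1e-3):
    """Sketch of Algorithm 1 (NES): alternate between training the separation
    network T on synthetic mixtures (current X estimates plus clean B samples)
    and re-estimating the X part of the real mixtures."""
    x_hat = 0.5 * y_train.clone()                  # initialization is an assumption
    opt = torch.optim.Adam(T.parameters(), lr=lr)  # "SGD using ADAM update"
    loss_fn = nn.L1Loss()                          # loss for Eq. 3 is an assumption
    for _ in range(n_iters):                       # N = 10 iterations
        for _ in range(epochs):                    # P = 25 epochs per optimization of Eq. 3
            # Pair each current X estimate with a random clean B sample
            # to form a synthetic mixture with a known target.
            idx = torch.randint(len(b_train), (len(x_hat),))
            y_synth = x_hat + b_train[idx]
            loss = loss_fn(T(y_synth), x_hat)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                      # estimation step: x_t = T(y)
            x_hat = T(y_train)
    return T, x_hat
```

A full-batch gradient step stands in for an "epoch" here to keep the sketch short; a faithful run would mini-batch over the Y training set.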
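
The Dataset Splits row quotes the MNIST protocol: half of the B training images stay clean, and each image in the other half is additively combined with a randomly sampled X image to form the Y training set. A NumPy sketch of that construction follows; the array names are hypothetical, and the [0, 1] pixel range and clipping of saturated sums are assumptions not stated in the quote.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mixtures(b_images, x_images):
    """Additively combine each B image with a randomly drawn X image."""
    idx = rng.integers(0, len(x_images), size=len(b_images))
    # Clipping to the valid pixel range is an assumption; the quote above
    # does not say how saturated sums are handled.
    return np.clip(b_images + x_images[idx], 0.0, 1.0)

# Hypothetical stand-ins for the B-class and X-class digit arrays,
# shaped (N, 28, 28) with pixel values in [0, 1].
b_all = rng.random((24000, 28, 28))
x_all = rng.random((12000, 28, 28))

b_clean = b_all[:12000]                        # observed B training set
y_train = make_mixtures(b_all[12000:], x_all)  # mixed Y training set
```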