Wasserstein Dependency Measure for Representation Learning
Authors: Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aäron van den Oord, Sergey Levine, Pierre Sermanet
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we empirically demonstrate that mutual information-based representation learning approaches do fail to learn complete representations on a number of designed and real-world tasks. |
| Researcher Affiliation | Collaboration | Sherjil Ozair (Mila, Université de Montréal); Corey Lynch (Google Brain); Yoshua Bengio (Mila, Université de Montréal); Aäron van den Oord (DeepMind); Sergey Levine (Google Brain); Pierre Sermanet (Google Brain) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks (a hedged sketch of the method follows this table). |
| Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the methodology described. |
| Open Datasets | Yes | We used the Omniglot dataset [31] as a base dataset to construct SpatialMultiOmniglot and StackedMultiOmniglot. ... Shapes3D [27] (Figure 1) is a dataset of images of a single object in a room. ... CelebA [34] is a dataset consisting of celebrity faces. |
| Dataset Splits | No | The paper mentions a fixed training dataset size (50,000 samples) but does not specify a separate validation or test split (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, or specific tools). |
| Experiment Setup | No | While the paper discusses the effect of minibatch size, it does not report concrete hyperparameter values or the system-level training configuration needed to reproduce the experimental setup. |
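Since the paper provides no pseudocode (see the Pseudocode row above), the following is a hedged reconstruction rather than the authors' code. The paper's central quantity is the Wasserstein dependency measure: the Wasserstein distance between the joint distribution and the product of the marginals, which Kantorovich-Rubinstein duality turns into an optimization over the set of 1-Lipschitz functions $\mathcal{L}_1$:

```latex
I_{\mathcal{W}}(X;Y)
  = \mathcal{W}\big(p(x,y),\, p(x)\,p(y)\big)
  = \sup_{f \in \mathcal{L}_1}\;
    \mathbb{E}_{p(x,y)}[f(x,y)] - \mathbb{E}_{p(x)p(y)}[f(x,y)]
```

In practice the paper approximates this with Wasserstein Predictive Coding: the standard CPC/InfoNCE objective with the critic constrained toward 1-Lipschitz. The sketch below is a minimal PyTorch illustration under stated assumptions: the names `LipschitzCritic` and `wpc_loss`, the layer sizes, and the use of spectral normalization as the Lipschitz constraint are choices made here for brevity (the paper discusses gradient-penalty-style constraints), not the authors' implementation.

```python
# A minimal sketch, not the authors' code: Wasserstein Predictive Coding as
# CPC/InfoNCE with a Lipschitz-constrained critic. Spectral normalization is
# an assumed stand-in for the paper's Lipschitz constraint.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.utils import spectral_norm

class LipschitzCritic(nn.Module):
    """Scores (x, y) pairs; spectral norm keeps each linear layer ~1-Lipschitz."""
    def __init__(self, x_dim: int, y_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(x_dim + y_dim, hidden)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden, 1)),
        )

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def wpc_loss(critic: LipschitzCritic, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """InfoNCE-style loss: score every (x_i, y_j) pair in the batch; the
    matching pair (the diagonal) is the positive for each row."""
    n = x.size(0)
    xi = x.unsqueeze(1).expand(-1, n, -1).reshape(n * n, -1)  # x_i repeated over j
    yj = y.unsqueeze(0).expand(n, -1, -1).reshape(n * n, -1)  # y_j repeated over i
    scores = critic(xi, yj).view(n, n)                        # scores[i, j] = f(x_i, y_j)
    return F.cross_entropy(scores, torch.arange(n, device=x.device))

# Toy usage on random data (dimensions are hypothetical).
critic = LipschitzCritic(x_dim=8, y_dim=8)
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randn(32, 8)
loss = wpc_loss(critic, x, y)
loss.backward()
opt.step()
```

Because each x_i is contrasted against the y_j of the same minibatch, an InfoNCE-style estimate is capped at log(batch size), which is why the paper studies the effect of minibatch size (see the Experiment Setup row above).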