Multimodal Generative Learning Utilizing Jensen-Shannon-Divergence

Authors: Thomas Sutter, Imant Daunhawer, Julia Vogt

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment — for each variable, the extracted result followed by the LLM's supporting response:

Research Type: Experimental
LLM Response: "In extensive experiments, we demonstrate the advantage of the proposed mmJSD model compared to previous work in unsupervised, generative learning tasks."

Researcher Affiliation: Academia
LLM Response: "Thomas M. Sutter, Imant Daunhawer, Julia E. Vogt; Department of Computer Science, ETH Zurich; {thomas.sutter,imant.daunhawer,julia.vogt}@inf.ethz.ch"

Pseudocode: No
LLM Response: The paper presents mathematical definitions and derivations (e.g., Definition 1, Lemma 1, Lemma 2) but does not include any blocks explicitly labeled 'Pseudocode' or 'Algorithm'.
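
Although the paper provides no algorithm block, the quantity at the heart of mmJSD is a generalized Jensen-Shannon divergence among the unimodal posteriors. The following is a minimal Monte Carlo sketch of that divergence, not the authors' implementation: the function name `js_divergence_mc`, the uniform weights, and the sample count are assumptions, and the paper's actual objective additionally involves a dynamic prior that this sketch omits.

```python
# Illustrative sketch only: a Monte Carlo estimator of the generalized
# Jensen-Shannon divergence JS_pi(p_1, ..., p_M) = sum_i pi_i * KL(p_i || m),
# where m = sum_j pi_j * p_j is the mixture distribution. This is NOT the
# paper's mmJSD objective (which uses a dynamic prior); the weights and
# sample count are assumptions.
import torch
from torch.distributions import Normal

def js_divergence_mc(dists, weights, n_samples=128):
    """Estimate JS_pi(p_1, ..., p_M) for a list of torch.distributions."""
    log_w = torch.log(weights)
    js = 0.0
    for w, p in zip(weights, dists):
        x = p.sample((n_samples,))          # x ~ p_i
        log_p = p.log_prob(x)               # log p_i(x)
        # log m(x) = logsumexp_j [log pi_j + log p_j(x)]
        log_mix = torch.logsumexp(
            torch.stack([lw + q.log_prob(x) for lw, q in zip(log_w, dists)]),
            dim=0,
        )
        js = js + w * (log_p - log_mix).mean()   # pi_i * KL(p_i || m)
    return js

# Example: JS divergence between two Gaussian "unimodal posteriors".
p1 = Normal(torch.tensor(0.0), torch.tensor(1.0))
p2 = Normal(torch.tensor(1.0), torch.tensor(0.5))
print(js_divergence_mc([p1, p2], weights=torch.tensor([0.5, 0.5])))
```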

Open Source Code: No
LLM Response: Footnote 2 states 'The code for our experiments can be found here.' However, the word 'here' is not accompanied by a concrete URL or repository link anywhere in the text of the PDF.

Open Datasets: Yes
LLM Response: "For the experiment we use a matching digits dataset consisting of MNIST [14] and SVHN [17] images with an additional text modality [22]. The second experiment is carried out on the challenging CelebA faces dataset [16] with additional text describing the attributes of the shown face."
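
As a minimal sketch of why the result above is 'Yes': the cited image datasets can be fetched through torchvision's standard dataset classes. The `./data` root is an arbitrary assumption, and neither the MNIST-SVHN pairing nor the text modality of [22] is provided by torchvision; those must be constructed separately.

```python
# Minimal sketch: fetching the openly available image datasets via torchvision.
# The root path is an assumption; the MNIST-SVHN pairing and the text modality
# from [22] are derived artifacts not provided here. Note that CelebA's hosted
# download is occasionally rate-limited.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
svhn = datasets.SVHN(root="./data", split="train", download=True, transform=to_tensor)
celeba = datasets.CelebA(root="./data", split="train", download=True, transform=to_tensor)
print(len(mnist), len(svhn), len(celeba))
```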

Dataset Splits: No
LLM Response: The paper mentions an 'unimodal training set' in Section 4.1 but does not provide specific percentages or counts for training, validation, or test splits, nor does it refer to a predefined standard split for the combined datasets.
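
For contrast, the kind of explicit split specification the response above finds missing takes only a few lines to state. The sketch below is purely hypothetical: the 90/10 fractions, the seed, and the stand-in dataset are assumptions, not values from the paper.

```python
# Hypothetical illustration of an explicit train/validation split; the 90/10
# fractions and the seed are assumptions, not values reported in the paper.
import torch
from torch.utils.data import random_split, TensorDataset

dataset = TensorDataset(torch.randn(1000, 3))        # stand-in dataset
n_train = int(0.9 * len(dataset))
train_set, val_set = random_split(
    dataset,
    [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(0),      # fixed seed for reproducibility
)
print(len(train_set), len(val_set))
```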

Hardware Specification: No
LLM Response: The paper states 'Implementation details for all experiments together with a comparison of runtimes can be found in Appendix C.' However, no specific hardware, such as GPU or CPU models or cloud computing instances, is detailed in the main body of the paper.

Software Dependencies: No
LLM Response: The paper does not explicitly list any software dependencies with specific version numbers (e.g., Python 3.x, PyTorch 1.x) that would be needed to reproduce the experiments.

Experiment Setup: No
LLM Response: The paper states 'Implementation details for all experiments together with a comparison of runtimes can be found in Appendix C.' However, the main text does not include specific hyperparameters (e.g., learning rate, batch size, epochs) or detailed training configurations.
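
To make the gap concrete, a fully specified setup would pin values of the kind shown below. Every entry is a hypothetical placeholder for illustration; none of these numbers comes from the paper or its appendix.

```python
# Hypothetical placeholder configuration: none of these values come from the
# paper; they only illustrate what a fully specified experiment setup lists.
config = {
    "optimizer": "Adam",        # assumed optimizer
    "learning_rate": 1e-3,      # placeholder
    "batch_size": 256,          # placeholder
    "epochs": 100,              # placeholder
    "latent_dim": 20,           # placeholder
    "seed": 0,                  # placeholder
}
print(config)
```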