$k$-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension
Authors: Ziv Goldfeld, Kristjan Greenewald, Theshani Nuradha, Galen Reeves
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | All our results trivially apply to SMI by setting k = 1. Our theory is validated with numerical experiments and is applied to sliced InfoGAN, which altogether provide a comprehensive quantitative account of the scalability question of k-SMI, including SMI as a special case when k = 1. |
| Researcher Affiliation | Collaboration | Ziv Goldfeld (Cornell University, goldfeld@cornell.edu); Kristjan Greenewald (MIT-IBM Watson AI Lab, IBM Research, kristjan.h.greenewald@ibm.com); Theshani Nuradha (Cornell University, pt388@cornell.edu); Galen Reeves (Duke University, galen.reeves@duke.edu) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Partial: data and instructions are included, along with links to the InfoGAN experiment code, but the authors state they are not ready to release all code at submission time. |
| Open Datasets | Yes | Figure 4 (left) shows InfoGAN results for MNIST, where 3 latent codes (C1, C2, C3) were used for disentanglement, with C1 being a 10-state discrete variable and (C2, C3) being continuous variables with values in [-2, 2]. |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits needed to reproduce the experiment. |
| Hardware Specification | No | No large-scale experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | m parallel 3-layer ReLU NNs were used, each with 30k hidden units in each layer. [...] A 3-layer ReLU NN was used with 20d hidden units in each layer. |
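The Experiment Setup row describes 3-layer ReLU networks whose hidden width scales with the problem size (30k or 20d units per layer). As a minimal sketch of such a critic network in NumPy (the function names, initialization, and the choice of a 2k-dimensional input for the two projected variables are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def make_mlp(in_dim, hidden, rng):
    """Initialize a 3-hidden-layer MLP with a scalar output head."""
    dims = [in_dim, hidden, hidden, hidden, 1]
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Apply ReLU after every layer except the final linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Illustrative sizing: with k projected coordinates per variable, the
# critic sees a 2k-dimensional input; hidden width is 30 * k as in the
# paper's description.
k = 5
rng = np.random.default_rng(0)
params = make_mlp(2 * k, 30 * k, rng)
out = forward(params, rng.standard_normal((4, 2 * k)))
print(out.shape)  # (4, 1)
```

This only sketches the architecture's shape; the paper's training objective, optimizer, and the m-fold parallel replication are not reproduced here.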