Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

$k$-Sliced Mutual Information: A Quantitative Study of Scalability with Dimension

Authors: Ziv Goldfeld, Kristjan Greenewald, Theshani Nuradha, Galen Reeves

NeurIPS 2022 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "All our results trivially apply to SMI by setting k = 1. Our theory is validated with numerical experiments and is applied to sliced InfoGAN, which altogether provide a comprehensive quantitative account of the scalability question of k-SMI, including SMI as a special case when k = 1."
Researcher Affiliation | Collaboration | Ziv Goldfeld (Cornell University); Kristjan Greenewald (MIT-IBM Watson AI Lab, IBM Research); Theshani Nuradha (Cornell University); Galen Reeves (Duke University)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | "Partial data and instructions are included, along with links to the InfoGAN experiment code, but we are not ready to release all code at submission time."
Open Datasets | Yes | Figure 4 (left) shows InfoGAN results for MNIST, where 3 latent codes (C1, C2, C3) were used for disentanglement, with C1 a 10-state discrete variable and (C2, C3) continuous variables with values in [-2, 2].
Dataset Splits | No | The paper does not provide the training/validation/test dataset splits needed to reproduce the experiments.
Hardware Specification | No | No large-scale experiments were run.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "m parallel 3-layer ReLU NNs were used, each with 30k hidden units in each layer. [...] A 3-layer ReLU NN was used with 20d hidden units in each layer."
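The Experiment Setup and Open Datasets rows can be sketched as follows. This is a hedged illustration only, not the authors' released code: the function names, weight initialization, and use of NumPy are assumptions, "30k hidden units" is read as a width of 30 times the projection dimension k, and the example values of k, m, and d are placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def make_mlp(in_dim, hidden, out_dim=1, rng=None):
    """3-layer ReLU MLP as a list of (W, b) pairs (sketch, not the paper's code)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dims = [in_dim, hidden, hidden, hidden, out_dim]
    return [(rng.standard_normal((i, j)) / np.sqrt(i), np.zeros(j))
            for i, j in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """Apply ReLU layers, then a final linear layer."""
    for W, b in params[:-1]:
        x = relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# m parallel critics, one per sampled k-dimensional projection pair;
# each critic sees a pair of projected samples, hence input size 2 * k.
k, m = 2, 5
critics = [make_mlp(2 * k, 30 * k) for _ in range(m)]
scores = forward(critics[0], np.zeros((4, 2 * k)))  # shape (4, 1)

# Latent codes from the Open Datasets row: C1 is a 10-state discrete
# variable, (C2, C3) are continuous with values in [-2, 2].
rng = np.random.default_rng(1)
c1 = rng.integers(0, 10)
c2, c3 = rng.uniform(-2.0, 2.0, size=2)
```

The per-projection critics are kept as independent parameter sets here purely for clarity; any batched implementation of m parallel networks would serve the same role.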