Estimators for Multivariate Information Measures in General Probability Spaces

Authors: Arman Rahimzamani, Himanshu Asnani, Pramod Viswanath, Sreeram Kannan

NeurIPS 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We show that our proposed estimators significantly outperform known estimators on synthetic and real datasets." |
| Researcher Affiliation | Academia | Arman Rahimzamani (armanrz@uw.edu), Himanshu Asnani (asnani@uw.edu), and Sreeram Kannan (ksreeram@uw.edu), Department of ECE, University of Washington; Pramod Viswanath (pramodv@illinois.edu), Department of ECE, University of Illinois at Urbana-Champaign. |
| Pseudocode | Yes | Algorithm 1: Estimating Graph Divergence Measure GDM(X, G). A hedged implementation sketch follows this table. |
| Open Source Code | No | The paper neither states that source code is released nor links to a code repository for its methodology. |
| Open Datasets | No | The paper primarily uses simulated data generated from described models: an X-Z-Y Markov chain, an Additive White Gaussian Noise (AWGN) channel in parallel with a Binary Symmetric Channel (BSC), a stated generation procedure for samples of the variable X, and simulated developmental time series of various lengths (a sampler sketch follows this table). For the gene regulatory network experiment, the data are "based on a model from [52]", with no link or direct access to a public dataset. |
| Dataset Splits | No | The paper does not provide dataset split information (percentages, sample counts, or references to predefined splits) for training, validation, or testing. It primarily evaluates performance as the total number of samples varies. |
| Hardware Specification | No | The paper does not specify any hardware (e.g., CPU or GPU models, or memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list software dependencies or version numbers. It notes that "A more detailed discussion can be found in Section G.", but Section G is not included in the document. |
| Experiment Setup | No | While the paper gives data-generation parameters (e.g., setting α1 = 0.9, α2 = 0.8, and α3 = 0.7), it does not report hyperparameters or system-level settings for the estimators themselves, such as the concrete value of k used in Algorithm 1 for the numerical results or the parameters of baseline estimators like KSG (a k-sweep sketch follows this table). |
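
Since Algorithm 1 is available only as pseudocode, the following is a minimal sketch of the mixed-space kNN idea behind it, specialized to the two-variable case where the graph divergence measure reduces to mutual information I(X; Y). This is our reading under stated assumptions, not the authors' code: the function name `mixed_knn_mi` and the default `k=5` are our own choices, and the brute-force distance matrices are for clarity only.

```python
# Hypothetical sketch of a mixed discrete-continuous kNN MI estimator,
# in the spirit of the paper's Algorithm 1 restricted to two variables.
import numpy as np
from scipy.special import digamma

def mixed_knn_mi(x, y, k=5):
    """Estimate I(X; Y) from paired samples, tolerating discrete atoms."""
    n = len(x)
    x = np.asarray(x, dtype=float).reshape(n, -1)
    y = np.asarray(y, dtype=float).reshape(n, -1)

    # Pairwise max-norm distances in each marginal and in the joint space.
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)
    dxy = np.maximum(dx, dy)

    total = 0.0
    for i in range(n):
        d = np.delete(dxy[i], i)      # joint distances to the other n - 1 samples
        rho = np.sort(d)[k - 1]       # k-th nearest-neighbour distance
        # On a discrete atom (rho == 0) the neighbour count is taken by
        # plug-in over the tied samples instead of the fixed k.
        k_tilde = np.count_nonzero(d == 0) if rho == 0 else k
        n_x = np.count_nonzero(np.delete(dx[i], i) <= rho)
        n_y = np.count_nonzero(np.delete(dy[i], i) <= rho)
        total += digamma(k_tilde) + np.log(n) - np.log(n_x + 1) - np.log(n_y + 1)
    return total / n
```

The rho == 0 branch is what lets one formula cover discrete, continuous, and mixed coordinates: samples sitting on a discrete atom are handled by counting ties rather than by a density-style neighbour radius.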
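The datasets themselves are not published, but the generative models are described in prose. Below is a hedged reconstruction of one of them, the AWGN channel in parallel with a BSC. The routing probability, noise level, and crossover probability are illustrative placeholders, not the paper's settings, and the exact parallel-channel wiring is our guess at the described setup.

```python
# Hypothetical sampler for "an AWGN channel in parallel with a BSC".
# All parameter values below are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def awgn_parallel_bsc(n, p_bsc=0.5, sigma=0.1, crossover=0.1):
    """Draw (X, Y) where each sample routes through a BSC or an AWGN channel."""
    use_bsc = rng.random(n) < p_bsc
    x = np.where(use_bsc,
                 rng.integers(0, 2, n).astype(float),  # binary input, BSC branch
                 rng.normal(0.0, 1.0, n))              # continuous input, AWGN branch
    flips = rng.random(n) < crossover
    y_bsc = np.where(flips, 1.0 - x, x)                # BSC: flip the bit w.p. crossover
    y_awgn = x + rng.normal(0.0, sigma, n)             # AWGN: add Gaussian noise
    y = np.where(use_bsc, y_bsc, y_awgn)
    return x, y
```

Any such sampler yields a joint distribution of (X, Y) that mixes discrete atoms with a continuous component, which is exactly the regime where the paper claims its estimators outperform purely continuous baselines.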
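Finally, because the value of k used in Algorithm 1 is not reported, a reproduction attempt would reasonably sweep k and check that the estimate stabilizes. The snippet below does so using the two hypothetical helpers sketched above.

```python
# Sweep the neighbour count k, since the paper does not report the value
# used for its numerical results. Both helpers are the sketches above,
# not the authors' code.
x, y = awgn_parallel_bsc(n=1000)
for k in (3, 5, 10, 20):
    print(f"k={k:2d}  I_hat = {mixed_knn_mi(x, y, k=k):.3f}")
```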