Entropy Estimation via Normalizing Flow

Authors: Ziqiao Ao, Jinglai Li (pp. 9990-9998)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | Numerical experiments demonstrate the effectiveness of the method for high-dimensional entropy estimation problems.
Researcher Affiliation | Academia | School of Mathematics, University of Birmingham; {zxa029, j.li.10}@bham.ac.uk
Pseudocode | No | The paper refers to 'Algorithm 1' in the text but does not include a visible pseudocode or algorithm block within the provided content.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | No | The paper conducts experiments on synthetic data generated from specified distributions (e.g., standard multivariate normal, Rosenbrock distributions) rather than using a pre-existing, publicly available dataset with concrete access information. (A sampling sketch follows the table.)
Dataset Splits | No | The paper mentions splitting samples for UM construction and entropy estimation but does not provide specific details on train/validation/test splits (e.g., percentages, sample counts, or citations to predefined splits) needed for reproduction.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory, or cloud instances).
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks used).
Experiment Setup | Yes | To validate the idea of the UM-based entropy estimator, a natural question is how it performs with a perfect NF transformation, i.e., one that yields exactly normally distributed samples. To answer this question, we first conduct numerical tests with the standard multivariate normal distribution, corresponding to the situation in which a perfect NF has already been applied. Specifically, we test four methods (KL, KSG, UM-t KL and UM-t KSG) in two sets of tests: in the first we fix the sample size at 1000 and vary the dimensionality, while in the second we fix the dimensionality at 40 and vary the sample size. All tests are repeated 100 times and the root-mean-square error (RMSE) of the estimates is calculated. (See the sketch after this table.)
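
For context on the synthetic data mentioned in the Open Datasets row, here is a minimal sampling sketch. The standard multivariate normal case is exactly as named; for the Rosenbrock case the sketch assumes a "hybrid Rosenbrock"-style construction in which the first coordinate is Gaussian and each subsequent coordinate is conditionally Gaussian around the square of the previous one. The paper's exact Rosenbrock parametrization (values of a, b, dimensionality) is not given in this summary, so the choices below are placeholders.

```python
import numpy as np

def sample_standard_normal(n, d, rng):
    """n i.i.d. draws from the standard d-dimensional normal N(0, I_d)."""
    return rng.standard_normal((n, d))

def sample_hybrid_rosenbrock(n, d, a=1.0, b=5.0, rng=None):
    """Hypothetical sampler for a hybrid-Rosenbrock-type density
    pi(x) ~ exp(-a*x1^2 - sum_i b*(x_i - x_{i-1}^2)^2):
    x1 ~ N(0, 1/(2a)) and x_i | x_{i-1} ~ N(x_{i-1}^2, 1/(2b)).
    The parametrization used in the paper may differ."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.empty((n, d))
    x[:, 0] = rng.normal(0.0, np.sqrt(1.0 / (2.0 * a)), size=n)
    for i in range(1, d):
        x[:, i] = rng.normal(x[:, i - 1] ** 2, np.sqrt(1.0 / (2.0 * b)))
    return x

rng = np.random.default_rng(0)
gauss = sample_standard_normal(1000, 40, rng)        # e.g. 1000 samples in 40 dimensions
rosen = sample_hybrid_rosenbrock(1000, 10, rng=rng)  # placeholder dimensionality
```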
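
To make the Experiment Setup protocol concrete, the sketch below runs the Gaussian baseline test: draw samples from N(0, I_d), estimate the entropy with the classic Kozachenko-Leonenko (KL) kNN estimator, and report the RMSE over repeated runs against the exact value (d/2)·log(2πe). This is only an illustration of the test protocol, not the paper's code: the KSG and UM-t variants are omitted, and k = 1, Euclidean distance, and the dimensions 5/10/20/40 are assumptions.

```python
import numpy as np
from scipy.special import digamma, gammaln
from sklearn.neighbors import NearestNeighbors

def kl_entropy(x, k=1):
    """Kozachenko-Leonenko kNN entropy estimate (in nats)."""
    n, d = x.shape
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(x).kneighbors(x)
    eps = dist[:, k]                                   # distance to k-th neighbour (self excluded)
    log_unit_ball = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))

rng = np.random.default_rng(0)
n_samples, n_repeats = 1000, 100
for d in (5, 10, 20, 40):                              # fixed sample size, varying dimensionality
    true_h = 0.5 * d * np.log(2 * np.pi * np.e)        # exact entropy of N(0, I_d)
    errors = [kl_entropy(rng.standard_normal((n_samples, d))) - true_h
              for _ in range(n_repeats)]
    print(f"d={d:2d}  RMSE={np.sqrt(np.mean(np.square(errors))):.3f}")
```

The paper's second set of tests (fixed dimensionality 40, varying sample size) follows the same loop with n_samples varied instead of d.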