Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Computational Explorations of Total Variation Distance
Authors: Arnab Bhattacharyya, Sutanu Gayen, Kuldeep S. Meel, Dimitrios Myrisiotis, A. Pavan, N. V. Vinodchandran
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We investigate some previously unexplored (or underexplored) computational aspects of total variation (TV) distance. First, we give a simple deterministic polynomial-time algorithm for checking equivalence between mixtures of product distributions, over arbitrary alphabets. This corresponds to a special case, whereby the TV distance between the two distributions is zero. Second, we prove that unless NP = RP it is impossible to efficiently estimate the TV distance between arbitrary Ising models, even in a bounded-error randomized setting. |
| Researcher Affiliation | Academia | Arnab Bhattacharyya (University of Warwick); Sutanu Gayen (IIT Kanpur); Kuldeep S. Meel (University of Toronto and Georgia Institute of Technology); Dimitrios Myrisiotis (CNRS@CREATE LTD.); A. Pavan (Iowa State University); N. V. Vinodchandran (University of Nebraska-Lincoln) |
| Pseudocode | No | The paper describes algorithms (Algorithm E, Algorithm G) in prose, but does not present any formal pseudocode blocks or figures explicitly labeled as an algorithm. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to any code repositories. |
| Open Datasets | No | This paper is theoretical and focuses on computational aspects of total variation distance. It does not use or refer to any specific datasets, thus there is no information regarding their availability. |
| Dataset Splits | No | This paper is theoretical and does not involve experiments with datasets, therefore there is no mention of dataset splits (training, validation, test). |
| Hardware Specification | No | This paper is theoretical and does not describe any experimental evaluations that would require specific hardware. No hardware specifications are provided. |
| Software Dependencies | No | This paper is theoretical and focuses on algorithms and proofs rather than implementations. It does not mention any specific software or library dependencies with version numbers. |
| Experiment Setup | No | This paper is theoretical and does not include any experimental evaluations. Therefore, no experimental setup details, hyperparameters, or training configurations are provided. |
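For context on the quantity the paper studies, the following is a minimal illustrative sketch (not code from the paper, which releases none) of the total variation distance between two finite discrete distributions, d_TV(P, Q) = (1/2) Σ_x |P(x) − Q(x)|:

```python
# Total variation (TV) distance between two probability vectors
# over the same finite support:
#   d_TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)|
# Illustrative sketch only; the paper's results concern mixtures of
# product distributions and Ising models, not explicit vectors.

def tv_distance(p, q):
    """Return the TV distance between two probability vectors."""
    assert len(p) == len(q), "distributions must share a support"
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Identical distributions are at TV distance 0 -- the equivalence
# case handled by the paper's deterministic algorithm.
print(tv_distance([0.5, 0.5], [0.5, 0.5]))   # 0.0
# Disjoint supports attain the maximum distance of 1.
print(tv_distance([1.0, 0.0], [0.0, 1.0]))   # 1.0
```

Note that d_TV always lies in [0, 1]; the paper's hardness result says that even approximating this quantity for Ising models is intractable unless NP = RP.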