Knowledge Compilation Meets Communication Complexity

Authors: Simone Bova, Florent Capelli, Stefan Mengel, Friedrich Slivovsky

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we introduce a general technique for obtaining lower bounds on Decomposable Negation Normal Forms (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity. This allows us to directly translate lower bounds from the communication complexity literature into lower bounds on the size of DNNF representations. We use this approach to prove exponential separations of DNNFs from deterministic DNNFs and of CNF formulas from DNNFs." (See the DNNF sketch after this table.)
Researcher Affiliation | Academia | Simone Bova; Florent Capelli (Paris 7, IMJ-PRG); Stefan Mengel (CNRS, CRIL UMR 8188); Friedrich Slivovsky
Pseudocode | No | The paper contains no pseudocode or clearly labeled algorithm blocks; it consists of theoretical definitions and proofs.
Open Source Code | No | The paper does not provide access to source code for the described methodology.
Open Datasets | No | The paper centers on mathematical proofs and lower bounds for Boolean functions (e.g., the Sauerhoff function S_n) and reports no empirical experiments requiring datasets or training.
Dataset Splits | No | The paper involves no empirical experiments with datasets, so train/validation/test splits are not applicable.
Hardware Specification | No | The paper consists of mathematical proofs and mentions no hardware for running experiments.
Software Dependencies | No | The paper describes no software dependencies or version numbers relevant to experimental reproducibility.
Experiment Setup | No | The paper includes no experimental setup details such as hyperparameters or training configurations.
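
The abstract quoted above turns on the decomposability property that gives DNNFs their name: a circuit in negation normal form (negations only at the leaves, internal gates restricted to AND and OR) is decomposable if the children of every AND gate mention pairwise disjoint sets of variables. The sketch below illustrates that check in Python; the Node representation and all identifiers are assumptions made for this example, not code from the paper (which, per the table above, publishes none).

```python
from itertools import combinations

class Node:
    """A node of an NNF circuit: leaves are literals, internal gates are AND/OR."""
    def __init__(self, kind, children=(), var=None):
        self.kind = kind                # "lit", "and", or "or"
        self.children = list(children)
        self.var = var                  # variable name for literal leaves

def variables(node):
    """Set of variables mentioned in the subcircuit rooted at node."""
    if node.kind == "lit":
        return {node.var}
    return set().union(*(variables(c) for c in node.children))

def is_decomposable(node):
    """DNNF condition: the children of every AND gate have pairwise
    disjoint variable sets."""
    if node.kind == "and":
        for a, b in combinations(node.children, 2):
            if variables(a) & variables(b):
                return False
    return all(is_decomposable(c) for c in node.children)

# (x AND y) OR (NOT x AND z): negations sit at the leaves, so NOT x is
# simply a literal over the variable x, and both AND gates are decomposable.
x, y, z, not_x = (Node("lit", var=v) for v in ("x", "y", "z", "x"))
circuit = Node("or", [Node("and", [x, y]), Node("and", [not_x, z])])
assert is_decomposable(circuit)
```

As far as can be inferred from the quoted abstract alone, the lower-bound technique then has the following shape: a DNNF of size s computing a function f is related to a multi-partition rectangle cover of f with at most roughly s rectangles, so s >= C(f), where C(f) is the minimum cover size. Any function whose covers provably require exponentially many rectangles, a quantity studied in communication complexity, therefore requires exponentially large DNNFs. The precise statement and constants are in the paper itself.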