Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Some Inapproximability Results of MAP Inference and Exponentiated Determinantal Point Processes
Authors: Naoto Ohsaka
JAIR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the computational complexity of two hard problems on determinantal point processes (DPPs). One is maximum a posteriori (MAP) inference, i.e., to find a principal submatrix having the maximum determinant. The other is probabilistic inference on exponentiated DPPs (E-DPPs), which can sharpen or weaken the diversity preference of DPPs with an exponent parameter p. We present several complexity-theoretic hardness results that explain the difficulty in approximating MAP inference and the normalizing constant for E-DPPs. We first prove that unconstrained MAP inference for an n × n matrix is NP-hard to approximate within a factor of 2^{βn}, where β = 10^{-10^{13}}. |
| Researcher Affiliation | Industry | Naoto Ohsaka (EMAIL), CyberAgent, Inc., 40-1 Udagawacho, Shibuya-ku, Tokyo, Japan |
| Pseudocode | No | The paper describes mathematical constructions and proofs, such as lemmas and theorems, but does not include any sections explicitly labeled "Pseudocode" or "Algorithm", nor does it present structured steps in a code-like format. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, providing links to code repositories, or mentioning code availability in supplementary materials for the described methodology. |
| Open Datasets | No | This is a theoretical paper focusing on computational complexity and mathematical proofs related to determinantal point processes. It does not involve empirical experiments using specific datasets, and therefore, no datasets are mentioned as publicly available. |
| Dataset Splits | No | As a theoretical paper, it does not involve empirical experiments with datasets, and thus, there is no mention of dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and focuses on computational complexity and mathematical proofs. It does not describe any experimental setup, and therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical, dealing with computational complexity and mathematical proofs. It does not describe any experimental implementation or specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on computational complexity and mathematical proofs. It does not involve any experimental setup with hyperparameters, training configurations, or system-level settings. |
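To make concrete what the paper proves hard: unconstrained MAP inference for a DPP asks for the index set S maximizing det(L_S), the determinant of the principal submatrix of the kernel L restricted to S. The sketch below is a brute-force illustration of the problem statement only (exponential in n, and all names are ours) — it is not an algorithm from the paper, whose point is precisely that no efficient approximation exists under standard assumptions.

```python
from itertools import combinations

import numpy as np


def map_inference_brute_force(L):
    """Exhaustively find the subset S maximizing det(L[S, S]).

    The empty set has determinant 1 by convention, so any subset
    must beat 1.0 to be returned.
    """
    n = L.shape[0]
    best_set, best_det = (), 1.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            d = np.linalg.det(L[np.ix_(S, S)])
            if d > best_det:
                best_set, best_det = S, d
    return best_set, best_det


# Toy PSD kernel: diagonal entries are item "qualities", off-diagonal
# entries encode similarity (similar items are penalized when co-selected).
L = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 3.0]])
S, d = map_inference_brute_force(L)  # here the full set wins
```

Enumerating all 2^n subsets is the only generally correct strategy this sketch offers; the paper's 2^{βn}-factor inapproximability result says that even settling for a wildly approximate answer cannot avoid this kind of exponential cost unless P = NP.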