Lazy and Fast Greedy MAP Inference for Determinantal Point Process
Authors: Shinichi Hemmi, Taihei Oki, Shinsaku Sakaue, Kaito Fujii, Satoru Iwata
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments validate the effectiveness of our acceleration ideas. ... (Section 5, Experiments) We evaluate the effectiveness of our acceleration techniques on synthetic and real-world datasets. |
| Researcher Affiliation | Academia | Shinichi Hemmi, The University of Tokyo, Tokyo, Japan, hemmi.shinichi@gmail.com; Taihei Oki, The University of Tokyo, Tokyo, Japan, oki@mist.i.u-tokyo.ac.jp; Shinsaku Sakaue, The University of Tokyo, Tokyo, Japan, sakaue@mist.i.u-tokyo.ac.jp; Kaito Fujii, National Institute of Informatics, Tokyo, Japan, fujiik@nii.ac.jp; Satoru Iwata, The University of Tokyo, Tokyo, Japan and ICReDD, Hokkaido University, Sapporo, Japan, iwata@mist.i.u-tokyo.ac.jp |
| Pseudocode | Yes | Algorithm 1 FASTGREEDY [13] for cardinality-constrained DPP MAP inference ... Algorithm 2 LAZYFASTGREEDY for cardinality-constrained DPP MAP inference ... Algorithm 3 FASTDOUBLEGREEDY for unconstrained DPP MAP inference |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplementary material. ... The codes of the algorithms are provided in the supplementary material. |
| Open Datasets | Yes | We use synthetic and two real-world datasets, Netflix Prize [5] and MovieLens [25]. |
| Dataset Splits | No | No training is performed in our experiments. ... The paper evaluates algorithm running time and performance, not the training of a machine learning model, hence traditional training/validation/test splits are not applicable or specified. |
| Hardware Specification | Yes | Experiments are conducted using a compiler GCC 10.2.0 on a computer with 3.8 GHz Intel Xeon Gold CPU and 800 GB RAM. |
| Software Dependencies | Yes | The algorithms are implemented in C++ with library Eigen 3.4.0 for matrix computations. Experiments are conducted using a compiler GCC 10.2.0 |
| Experiment Setup | Yes | As regards synthetic datasets, we consider two settings that fix either n or k: (i) n = 6000 and k = 1, 2, . . . , n, and (ii) k = 200 and n = 1000, 2000, . . . , 10000. We set the timeout periods of (i) and (ii) to 3600 and 60 seconds, respectively. |
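For context on the task the table's pseudocode rows refer to: cardinality-constrained DPP MAP inference seeks a size-k subset S maximizing det(L[S, S]) for a PSD kernel L. Below is a minimal, unoptimized sketch of the standard greedy baseline — not the paper's accelerated FASTGREEDY or LAZYFASTGREEDY, which avoid recomputing determinants from scratch — with the function name and structure chosen here for illustration only.

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Naive greedy MAP inference for a DPP with PSD kernel L.

    At each step, add the item that most increases det(L[S, S]).
    Recomputing each determinant from scratch costs O(|S|^3), which is
    exactly the overhead that incremental-update algorithms like
    FASTGREEDY are designed to eliminate.
    """
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_item, best_det = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            S = selected + [i]
            d = np.linalg.det(L[np.ix_(S, S)])  # candidate subset's determinant
            if d > best_det:
                best_det, best_item = d, i
        selected.append(best_item)
    return selected

# On a diagonal kernel, greedy simply picks items in decreasing order
# of their diagonal entries.
print(greedy_dpp_map(np.diag([1.0, 2.0, 3.0, 4.0]), 3))  # → [3, 2, 1]
```

The lazy-evaluation idea in LAZYFASTGREEDY additionally exploits submodularity of log-det to skip re-evaluating items whose stale marginal-gain bounds already fall below the current best candidate.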