Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Lazy and Fast Greedy MAP Inference for Determinantal Point Process

Authors: Shinichi Hemmi, Taihei Oki, Shinsaku Sakaue, Kaito Fujii, Satoru Iwata

NeurIPS 2022

Reproducibility Variable | Result | LLM Response

Research Type: Experimental
    "Experiments validate the effectiveness of our acceleration ideas." ... "5 Experiments: We evaluate the effectiveness of our acceleration techniques on synthetic and real-world datasets."

Researcher Affiliation: Academia
    Shinichi Hemmi (The University of Tokyo, Tokyo, Japan); Taihei Oki (The University of Tokyo, Tokyo, Japan); Shinsaku Sakaue (The University of Tokyo, Tokyo, Japan); Kaito Fujii (National Institute of Informatics, Tokyo, Japan); Satoru Iwata (The University of Tokyo, Tokyo, Japan; ICReDD, Hokkaido University, Sapporo, Japan)

Pseudocode: Yes
    Algorithm 1: FASTGREEDY [13] for cardinality-constrained DPP MAP inference; Algorithm 2: LAZYFASTGREEDY for cardinality-constrained DPP MAP inference; Algorithm 3: FASTDOUBLEGREEDY for unconstrained DPP MAP inference.

Open Source Code: Yes
    "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplementary material." ... "The codes of the algorithms are provided in the supplementary material."

Open Datasets: Yes
    "We use synthetic and two real-world datasets, Netflix Prize [5] and MovieLens [25]."

Dataset Splits: No
    "No training is performed in our experiments." The paper evaluates algorithm running time and solution quality rather than training a machine learning model, so traditional train/validation/test splits are not applicable.

Hardware Specification: Yes
    "Experiments are conducted using a compiler GCC 10.2.0 on a computer with 3.8 GHz Intel Xeon Gold CPU and 800 GB RAM."

Software Dependencies: Yes
    "The algorithms are implemented in C++ with library Eigen 3.4.0 for matrix computations. Experiments are conducted using a compiler GCC 10.2.0."

Experiment Setup: Yes
    "As regards synthetic datasets, we consider two settings that fix either n or k: (i) n = 6000 and k = 1, 2, ..., n, and (ii) k = 200 and n = 1000, 2000, ..., 10000. We set the timeout periods of (i) and (ii) to 3600 and 60 seconds, respectively."
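For context on the task the pseudocode rows refer to: cardinality-constrained DPP MAP inference greedily selects a size-k subset S maximizing det(L[S, S]) for a positive semidefinite kernel L. The sketch below is a naive O(k·n·k³) greedy baseline written for illustration only; it is not the paper's FASTGREEDY or LAZYFASTGREEDY accelerations, and the function name `naive_greedy_dpp_map` is our own.

```python
import numpy as np

def naive_greedy_dpp_map(L, k):
    """Naive greedy MAP inference for a cardinality-constrained DPP.

    At each step, add the item i that maximizes det(L[S+{i}, S+{i}]).
    This recomputes a log-determinant per candidate, which is exactly the
    cost that fast/lazy greedy variants avoid via incremental updates.
    """
    n = L.shape[0]
    selected = []
    remaining = set(range(n))
    for _ in range(k):
        best_item, best_logdet = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            # Skip candidates whose augmented submatrix is not positive definite.
            if sign > 0 and logdet > best_logdet:
                best_item, best_logdet = i, logdet
        if best_item is None:
            break  # no candidate keeps the submatrix positive definite
        selected.append(best_item)
        remaining.remove(best_item)
    return selected

# Example with a synthetic PSD kernel (assumed setup, not from the paper):
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 4))
L = B @ B.T + 0.1 * np.eye(8)   # low-rank-plus-ridge kernel, guaranteed PSD
subset = naive_greedy_dpp_map(L, 3)
```

Because a singleton's determinant is just the diagonal entry L[i, i], the first item chosen is always the one with the largest diagonal; subsequent picks trade off quality against similarity to already-selected items.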