Optimal Continuous DR-Submodular Maximization and Applications to Provable Mean Field Inference
Authors: Yatao Bian, Joachim Buhmann, Andreas Krause
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the superior performance of our algorithms against baseline results on both synthetic and real-world datasets. Extensive experimental evaluations on real-world and synthetic data support our theory. On the representative FLID model we tested the following algorithms and baselines. The first category is one-epoch algorithms, including 1) Submodular-Double Greedy (Bian et al., 2017c) with a 1/3 approximation guarantee, 2) BSCB (Alg. 4 of Niazadeh et al. (2018), where we chose ε = 10⁻³) with a 1/2 guarantee, and 3) DR-Double Greedy (Alg. 1) with a 1/2 guarantee. (A minimal sketch of the coordinate-wise double-greedy pass appears after the table.) |
| Researcher Affiliation | Academia | 1Department of Computer Science, ETH Zurich, Zurich, Switzerland. Correspondence to: Yatao A. Bian <ybian@inf.ethz.ch>. |
| Pseudocode | Yes | Algorithm 1: DR-Double Greedy(f, a, b); Algorithm 2: DG-Mean Field-1/2 & DG-Mean Field-1/3 |
| Open Source Code | No | All algorithms are implemented in Python3, and source code will be public on the author's homepage. |
| Open Datasets | Yes | We tested the mean field methods on the trained FLID models from Tschiatschek et al. (2016) on Amazon Baby Registries dataset. After preprocessing, this dataset has 13 categories, e.g., feeding & furniture. |
| Dataset Splits | Yes | Split the training data into multiple folds, train a model on each fold D and infer a noisy posterior distribution p(S|D). For each category, three classes of models were trained, with latent dimensions D = 2, 3, 10, respectively, on 10 folds of the data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | All algorithms are implemented in Python3, and source code will be public on the author's homepage. The paper mentions Python3 but does not specify version numbers for Python or any other libraries or software dependencies. |
| Experiment Setup | Yes | The objectives under investigation are ELBO (1) and PA-ELBO (2). We set β = 1 in PA-ELBO. For all algorithms, we use the same random order to process the coordinates within each epoch. For each category, three classes of models were trained, with latent dimensions D = 2, 3, 10, respectively, on 10 folds of the data. (The standard mean-field ELBO is sketched directly after the table.) |
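
The Experiment Setup row refers to the ELBO objective (1). For reference, the block below writes out the standard mean-field ELBO for a log-submodular model p(S) ∝ exp(F(S)) (e.g., FLID) as a function of the factorized marginals x ∈ [0,1]^n. This is the usual formulation and is assumed, not verified, to match the paper's objective (1); the PA-ELBO (2) additionally weights a posterior-agreement term by β and is not reproduced here.

```latex
% Mean-field ELBO over the factorized marginals x \in [0,1]^n, for a
% log-submodular model p(S) \propto \exp(F(S)) on ground set V, |V| = n.
% \tilde{F} is the multilinear extension of F; the second term is the
% entropy of the fully factorized variational distribution.
\mathrm{ELBO}(x)
  = \underbrace{\sum_{S \subseteq V} F(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j)}_{\tilde{F}(x)}
  + \sum_{i=1}^{n} \Big[ -x_i \log x_i - (1 - x_i) \log(1 - x_i) \Big]
```

Since the multilinear extension of a submodular F is DR-submodular and the entropy term is separable and concave, ELBO(x) is DR-submodular (but not concave) over the box [0,1]^n, which is what makes the double-greedy guarantees applicable to mean field inference.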
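
The Pseudocode and Research Type rows mention Algorithm 1, DR-Double Greedy(f, a, b), with a 1/2 guarantee. The snippet below is a minimal sketch of one coordinate-wise double-greedy pass over a box [a, b], assuming the 1-D subproblems are solved by a brute-force grid search; the function name, `n_grid`, and `seed` are illustrative choices, and this is written in the spirit of the algorithm rather than as the authors' implementation, whose exact update and tie-breaking rules may differ.

```python
import numpy as np

def dr_double_greedy_pass(f, a, b, n_grid=101, seed=0):
    """One coordinate-wise double-greedy pass over the box [a, b].

    f      : callable mapping a vector in [a, b] to a real value
             (e.g., the ELBO above for mean field inference, with a = 0, b = 1)
    a, b   : 1-D numpy arrays with a <= b coordinate-wise
    n_grid : resolution of the brute-force 1-D line search (illustrative choice)
    """
    x = np.array(a, dtype=float)   # "lower" solution, starts at the box minimum
    y = np.array(b, dtype=float)   # "upper" solution, starts at the box maximum
    rng = np.random.default_rng(seed)
    for i in rng.permutation(len(x)):          # random coordinate order within the epoch
        grid = np.linspace(a[i], b[i], n_grid)

        def sweep(z):
            """Evaluate f while coordinate i of z sweeps the grid."""
            vals = []
            for u in grid:
                z_try = z.copy()
                z_try[i] = u
                vals.append(f(z_try))
            return np.asarray(vals)

        vals_x, vals_y = sweep(x), sweep(y)
        gain_x = vals_x.max() - f(x)           # best 1-D improvement of the lower solution
        gain_y = vals_y.max() - f(y)           # best 1-D improvement of the upper solution
        # Commit the coordinate value from whichever solution improves more,
        # keeping x and y aligned on every processed coordinate.
        u_star = grid[vals_x.argmax()] if gain_x >= gain_y else grid[vals_y.argmax()]
        x[i] = y[i] = u_star
    return x
```

For the mean-field experiments (DG-Mean Field in Algorithm 2), f would be the ELBO over [0,1]^n and the pass would be repeated for several epochs with a fresh random coordinate order each time, matching the setup quoted in the Experiment Setup row.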