Learning Mixtures of Submodular Functions for Image Collection Summarization

Authors: Sebastian Tschiatschek, Rishabh K. Iyer, Haochen Wei, Jeff A. Bilmes

NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper reports: "We compare our method with previous work on this problem and show that our learning approach outperforms all competitors on this new data set. We extensively validate our approach on these data sets, and show that it outperforms previously explored methods developed for similar problems. The resulting learnt objective, moreover, matches human summarization performance on test data." It also presents Table 1, "Cross-Validation Experiments."
Researcher Affiliation | Collaboration | Sebastian Tschiatschek (Department of Electrical Engineering, Graz University of Technology, tschiatschek@tugraz.at); Rishabh Iyer (Department of Electrical Engineering, University of Washington, rkiyer@u.washington.edu); Haochen Wei (LinkedIn & Department of Electrical Engineering, University of Washington, weihch90@gmail.com); Jeff Bilmes (Department of Electrical Engineering, University of Washington, bilmes@u.washington.edu)
Pseudocode | Yes | The paper includes Algorithm 1, "Algorithm for pruning poor human-generated summaries" (a hedged sketch of one possible reading appears after this table).
Open Source Code | No | The paper states, "One major contribution of our paper is our new data set which we plan soon to publicly release," but gives no statement of, or link to, open-source code for its method.
Open Datasets | No | The paper states: "One major contribution of our paper is our new data set which we plan soon to publicly release. Our data set consists of 14 image collections, each comprising 100 images." This announces a future release, not current public access.
Dataset Splits | Yes | The paper reports: "We considered two types of experiments: 1) cheating experiments to verify that our proposed mixture components can effectively learn good scoring functions; and 2) a 14-fold cross-validation experiment to test our approach in real-world scenarios." (A sketch of the cross-validation split appears after this table.)
Hardware Specification | No | The paper does not report the hardware used for its experiments (CPU/GPU models, clock speeds, or memory amounts).
Software Dependencies | No | The paper mentions tools such as AdaGrad [6], VLFeat [33], and OverFeat [27], but gives no version numbers for these or any other software dependencies.
Experiment Setup | Yes | The paper states: "For weight optimization, we used AdaGrad [6], an adaptive subgradient method allowing for informative gradient-based learning. We do 20 passes through the samples in the collection." (An AdaGrad sketch appears after this table.)
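
The Algorithm 1 caption suggests that human-generated summaries are filtered by quality before learning. The sketch below is a minimal, hypothetical reading, assuming each summary is scored by a V-ROUGE-style function and discarded when it does not beat a baseline built from random summaries of the same size; vrouge_score and the mean-of-random threshold are assumptions, not the paper's exact rule.

```python
import random

def prune_summaries(human_summaries, collection, vrouge_score, n_random=100):
    """Keep only human summaries that outscore random ones.

    Hypothetical reconstruction: `vrouge_score(summary)` and the
    mean-of-random baseline are assumptions, not the paper's exact rule.
    """
    k = len(human_summaries[0])  # assume summaries share a common size
    # Baseline: mean V-ROUGE of random summaries of the same size.
    baseline = sum(
        vrouge_score(random.sample(collection, k)) for _ in range(n_random)
    ) / n_random
    return [s for s in human_summaries if vrouge_score(s) > baseline]
```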
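
Given 14 image collections and a 14-fold cross-validation, a natural reading is leave-one-collection-out: train on 13 collections and test on the held-out one. That fold-to-collection mapping is an assumption; the excerpt above does not state it explicitly.

```python
def leave_one_collection_out(collections):
    """Yield (train, test) pairs, holding out one collection per fold."""
    for i in range(len(collections)):
        yield collections[:i] + collections[i + 1:], collections[i]

# With 14 collections this produces exactly 14 folds.
collections = [f"collection_{i}" for i in range(14)]  # placeholder names
for train, test in leave_one_collection_out(collections):
    assert len(train) == 13 and test not in train
```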
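
AdaGrad itself is standard, so the per-coordinate update is safe to sketch: accumulate squared subgradients and scale each step by their running root, sweeping the samples 20 times as the paper states. The learning rate, epsilon, the subgradient callable, and the projection onto nonnegative mixture weights are illustrative assumptions, not reported settings.

```python
import numpy as np

def adagrad(w0, samples, subgradient, lr=0.1, passes=20, eps=1e-8):
    """Plain AdaGrad over `passes` sweeps of the training samples.

    `subgradient(w, sample)` must return a subgradient of the loss at w.
    lr, eps, and the nonnegativity projection are assumptions.
    """
    w = np.asarray(w0, dtype=float).copy()
    g_sq = np.zeros_like(w)               # running sum of squared gradients
    for _ in range(passes):               # "20 passes through the samples"
        for sample in samples:
            g = subgradient(w, sample)
            g_sq += g * g
            w -= lr * g / (np.sqrt(g_sq) + eps)
            np.maximum(w, 0.0, out=w)     # keep mixture weights nonnegative
    return w
```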