Robustness in Multi-Objective Submodular Optimization: a Quantile Approach

Authors: Cedric Malherbe, Kevin Scaman

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we provide numerical experiments showing the efficiency of our algorithm with regards to state-of-the-art methods in a test bed of real-world applications, and show that SOFTSAT is particularly robust and well-suited to online scenarios." and "5. Numerical Experiments: In this section, we compare the empirical performance of the SOFTSAT algorithm."
Researcher Affiliation | Collaboration | 1Huawei Noah's Ark Lab; 2DI ENS, École normale supérieure, CNRS, INRIA, PSL University; 3This work was done while the author was working at Huawei.
Pseudocode | Yes | Algorithm 1: SOFTSAT
Open Source Code | No | "An implementation of the algorithms can be found in the Appendix." (However, the appendix only contains pseudocode, not actual code or links.)
Open Datasets | Yes | "In practice, we took the BBC news data set (Greene & Cunningham, 2006) that contains n = 2250 articles..."; "In practice, we used the pokemon dataset (Churchill, 2017) that contains n = d = 151 images..."; "In practice, we used the USAirport 500 dataset (Colizza et al., 2007), which is a graph that contains the 500 largest commercial airports in the United States."
Dataset Splits | No | The paper uses the BBC news, pokemon, and USAirport 500 datasets and varies the parameters K and p, but it does not specify how the datasets are split into training, validation, and test sets with explicit percentages or counts.
Hardware Specification | No | The paper does not specify the hardware (CPU/GPU models, memory, or machine specifications) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming language, libraries, or frameworks) used for the experiments.
Experiment Setup | No | The paper discusses the choice of the parameter s (grid search or adaptive estimation) and sets alpha = 1 for the SATURATE baseline, but it does not provide comprehensive experimental setup details such as learning rates, batch sizes, specific optimizers, or other hyperparameters typically reported for optimization experiments.