Approximation Guarantees of Stochastic Greedy Algorithms for Subset Selection

Authors: Chao Qian, Yang Yu, Ke Tang

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | "In this paper, we thus theoretically study the approximation performance of the stochastic version of the corresponding greedy algorithms in these cases." "This work proves their approximation guarantees in these cases, and thus largely extends the applicability of stochastic greedy algorithms." |
| Researcher Affiliation | Academia | 1) Anhui Province Key Lab of Big Data Analysis and Application, University of Science and Technology of China, Hefei 230027, China; 2) National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China; 3) Shenzhen Key Lab of Computational Intelligence, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China |
| Pseudocode | Yes | Algorithm 1: STOCHASTIC-STANDARD-GREEDY; Algorithm 2: STOCHASTIC-RANDOM-GREEDY; Algorithm 3: STOCHASTIC-GENERAL-GREEDY |
| Open Source Code | No | The paper is theoretical and does not mention providing open-source code for the algorithms discussed. |
| Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets, so no training dataset information is provided. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets, so no dataset split information (including validation splits) is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe an experimental setup or mention specific hardware. |
| Software Dependencies | No | The paper is theoretical and does not mention any software dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training configurations. |
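The table above records that the paper's Algorithm 1 is named STOCHASTIC-STANDARD-GREEDY but that no code is released. For orientation only, here is a minimal Python sketch of the standard stochastic greedy pattern from the subset-selection literature: each round samples a random candidate pool and adds the element with the largest marginal gain. The objective `coverage`, the pool-size formula, and all parameter names are illustrative assumptions, not the paper's own definitions.

```python
import math
import random

def stochastic_standard_greedy(ground_set, f, k, eps=0.1, seed=0):
    """Sketch of stochastic greedy for maximizing a monotone set function f.

    Each of the k rounds samples a random candidate pool of size s and
    adds the pool element with the largest marginal gain f(S+{v}) - f(S).
    The pool size s = (n/k) * ln(1/eps) follows the usual stochastic
    greedy analysis; this is an assumption, not the paper's pseudocode.
    """
    rng = random.Random(seed)
    n = len(ground_set)
    s = min(n, max(1, math.ceil((n / k) * math.log(1 / eps))))
    selected = set()
    remaining = list(ground_set)
    for _ in range(k):
        pool = rng.sample(remaining, min(s, len(remaining)))
        best = max(pool, key=lambda v: f(selected | {v}) - f(selected))
        selected.add(best)
        remaining.remove(best)
    return selected

# Toy objective: maximum coverage, a classic monotone submodular function.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
coverage = lambda S: len(set().union(*(sets[i] for i in S)))

# On this tiny instance the sampled pool spans every candidate,
# so the run reduces to plain greedy selection.
print(sorted(stochastic_standard_greedy(list(sets), coverage, k=2)))
```

In larger instances the sampled pool is a strict subset of the remaining elements, which is where the runtime saving over the standard greedy algorithm comes from.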