Stochastic Continuous Submodular Maximization: Boosting via Non-oblivious Function

Authors: Qixin Zhang, Zengde Deng, Zaiyi Chen, Haoyuan Hu, Yu Yang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, numerical experiments demonstrate the effectiveness of our boosting methods.
Researcher Affiliation | Collaboration | (1) School of Data Science, City University of Hong Kong, Kowloon, Hong Kong, China; (2) Cainiao Network, Hangzhou, China.
Pseudocode | Yes | Algorithm 1 Meta Boosting Protocol ... Algorithm 2 Boosting Gradient Ascent ... Algorithm 3 Online Boosting Delayed Gradient Ascent (a hedged sketch of the boosting update follows the table)
Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | Hassani et al. (2017) introduced a special continuous DR-submodular function f_k coming from the multilinear extension of a set cover function. ... Following (Bian et al., 2017), we choose the matrix H ∈ R^n×n to be a randomly generated symmetric matrix with entries uniformly distributed in [−1, 0], and the matrix A to be a random matrix with entries uniformly distributed in [0, 1]. (a data-generation sketch follows the table)
Dataset Splits | No | The paper does not explicitly mention the use of a validation set or specific data splits for training, validation, and testing.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers.
Experiment Setup | Yes | In our experiment, we set k = 15 and consider a standard Gaussian noise, i.e., f̃(x) = f(x) + N(0, 1). ... We set b = u = 1, m = 12, and n = 25. To ensure the monotonicity, we set h = H^T u. ... Similarly, we also consider the Gaussian noise for the gradient, i.e., ∇̃f(x) = ∇f(x) + δN(0, 1). We consider δ = 5 and start all algorithms from the origin. ... We also add the Gaussian noise for the gradient of each f_t, i.e., ∇̃f_t(x) = ∇f_t(x) + δN(0, 1) with δ = 5. To simulate the feedback delays, we generate a uniform random number d_t from {1, 2, 3, 4, 5} for the stochastic gradient information of f_t. ... with step size η_t = 1/√t. ... with ρ_t = 1/(t+3)^(2/3) ... where we set the minibatch sizes |M_0| = T^2 and |M| = T for T-round iterations. ... with step size 1/√T. ... with the step size η_t = 1/√T and use the average of B independent samples to estimate the gradient at each round. (a delayed-feedback simulation sketch follows the table)
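
Since no official code is released, the following Python sketch shows one plausible reading of Algorithm 2 (Boosting Gradient Ascent): each round draws a scaling factor z in (0, 1] with density proportional to e^(z-1), queries a noisy gradient of f at z*x, and takes a projected ascent step with step size 1/sqrt(t). The weighting constant, the box projection, and the names sample_z, project_box, and boosting_gradient_ascent are illustrative assumptions, not the authors' implementation.

import numpy as np

def sample_z(rng):
    # Draw z in (0, 1] with density proportional to e^(z-1) via inverse-CDF sampling (assumed form).
    u = rng.random()
    return 1.0 + np.log(np.exp(-1.0) + u * (1.0 - np.exp(-1.0)))

def project_box(x, lower=0.0, upper=1.0):
    # Stand-in projection onto the box [lower, upper]^n; the paper's experiments use a polytope constraint.
    return np.clip(x, lower, upper)

def boosting_gradient_ascent(noisy_grad, x0, T=1000, seed=0):
    # Hedged sketch of boosted projected gradient ascent.
    # noisy_grad(x) should return a stochastic gradient of the monotone DR-submodular objective f at x.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)                         # step size 1/sqrt(t), as quoted in the setup row
        z = sample_z(rng)
        g = (1.0 - np.exp(-1.0)) * noisy_grad(z * x)   # surrogate ("boosted") gradient estimate
        x = project_box(x + eta * g)
    return x

The (1 - 1/e) factor keeps the surrogate gradient an unbiased estimate of the non-oblivious gradient under the assumed sampling density; if the paper's exact weighting differs, only that line needs to change.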
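The quadratic (NQP) instance described in the Open Datasets and Experiment Setup rows is synthetic, so it can be regenerated rather than downloaded. A best-effort reconstruction in the style of Bian et al. (2017) follows; note that it uses h = -H^T u to enforce monotonicity (the standard choice in that reference), whereas the extracted quote reads "h = H^T u", so the sign here is an assumption.

import numpy as np

def make_nqp_instance(n=25, m=12, delta=5.0, seed=0):
    # Synthetic monotone DR-submodular quadratic f(x) = 0.5 * x^T H x + h^T x
    # under the polytope {x : A x <= b, 0 <= x <= u}, with the sizes quoted above.
    rng = np.random.default_rng(seed)
    H = rng.uniform(-1.0, 0.0, size=(n, n))
    H = (H + H.T) / 2.0                       # symmetric with entries in [-1, 0]
    A = rng.uniform(0.0, 1.0, size=(m, n))    # constraint matrix with entries in [0, 1]
    b = np.ones(m)                            # b = 1
    u = np.ones(n)                            # coordinate-wise upper bound u = 1
    h = -H.T @ u                              # monotonicity-ensuring offset (assumed sign)

    def f(x):
        return 0.5 * x @ H @ x + h @ x

    def noisy_grad(x):
        # Gaussian gradient noise with scale delta = 5, matching the quoted setup
        return H @ x + h + delta * rng.standard_normal(n)

    return f, noisy_grad, A, b, u

Starting boosting_gradient_ascent from the origin with this noisy_grad reproduces the quoted offline setting, up to the simplifying box projection.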
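For the online experiment (Algorithm 3), the setup row quotes uniform feedback delays d_t drawn from {1, 2, 3, 4, 5} and gradient estimates averaged over B independent samples. A minimal way to simulate that delayed-feedback protocol, independent of the specific update rule, is sketched below; the buffer layout and the update callback are illustrative assumptions.

import numpy as np
from collections import defaultdict

def run_delayed_feedback(noisy_grad_fns, x0, update, T=100, B=10, max_delay=5, seed=0):
    # noisy_grad_fns[t-1](x) returns a stochastic gradient of f_t at x; update(x, g, t) applies one ascent step.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    pending = defaultdict(list)                      # arrival round -> delayed gradient estimates
    for t in range(1, T + 1):
        # average B independent noisy gradient samples of f_t at the current decision x
        g_t = np.mean([noisy_grad_fns[t - 1](x) for _ in range(B)], axis=0)
        d_t = int(rng.integers(1, max_delay + 1))    # delay drawn uniformly from {1, ..., max_delay}
        pending[t + d_t].append(g_t)
        for g in pending.pop(t, []):                 # feedback whose delay has elapsed becomes available
            x = update(x, g, t)
    return x

Choosing update(x, g, t) as a projected step with step size 1/sqrt(T) matches the quoted online setting, again up to the simplified projection.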