Gradient Methods for Submodular Maximization
Authors: Hamed Hassani, Mahdi Soltanolkotabi, Amin Karbasi
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we consider a movie recommendation application [19] consisting of N users and n movies. ... Figure 1 depicts the performance of different algorithms for the two proposed objective functions. |
| Researcher Affiliation | Academia | Hamed Hassani, ESE Department, University of Pennsylvania, Philadelphia, PA, hassani@seas.upenn.edu; Mahdi Soltanolkotabi, EE Department, University of Southern California, Los Angeles, CA, soltanol@usc.edu; Amin Karbasi, ECE Department, Yale University, New Haven, CT, amin.karbasi@yale.edu |
| Pseudocode | Yes | Algorithm 1 (Stochastic) Gradient Method for Maximizing F(x) over a convex set K. Parameters: integer T > 0 and scalars ηt > 0, t ∈ [T]. Initialize: x1 ∈ K. for t = 1 to T do: yt+1 ← xt + ηt gt, where gt is a random vector s.t. E[gt ∣ xt] = ∇F(xt); xt+1 = arg min_{x ∈ K} ‖x − yt+1‖²; end for. Pick τ uniformly at random from {1, 2, ..., T}. Output xτ. (A minimal code sketch of this loop appears below the table.) |
| Open Source Code | No | The paper does not explicitly state that the code for the methodology is open-source or provide a link to a repository. |
| Open Datasets | Yes | To run the experiments we use the MovieLens data set. It consists of 1 million ratings (from 1 to 5) by N = 6041 users for n = 4000 movies. (A loading sketch appears below the table.) |
| Dataset Splits | No | The paper describes the Movie Lens dataset but does not provide specific details on training, validation, and test dataset splits. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, only mentioning computational effort generally. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers for replication. |
| Experiment Setup | Yes | In our experiments, we consider a movie recommendation application... (i) Stochastic Gradient Ascent (SG): We use the step size µt = c/√t and mini-batch size B. ... (ii) Frank-Wolfe (FW) variant of [16]: We use T to denote the total number of iterations and B to denote mini-batch sizes (we further let α = 1, δ = 0... (iii) Batch-mode Greedy (Greedy): ...B random users are picked... Figures 1a and 1c show... T = 2000 iterations. (b) shows... k = 20... SG with B = 20, c = 1... SG with B = 20, c = 10. (A sketch wiring these quoted values into the code below follows the table.) |
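
The Pseudocode row above reproduces Algorithm 1, a (stochastic) projected gradient method. Below is a minimal Python sketch of that loop, assuming the caller supplies a stochastic gradient oracle and a Euclidean projection onto the constraint set K; the names `stochastic_gradient_method`, `grad_oracle`, and `project_onto_K` are illustrative and not from the paper.

```python
import numpy as np


def stochastic_gradient_method(grad_oracle, project_onto_K, x1, step_sizes, rng=None):
    """Sketch of Algorithm 1: (stochastic) gradient method for maximizing F(x) over a convex set K.

    grad_oracle(x)    -- returns g_t with E[g_t | x_t] = grad F(x_t) (unbiased stochastic gradient)
    project_onto_K(y) -- returns argmin_{x in K} ||x - y||^2 (Euclidean projection onto K)
    step_sizes        -- sequence of eta_t > 0, one per iteration; T = len(step_sizes)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x1, dtype=float)
    iterates = []
    for eta in step_sizes:
        iterates.append(x.copy())          # record x_t for t = 1, ..., T
        g = grad_oracle(x)                 # stochastic gradient g_t at x_t
        y = x + eta * g                    # ascent step: y_{t+1} = x_t + eta_t * g_t
        x = project_onto_K(y)              # x_{t+1} = argmin_{x in K} ||x - y_{t+1}||^2
    tau = rng.integers(len(iterates))      # pick tau uniformly at random from {1, ..., T}
    return iterates[tau]                   # output x_tau
```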
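
The Open Datasets row quotes the MovieLens figures (1 million ratings from 1 to 5, N = 6041 users, n = 4000 movies). A loading sketch, assuming the standard MovieLens 1M `ratings.dat` layout (`UserID::MovieID::Rating::Timestamp`); the path and function name are placeholders.

```python
import numpy as np


def load_movielens_ratings(path="ml-1m/ratings.dat", n_users=6041, n_movies=4000):
    """Build a dense user-by-movie rating matrix from a MovieLens 1M-style ratings file.

    Assumes the standard 'UserID::MovieID::Rating::Timestamp' line format with 1-based IDs;
    entries outside the n_users x n_movies window are skipped.
    """
    R = np.zeros((n_users, n_movies))
    with open(path, encoding="latin-1") as f:
        for line in f:
            user_id, movie_id, rating, _ = line.strip().split("::")
            u, m = int(user_id) - 1, int(movie_id) - 1
            if u < n_users and m < n_movies:
                R[u, m] = float(rating)
    return R
```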
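
The Experiment Setup row quotes SG with step size µt = c/√t, mini-batch size B = 20, cardinality k = 20, and T = 2000 iterations. The snippet below is a hypothetical wiring of those quoted values into the two sketches above; the mini-batch gradient and the projection are simplified stand-ins, not the paper's two objective functions or its exact projection onto the constraint polytope.

```python
import numpy as np


def minibatch_grad_oracle(R, B, rng):
    """Average a per-user gradient over a mini-batch of B users drawn uniformly at random."""
    n_users, _ = R.shape
    def oracle(x):
        users = rng.integers(n_users, size=B)
        # stand-in per-user utility f_u(x) = sum_i R[u, i] * log(1 + x_i); its gradient is R[u] / (1 + x)
        return np.mean([R[u] / (1.0 + x) for u in users], axis=0)
    return oracle


def project_box_budget(y, k):
    """Crude surrogate projection: clip to [0, 1]^n, then rescale down if the budget k is exceeded."""
    x = np.clip(y, 0.0, 1.0)
    return x * (k / x.sum()) if x.sum() > k else x


rng = np.random.default_rng(0)
R = load_movielens_ratings()                       # ratings matrix from the loading sketch above
c, B, k, T = 1.0, 20, 20, 2000                     # values quoted in the Experiment Setup row
etas = [c / np.sqrt(t) for t in range(1, T + 1)]   # step size mu_t = c / sqrt(t)
x_out = stochastic_gradient_method(minibatch_grad_oracle(R, B, rng),
                                   lambda y: project_box_budget(y, k),
                                   np.zeros(R.shape[1]), etas, rng)
```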