Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent
Authors: Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, Stratis Skoulakis
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 'In this section we provide experimental evaluations of all the proposed online learning algorithms (both deterministic and randomized) for Min-Sum Set Cover.' (Section 5, Experimental Evaluations) |
| Researcher Affiliation | Academia | Dimitris Fotakis (National Technical University of Athens, fotakis@cs.ntua.gr); Thanasis Lianeas (National Technical University of Athens, lianeas@corelab.ntua.gr); Georgios Piliouras (Singapore Univ. of Technology & Design, georgios@sutd.edu.sg); Stratis Skoulakis (Singapore Univ. of Technology & Design, sskoul@sutd.edu.sg) |
| Pseudocode | Yes | Algorithm 1: Online Projected Gradient Descent in Doubly Stochastic Matrices; Algorithm 2: Converting Doubly Stochastic Matrices to Permutations |
| Open Source Code | Yes | The code used for the presented simulations can be found at https://github.com/sskoul/ID2216. |
| Open Datasets | No | The paper describes how synthetic data was generated for the experiments ('each request contains either element 1 or 2 and four additional randomly selected elements') but does not specify the use of a publicly available dataset with concrete access information (link, citation, or repository). |
| Dataset Splits | No | The paper does not provide explicit details about training, validation, or test dataset splits; it describes an online learning setting where requests appear sequentially. |
| Hardware Specification | No | The paper does not explicitly describe the hardware specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | Algorithm 1 specifies the initial matrix selection: 'Initially, the player selects the matrix A1 = (1/n)·1_{n×n}.' and a step-size: 'step-size D/(G√t), where D and G are upper bounds on the diameter of the action space and on the Euclidean norm of the subgradients. In our case, the action space is the set of doubly stochastic matrices. Since max_{A,B ∈ DS} ‖A − B‖_F ≤ √(2n), the parameter D = √(2n), and G = n^5/ϵ, by Lemma 1. Hence, our step-size is √2·ϵ/(n^{4.5}√t).' The experimental evaluations section also describes the generation of requests: 'In the left figure each request contains either element 1 or 2 and four additional randomly selected elements, while in the right figure each request contains one of the elements {1, 2, 3, 4, 5} and nine more randomly selected elements.' It further notes 'we solve the optimization problem of Step 3 in Algorithm 2 through a simple heuristic that we present in Appendix B.6'. |
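The initialization and step-size schedule quoted above can be sketched in a few lines. This is an illustrative fragment, not the authors' released code (see the linked GitHub repository for that): the function names `initial_matrix` and `step_size` are our own, and the formulas simply transcribe the quoted setup, namely A1 = (1/n)·1_{n×n} and η_t = √2·ϵ/(n^{4.5}√t) obtained from D/(G√t) with D = √(2n) and G = n^5/ϵ.

```python
import numpy as np

def initial_matrix(n):
    """Uniform doubly stochastic matrix A1 = (1/n) * 1_{n x n} (Algorithm 1)."""
    return np.ones((n, n)) / n

def step_size(t, n, eps):
    """Step-size eta_t = D/(G*sqrt(t)) = sqrt(2)*eps / (n^4.5 * sqrt(t)),
    using the paper's bounds D = sqrt(2n) and G = n^5 / eps."""
    return np.sqrt(2) * eps / (n ** 4.5 * np.sqrt(t))
```

Note that the step-size decays as 1/√t, the standard schedule for online projected gradient descent, and shrinks rapidly with the ground-set size n because of the subgradient bound G = n^5/ϵ.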