Learning Positive Functions with Pseudo Mirror Descent
Authors: Yingxiang Yang, Haoxiang Wang, Negar Kiyavash, Niao He
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Through simulations, we show that pseudo mirror descent outperforms the state-of-the-art benchmarks for learning intensities of Poisson and multivariate Hawkes processes, in terms of both computational efficiency and accuracy." and "In this section, we present numerical results on synthetic and real datasets." |
| Researcher Affiliation | Academia | Yingxiang Yang, UIUC, yyang172@illinois.edu; Haoxiang Wang, UIUC, hwang264@illinois.edu; Negar Kiyavash, EPFL, negar.kiyavash@epfl.ch; Niao He, UIUC, niaohe@illinois.edu |
| Pseudocode | Yes | Algorithm 1: Pseudo Mirror Descent Algorithm (a hedged sketch follows this table) |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the proposed pseudo mirror descent algorithm is openly available. |
| Open Datasets | Yes | "We used the shot distance data of several professional basketball players over 500 games (available at stats.nba.com)." and "The dataset we adopted [Chen et al., 2008] consists of 15 DNA sequences" |
| Dataset Splits | No | The paper mentions using synthetic and real-world datasets but does not provide specific details on how these datasets were split into training, validation, or test sets, including percentages or sample counts. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions software such as 'PyTorch' but does not specify version numbers for PyTorch or any other software dependencies, which are required for reproducibility. |
| Experiment Setup | Yes | "Detailed parameter settings and additional results can be found in Appendices K and L." From Appendix K: the step sizes ηk were set to constant values or decreasing sequences (e.g., ηk = 1/(0.01k + 10) for vanishing steps); the kernel parameters were the polynomial kernel K(x, y) = (1 + xy)^2 and the Sobolev kernel K(x, y) = 1 + min{x, y}; the number of iterations was T = 100000 with a mini-batch size of 10 for pseudo-gradient calculation; for the Hawkes process, the number of iterations was 1000. |
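
Since the paper's source code is not publicly available, the following is a minimal sketch of what the pseudo mirror descent update in Algorithm 1 could look like for a 1-D inhomogeneous Poisson process. It is a sketch under stated assumptions, not the authors' implementation: the grid discretization, the negative-entropy mirror map (which makes the update multiplicative), and the function names (`pseudo_mirror_descent`, `t_max`, etc.) are illustrative choices. Only the two kernels, the vanishing step-size schedule ηk = 1/(0.01k + 10), and the mini-batch size of 10 are taken from the Experiment Setup row above.

```python
import numpy as np

# Hedged sketch of pseudo mirror descent for a positive Poisson intensity.
# Assumptions (not from the paper's code): the intensity is discretized on a
# uniform grid over [0, t_max], and the mirror map is the unnormalized
# negative entropy, which yields the multiplicative update
#     lam <- lam * exp(-eta_k * pseudo_gradient).

def sobolev_kernel(x, y):
    # Sobolev kernel K(x, y) = 1 + min{x, y} (as quoted from Appendix K)
    return 1.0 + np.minimum.outer(x, y)

def polynomial_kernel(x, y):
    # Polynomial kernel K(x, y) = (1 + x y)^2 (as quoted from Appendix K)
    return (1.0 + np.outer(x, y)) ** 2

def step_size(k):
    # Vanishing schedule eta_k = 1 / (0.01 k + 10), per Appendix K
    return 1.0 / (0.01 * k + 10.0)

def pseudo_mirror_descent(events, t_max, n_grid=200, n_iters=1000,
                          kernel=sobolev_kernel, batch_size=10, seed=0):
    """Estimate a positive Poisson intensity from observed event times."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, t_max, n_grid)
    dt = grid[1] - grid[0]
    lam = np.ones(n_grid)  # positive initialization
    # Compensator term: int_0^t_max K(t, s) ds, approximated on the grid.
    compensator = kernel(grid, grid).sum(axis=1) * dt

    for k in range(1, n_iters + 1):
        batch = rng.choice(events, size=min(batch_size, len(events)),
                           replace=False)
        # Kernel-smoothed ("pseudo") stochastic gradient of the negative
        # log-likelihood  int lam(t) dt - sum_i log lam(t_i):
        #   g(t) = int K(t, s) ds - sum_i K(t, t_i) / lam(t_i),
        # with the event sum estimated from the mini-batch.
        idx = np.clip(np.searchsorted(grid, batch), 0, n_grid - 1)
        data_term = (kernel(grid, batch) / lam[idx]).mean(axis=1) * len(events)
        g = compensator - data_term
        # Entropic mirror step; the exponent is clipped for stability.
        lam *= np.exp(np.clip(-step_size(k) * g, -5.0, 5.0))
    return grid, lam
```

As a usage check, one could simulate events from a known intensity (e.g., by thinning a homogeneous process) and verify that the returned `lam` tracks it; the grid size, clipping bounds, and initialization above are arbitrary choices made for the sketch, and the Hawkes-process case in the paper would additionally require a triggering-kernel term that this Poisson-only sketch omits.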