Pursuit-Evasion Without Regret, with an Application to Trading
Authors: Lili Dworkin, Michael Kearns, Yuriy Nevmyvaka
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using historical market data, we show experimentally that our algorithm has a strong advantage over classic no-regret approaches. |
| Researcher Affiliation | Academia | Lili Dworkin (LDWORKIN@SEAS.UPENN.EDU), Michael Kearns (MKEARNS@CIS.UPENN.EDU), Yuriy Nevmyvaka (YURIY.NEVMYVAKA@GMAIL.COM); Computer and Information Science, University of Pennsylvania |
| Pseudocode | Yes | Algorithm 1: Pursuit-Evasion Without Regret (PEWR); a hedged no-regret sketch follows the table. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper states it uses 'a real dataset containing prices of two exchange-traded funds, the S&P 500 (SPY) and the Russell 2000 (IWM)', but it does not provide concrete access information (link, DOI, repository, or formal citation with authors/year) for this specific dataset. |
| Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning. It only mentions '1000 trials, each on a random subsequence of length 5000', which describes the trial protocol rather than a train/test split (see the subsequence-sampling sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | Yes | The value of the maximum allowed step size ϵ was chosen so that the typical trade size of the experts and algorithms was approximately two shares in magnitude. In addition, the learning rate η of both algorithms was set to 0.05. |
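
For orientation, the sketch below shows a generic exponential-weights (Hedge) expert-mixing loop using the reported learning rate η = 0.05, with an added cap of ε on the per-round change in the learner's position to mimic a pursuit-style step-size constraint. This is an illustrative approximation, not the paper's Algorithm 1 (PEWR): the `expert_positions` interface, the per-round P&L accounting, and the clipping rule are all assumptions introduced here.

```python
import numpy as np

def hedge_with_step_cap(expert_positions, prices, eta=0.05, eps=2.0):
    """Minimal sketch (not the paper's PEWR): exponential-weights expert
    mixing with a cap of `eps` on how far the learner's holding may move
    per round.

    expert_positions: (T, K) array, expert k's desired holding at time t
                      (hypothetical interface).
    prices:           (T,) array of prices; per-round gain is holding
                      times the price change (assumed accounting).
    Returns the learner's holdings over time.
    """
    T, K = expert_positions.shape
    weights = np.ones(K)
    position = 0.0
    positions = np.zeros(T)

    for t in range(1, T):
        dp = prices[t] - prices[t - 1]                # price change this round
        gains = expert_positions[t - 1] * dp          # each expert's per-round gain
        weights *= np.exp(eta * gains)                # multiplicative-weights update
        weights /= weights.sum()

        target = weights @ expert_positions[t]        # weighted target holding
        step = np.clip(target - position, -eps, eps)  # cap the move at eps (assumed constraint)
        position += step
        positions[t] = position

    return positions

# Toy usage with synthetic data (illustration only):
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(size=5000))
experts = rng.choice([-2.0, 2.0], size=(5000, 4))  # four hypothetical +/- 2-share experts
learner_path = hedge_with_step_cap(experts, prices, eta=0.05, eps=2.0)
```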
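
Likewise, the trial protocol quoted in the Dataset Splits row ('1000 trials, each on a random subsequence of length 5000') could be reproduced along the lines below. Whether the subsequences are contiguous and whether they may overlap is not stated, so both choices here are assumptions.

```python
import numpy as np

def sample_trial_windows(n_points, n_trials=1000, window=5000, seed=0):
    """Sketch of the stated trial protocol: each trial runs on a random
    subsequence of length 5000. Contiguous windows sampled with
    replacement are assumptions, not details from the paper."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, n_points - window, size=n_trials)
    return [(int(s), int(s) + window) for s in starts]

# Example: 1000 (start, end) index pairs over a quote series of
# hypothetical length 2,000,000.
windows = sample_trial_windows(2_000_000)
```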