Online Time Series Prediction with Missing Data
Authors: Oren Anava, Elad Hazan, Assaf Zeevi
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We show that our algorithm's performance asymptotically approaches the performance of the best AR predictor in hindsight, and corroborate the theoretic results with an empirical study on synthetic and real-world data." [...] "4. Illustrative Examples: The following experiments demonstrate the effectiveness of the proposed algorithm under various synthetic settings." |
| Researcher Affiliation | Academia | Oren Anava (OANAVA@TX.TECHNION.AC.IL), Technion, Haifa, Israel; Elad Hazan (EHAZAN@CS.PRINCETON.EDU), Princeton University, NY, USA; Assaf Zeevi (ASSAF@GSB.COLUMBIA.EDU), Columbia University, NY, USA |
| Pseudocode | Yes | Algorithm 1: LAZY OGD (on ℓ2-ball with radius D); Algorithm 2; Algorithm 3: Efficient Implementation of Algorithm 2; Algorithm 4: OGDIMPUTE |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper uses 'synthetic data' for its experiments, as described in Section 4.2. While it mentions 'real-world data' in the abstract, no specific public dataset name, link, or citation is provided for either the synthetic or real-world data used. |
| Dataset Splits | No | The paper conducts experiments but does not explicitly provide information on dataset splits such as train/validation/test percentages, absolute sample counts for splits, or cross-validation setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory specifications, or cloud instance types used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages or libraries with their exact versions). |
| Experiment Setup | No | The paper states "For our algorithm, we used d = 3p in all considered settings" and that results are "averaged over 50 runs". However, it does not provide other specific experimental setup details, such as learning rates, batch sizes, optimizer settings, or number of epochs for the models used. |
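The Pseudocode row above names Algorithm 1, LAZY OGD on an ℓ2-ball of radius D. As a hedged illustration of that primitive (not the authors' implementation; the function names and the step size `eta` are assumptions for this sketch), lazy online gradient descent plays, at each round, the ℓ2-ball projection of the scaled negative sum of all past gradients:

```python
import numpy as np

def project_l2_ball(x, radius):
    """Project x onto the l2-ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def lazy_ogd(gradients, radius, eta):
    """Lazy OGD sketch: each round plays the projection of the
    negative, eta-scaled sum of all gradients seen so far."""
    grad_sum = np.zeros_like(gradients[0])
    iterates = []
    for g in gradients:
        x = project_l2_ball(-eta * grad_sum, radius)
        iterates.append(x)
        grad_sum = grad_sum + g  # gradient accumulated after playing x
    return iterates
```

The "lazy" design choice is that only the running gradient sum is stored between rounds; projection happens once per round, which is what makes the efficient implementations referenced in the table (Algorithm 3) possible.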