Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning from time-dependent streaming data with online stochastic algorithms

Authors: Antoine Godichon-Baggioni, Nicklas Werge, Olivier Wintenberger

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate our theoretical findings, we conduct a series of experiments using both simulated and real-life time-dependent data. ... In section 4, we present experimental results that validate our findings. We conduct experiments on synthetic and real-life time-dependent streaming data, showcasing the practical implications of our work.
Researcher Affiliation | Academia | Antoine Godichon-Baggioni (EMAIL), Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université; Nicklas Werge (EMAIL), Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université; Olivier Wintenberger (EMAIL), Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université
Pseudocode | Yes | Algorithm 1: Stochastic streaming gradient estimates (SSG/PSSG/ASSG/APSSG). Inputs: θ0 ∈ Θ ⊆ R^d, project: True or False, average: True or False. Outputs: θt, θ̄t (resulting estimates). Initialization: θ0 ∈ R^d.
Open Source Code | No | The paper does not contain any explicit statement about releasing its code, nor does it provide a link to a code repository. It mentions 'Reviewed on OpenReview: https://openreview.net/forum?id=kdfiEu1ul6', which is a link to the review forum, not code.
Open Datasets | Yes | The historical hourly weather dataset can be found at https://www.kaggle.com/datasets/selfishgene/historical-hourly-weather-data.
Dataset Splits | No | The paper uses 'simulated and real-life time-dependent data' and preprocesses the real-life weather data by 'subtracting the corresponding monthly and annual averages.' However, it does not specify any training, validation, or test splits with percentages or sample counts for experimental reproduction.
Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper references standard machine-learning algorithms (e.g., SGD) but does not provide specific version numbers for any software libraries or dependencies used in its implementation or experiments.
Experiment Setup | Yes | To facilitate a comparison between different data streams, we set the parameters Cγ = 1, α = 2/3, and β = 0 as fixed values. ... To align with the reasoning of Cardot et al. (2013), we set Cγ = d and choose α = 2/3. ... As discussed after Theorem 2, one way to achieve this acceleration is by setting α = 2/3 and β = 1/3.
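The Pseudocode row above describes Algorithm 1 (SSG/PSSG/ASSG/APSSG) only at the interface level. A minimal sketch of how the projection and averaging switches could combine is given below; the ball-shaped constraint set, the step-size schedule, and the gradient oracle are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def streaming_gradient(grad, theta0, n_steps, lr, radius=None, average=False):
    """Sketch of a stochastic streaming gradient loop.

    grad(theta, t) returns a (stochastic) gradient at step t; `radius`
    enables projection onto a Euclidean ball (a hypothetical choice of the
    constraint set Theta), and `average` also returns the running
    Polyak-Ruppert average of the iterates.
    """
    theta = np.asarray(theta0, dtype=float)
    theta_bar = theta.copy()
    for t in range(1, n_steps + 1):
        theta = theta - lr(t) * grad(theta, t)
        if radius is not None:                 # projected variant (PSSG)
            norm = np.linalg.norm(theta)
            if norm > radius:
                theta = theta * (radius / norm)
        theta_bar += (theta - theta_bar) / t   # averaged variant (ASSG)
    return (theta, theta_bar) if average else (theta, None)
```

For example, minimizing a simple quadratic with a decaying step size drives both the last iterate and its average toward the minimizer.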
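The Dataset Splits row quotes the preprocessing step of "subtracting the corresponding monthly and annual averages." That centering can be sketched as follows, assuming a flat list of observations with known month labels; the exact grouping used in the paper is not specified in the quoted text.

```python
from collections import defaultdict

def deseasonalize(values, months):
    """Subtract the per-month mean, then the residual overall mean.

    `values` is a flat list of observations and `months[i]` is the month
    (1-12) of observation i. This mirrors the quoted idea of removing
    monthly and annual averages; the precise scheme is an assumption.
    """
    by_month = defaultdict(list)
    for v, m in zip(values, months):
        by_month[m].append(v)
    monthly_mean = {m: sum(vs) / len(vs) for m, vs in by_month.items()}
    centered = [v - monthly_mean[m] for v, m in zip(values, months)]
    overall = sum(centered) / len(centered)    # residual "annual" average
    return [c - overall for c in centered]
```

On a toy series with a clear per-month offset, the output is centered within each month.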
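The Experiment Setup row fixes the hyperparameters Cγ, α, and β. One plausible reading, assuming a step size of the form γ_t = Cγ · t^(−α) and a streaming-batch size growing like t^β (both functional forms are assumptions, not quoted from the paper), can be sketched as:

```python
import math

def schedule(t, C_gamma=1.0, alpha=2 / 3, beta=0.0):
    """Assumed step-size and streaming-batch schedule at time t.

    Returns (gamma_t, n_t) under the hypothetical forms
    gamma_t = C_gamma * t**(-alpha) and n_t = ceil(t**beta);
    beta = 0 gives constant batches, beta > 0 growing batches.
    """
    return C_gamma * t ** (-alpha), math.ceil(t ** beta)
```

Under this reading, the quoted defaults (Cγ = 1, α = 2/3, β = 0) give a t^(−2/3) decay with unit batches, while β = 1/3 grows the batches over time.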