Opposite Online Learning via Sequentially Integrated Stochastic Gradient Descent Estimators

Authors: Wenhai Cui, Xiaoting Ji, Linglong Kong, Xiaodong Yan

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, the superior finite-sample performance is evaluated by simulation studies.
Researcher Affiliation | Academia | 1 Zhongtai Securities Institute for Financial Studies, Shandong University; 2 Department of Mathematical and Statistical Sciences, University of Alberta; 3 Shandong Province Key Laboratory of Financial Risk; 4 Shandong National Center for Applied Mathematics. {cuiwenhai, jixiaoting}@mail.sdu.edu.cn, lkong@ualberta.ca, yanxiaodong@sdu.edu.cn
Pseudocode | Yes | Algorithm 1: TAB-based Opposite Online Learning; Algorithm 2: An Extended Two-sided Test
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | No | The paper describes generating synthetic data for simulation studies (e.g., "streaming data is generated by the mean model, Z = θ0 + ϵ"), but it does not specify or provide access to a publicly available or open dataset.
Dataset Splits | No | The paper describes simulation parameters like T and B, but it does not specify explicit training, validation, or test dataset splits for model evaluation.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments or simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | Input: sequential data St (t = 1, ..., T); set the number of bootstraps B; maximum number of iterations N; hyperparameter d0; step size γn = γ1 n^(−α) with γ1 > 0 and α ∈ (0.5, 1). Reported settings: T = 500, B = 50; B = 100, T = 1000; T = 200, N = 100, B = 30; T = 30, N = 500, B = 30. (A minimal simulation sketch follows this table.)
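The simulation setup quoted above (mean model Z = θ0 + ϵ, decaying step size γn = γ1 n^(−α), B bootstrap copies over T streaming observations) can be illustrated with a short, hedged sketch. This is not the authors' TAB-based Algorithm 1: only the (T, B) values and the step-size schedule are taken from the setup quoted above, while θ0 = 1, γ1 = 1, α = 0.6, and the exponential multiplier-bootstrap weights are illustrative assumptions.

```python
# Minimal sketch, not the authors' Algorithm 1: simulate streaming data from the
# mean model Z_t = theta0 + eps_t and run B bootstrap-perturbed averaged SGD
# estimators with step size gamma_n = gamma1 * n**(-alpha), alpha in (0.5, 1).
# T, B, and the step-size schedule follow the quoted setup; theta0, gamma1,
# alpha, and the exponential multiplier weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

theta0 = 1.0               # assumed true mean of the streaming model Z = theta0 + eps
T = 500                    # number of streaming observations (one reported setting)
B = 50                     # number of bootstrap copies (one reported setting)
gamma1, alpha = 1.0, 0.6   # step size gamma_n = gamma1 * n^(-alpha), assumed values

theta = np.zeros(B)        # B SGD iterates, one per bootstrap copy
theta_bar = np.zeros(B)    # their running (Polyak) averages

for t in range(1, T + 1):
    z = theta0 + rng.standard_normal()    # new streaming observation Z_t
    w = rng.exponential(1.0, size=B)      # multiplier-bootstrap weights (assumption)
    gamma_t = gamma1 * t ** (-alpha)      # decaying step size
    theta += gamma_t * w * (z - theta)    # SGD step on the loss (z - theta)^2 / 2
    theta_bar += (theta - theta_bar) / t  # update the running averages

print("averaged estimates (first 5 copies):", np.round(theta_bar[:5], 3))
print("spread across the B copies:", np.round(theta_bar.std(), 3))
```

The running Polyak average used here is a standard choice for averaged SGD estimators; the spread of the B averaged copies is the kind of finite-sample quantity the paper's simulation studies evaluate, though the actual combination rule in the TAB-based procedure may differ.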