Online robust non-stationary estimation

Authors: Abishek Sankararaman, Balakrishnan Narayanaswamy

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We complement our theoretical results empirically on synthetic and real data." (Section 7, Experiments)
Researcher Affiliation | Industry | {abisanka, muralibn}@amazon.com, Amazon Web Services, Santa Clara, CA, USA.
Pseudocode | Yes | Algorithm 1 (Clipped-SGD) and Algorithm 2 (Online-Clipped-SGD without time horizon).
Open Source Code | No | No explicit statement about providing source code or a link to a code repository was found.
Open Datasets | Yes | Table 3 (Real data-sets used): Electricity NSW [26, 4] (stream-length T = 45312, binary classification, dimension d = 13); Mini Boone [49, 4] (T = 130065, binary classification, d = 50); MNIST [34] (T = 11811, anomaly detection, d = 784). Citations [4], [26], [34], [49] point to publicly available datasets such as the UCI repository and MNIST.
Dataset Splits | No | No explicit mention of training, validation, or test dataset splits with percentages or sample counts for reproducibility. The evaluation is described as an online streaming process.
Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, or memory) are provided for the experimental setup.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) are provided for reproducibility.
Experiment Setup | Yes | "We compare clipped-SGD with learning rates η_t := 1/(m T^α) and clipping values λ = 2T^β for a variety of α and β."; "For the case of α = 1, we use the learning rate of η_t = 1/(m(t + 1)) and λ = 2T as suggested in [55]."; "We set the gradient clipping value λ = 1 for both datasets."; "consider clipped SGD with clip-value set to 5 for various learning rates as shown in Figure 5c."
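
To make the quoted schedule concrete, below is a minimal sketch of a clipped-SGD loop using the η_t := 1/(m T^α), λ = 2T^β parameterization from the Experiment Setup row. This is not the authors' code: the function names (clipped_sgd, grad_fn) and the heavy-tailed mean-estimation example are illustrative assumptions, and the schedule shown is the constant horizon-dependent one (the α = 1 variant in the paper instead uses η_t = 1/(m(t + 1))).

```python
import numpy as np

def clip(g, lam):
    """Scale the gradient g back onto the L2 ball of radius lam if it is too large."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else g * (lam / norm)

def clipped_sgd(theta0, grad_fn, T, m=1.0, alpha=0.5, beta=0.5):
    """Run T steps of clipped SGD with the horizon-dependent schedule quoted above:
    eta_t = 1 / (m * T**alpha), clip level lambda = 2 * T**beta (assumed reading)."""
    theta = np.asarray(theta0, dtype=float)
    eta = 1.0 / (m * T ** alpha)   # learning rate set from the stream length T
    lam = 2.0 * T ** beta          # gradient clipping threshold set from T
    for t in range(T):
        g = grad_fn(theta, t)      # stochastic (possibly heavy-tailed) gradient at time t
        theta = theta - eta * clip(g, lam)
    return theta

# Illustrative usage: estimate the mean of a heavy-tailed stream, where the gradient
# of the squared loss 0.5 * ||theta - x_t||^2 is (theta - x_t).
rng = np.random.default_rng(0)
stream = rng.standard_t(df=2.5, size=(1000, 3))   # heavy-tailed observations
est = clipped_sgd(np.zeros(3), lambda th, t: th - stream[t], T=1000)
```

Clipping the gradient to a horizon-dependent radius is what makes the update robust to heavy-tailed or corrupted observations, which is the setting the experiments above evaluate.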