Robust Learning for Smoothed Online Convex Optimization with Feedback Delay
Authors: Pengfei Li, Jianyi Yang, Adam Wierman, Shaolei Ren
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of RCL using a case study focused on battery management in electric vehicle (EV) charging stations. |
| Researcher Affiliation | Academia | Pengfei Li (University of California, Riverside, CA, USA; pli081@ucr.edu); Jianyi Yang (University of California, Riverside, CA, USA; jyang239@ucr.edu); Adam Wierman (California Institute of Technology, Pasadena, CA, USA; adamw@caltech.edu); Shaolei Ren (University of California, Riverside, CA, USA; shaolei@ucr.edu) |
| Pseudocode | Yes | Algorithm 1 Online Optimization with RCL |
| Open Source Code | No | The paper does not provide a direct link to source code or explicitly state that the code for their methodology is being released. |
| Open Datasets | Yes | We test RCL on a public dataset provided by ElaadNL, compared with several baseline algorithms including ROBD [26], EC-L2O [12], and HitMin (details in Appendix B.2). Our results highlight the advantage of RCL in terms of robustness guarantees compared to pure ML models, as well as the benefit of training a robustification-aware ML model in terms of the average cost. |
| Dataset Splits | Yes | We use the January to February data as the training dataset, March to April data as the validation dataset for tuning the hyperparameters such as learning rate, and May to June as the testing dataset. (A minimal sketch of this month-based split appears after this table.) |
| Hardware Specification | Yes | When the RNN model is trained as a standalone optimizer in a robustification-oblivious manner, the training process takes around 1 minute on a 2020 MacBook Air with 8GB memory and an M1 chipset. |
| Software Dependencies | No | The paper mentions PyTorch as the implementation framework but does not specify a version number or other key software dependencies with versions. |
| Experiment Setup | Yes | We use a recurrent neural network (RNN) model that contains 2 hidden layers, each with 8 neurons, and implement the model using PyTorch. We train the RNN for 140 epochs with a batch size of 50. (A minimal training sketch based on these settings appears after this table.) |
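
The reported split assigns whole months to each partition: January to February for training, March to April for validation, and May to June for testing. The sketch below illustrates one way to realize that split; the DataFrame construction is a stand-in for loading the ElaadNL data, and the `timestamp` column name is an assumption rather than the authors' actual preprocessing code.

```python
import pandas as pd

# Stand-in for loading the ElaadNL charging data; the "timestamp"
# column and the placeholder "demand" feature are assumptions.
df = pd.DataFrame({
    "timestamp": pd.date_range("2019-01-01", "2019-06-30", freq="h"),
})
df["demand"] = 0.0  # placeholder feature column

month = df["timestamp"].dt.month
train = df[month.isin([1, 2])]  # January-February: training
val   = df[month.isin([3, 4])]  # March-April: validation (hyperparameter tuning)
test  = df[month.isin([5, 6])]  # May-June: testing
```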
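
The experiment-setup row pins down the architecture and training schedule but not the objective. The following is a minimal sketch of just those reported settings: an RNN with 2 hidden layers of 8 neurons each, trained for 140 epochs with batch size 50 in PyTorch. The synthetic data, input/output dimensions, loss function, and learning rate are assumptions; the paper's robustification-aware training objective is not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic placeholder data: 500 sequences of length 24 with scalar
# features/targets (dimensions are assumptions, not from the paper).
x = torch.randn(500, 24, 1)
y = torch.randn(500, 24, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=50, shuffle=True)  # batch size 50, as reported

# RNN with 2 hidden layers of 8 neurons each, as reported.
model = nn.RNN(input_size=1, hidden_size=8, num_layers=2, batch_first=True)
head = nn.Linear(8, 1)  # maps each hidden state to a scalar prediction (assumed)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)

for epoch in range(140):  # 140 epochs, as reported
    for xb, yb in loader:
        out, _ = model(xb)                         # out: (batch, seq_len, hidden_size)
        loss = nn.functional.mse_loss(head(out), yb)  # MSE is a placeholder objective
        opt.zero_grad()
        loss.backward()
        opt.step()
```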