Data Sharing and Compression for Cooperative Networked Control

Authors: Jiangnan Cheng, Marco Pavone, Sachin Katti, Sandeep Chinchali, Ao Tang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our simulations with real cellular, Internet-of-Things (IoT), and electricity load data show we can improve a model predictive controller's performance by at least 25% while transmitting 80% less data than the competing method.
Researcher Affiliation | Academia | (1) School of Electrical and Computer Engineering, Cornell University, Ithaca, NY; (2) Department of Aeronautics and Astronautics, Stanford University, Stanford, CA; (3) Department of Computer Science, Stanford University, Stanford, CA; (4) Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX
Pseudocode | Yes | Algorithm 1: Compression Co-design for Control
Open Source Code | Yes | Our code and data are publicly available at https://github.com/chengjiangnan/cooperative_networked_control.
Open Datasets | Yes | Our simulations use 4 weeks of stochastic cell demand data from Melbourne, Australia [34]. Specifically, state x_t ∈ R^n represents the charge on n batteries and control u_t ∈ R^m represents how much to charge the battery to meet demand. Timeseries s_t ∈ R^p represents the demand forecast at the locations of the n batteries, where p = n. In the cost function (Eq. 13), we desire a battery of total capacity 2L to reach a set-point where it is half-full, which, as per [3], allows flexibly switching between favorable markets. Further, we set γ_e = γ_s = γ_u = 1. We used electricity demand data from the same PJM operator as in [3], but from multiple markets in the eastern USA [35]. (See the illustrative cost sketch after this table.)
Dataset Splits | No | No, the paper does not provide specific percentages or counts for train/validation/test splits. It mentions a "test dataset" but gives no breakdown of the data distribution.
Hardware Specification | No | No, the paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments or train the models. It mentions collecting data from a "Google Edge Tensor Processing Unit (TPU)", but this refers to the data source, not the experimental compute hardware.
Software Dependencies | No | No, the paper mentions using "long short term memory (LSTM) DNNs [36] and simple feedforward networks" and the "Adam optimizer", but does not specify version numbers for these or any other software libraries/frameworks.
Experiment Setup | No | No, the paper states "We used standard DNN architectures, hyperparameters, and the Adam optimizer, as further detailed in the supplement." This indicates that the specific details are not in the main text.
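
For context on the battery example quoted in the Open Datasets row, below is a minimal sketch of a per-step cost of the kind described there. The exact form of Eq. 13 is not reproduced in this summary, so the quadratic terms and the demand-tracking coupling are assumptions; only the roles of x_t, u_t, s_t, the half-full set-point L, and the weights γ_e = γ_s = γ_u = 1 come from the quoted text.

    import numpy as np

    # Illustrative sketch only: Eq. 13 of the paper is not reproduced in this
    # summary, so the quadratic form of each term and the demand coupling are
    # assumptions. L is half the total battery capacity (the half-full set-point);
    # gamma_e = gamma_s = gamma_u = 1 per the quoted setup.
    def step_cost(x_t, u_t, s_t, L, gamma_e=1.0, gamma_s=1.0, gamma_u=1.0):
        set_point_err = gamma_e * np.sum((x_t - L) ** 2)        # deviation from half-full set-point
        demand_err = gamma_s * np.sum((x_t + u_t - s_t) ** 2)   # mismatch with forecast demand (assumed form)
        control_effort = gamma_u * np.sum(u_t ** 2)             # charging effort
        return set_point_err + demand_err + control_effort

    # Toy usage: n = 3 batteries, total capacity 2L with L = 5.
    x = np.array([4.0, 6.0, 5.0])   # current charge
    u = np.array([1.0, -0.5, 0.0])  # charging control
    s = np.array([5.0, 5.5, 5.0])   # demand forecast
    print(step_cost(x, u, s, L=5.0))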
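Since the Software Dependencies and Experiment Setup rows note that only "LSTM DNNs", "simple feedforward networks", and the "Adam optimizer" are named, with versions and hyperparameters deferred to the supplement, the following is a generic PyTorch scaffold of that kind of setup. The layer sizes, learning rate, loss, and data shapes are illustrative assumptions, not the authors' configuration.

    import torch
    import torch.nn as nn

    # Generic scaffold only: the paper defers exact architectures and
    # hyperparameters to its supplement, so everything below is illustrative.
    class LSTMForecaster(nn.Module):
        def __init__(self, input_dim, hidden_dim=64, output_dim=1):
            super().__init__()
            self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, output_dim)

        def forward(self, x):                 # x: (batch, time, input_dim)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])   # predict from the final hidden state

    model = LSTMForecaster(input_dim=4)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as named in the paper
    loss_fn = nn.MSELoss()

    # One illustrative training step on random data.
    x = torch.randn(32, 20, 4)   # batch of 32 sequences, 20 time steps, 4 features
    y = torch.randn(32, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()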