Data center cooling using model-predictive control

Authors: Nevena Lazic, Craig Boutilier, Tyler Lu, Eehern Wong, Binz Roy, MK Ryu, Greg Imwalle

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the performance of our MPC approach w.r.t. the existing local PID method on a large-scale DC.
Researcher Affiliation | Industry | Nevena Lazic, Tyler Lu, Craig Boutilier, Moonkyung Ryu, Google Research ({nevena, tylerlu, cboutilier, mkryu}@google.com); Eehern Wong, Binz Roy, Greg Imwalle, Google Cloud ({ejwong, binzroy, gregi}@google.com)
Pseudocode | No | The paper describes the control optimization and system identification process using equations and text, but it does not include a structured pseudocode block or algorithm figure.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for their methodology is publicly available.
Open Datasets | No | The paper states that the models were trained on '3 hours of deliberate exploration data' and 'a week of historical data generated by local PID controllers', which indicates internally collected data; there is no mention of public availability or access.
Dataset Splits | Yes | Each time step corresponds to a period of 30s, and we set T = 5 based on cross-validation.
Hardware Specification | No | The paper describes the data center environment and its components (e.g., 'large-scale data center', 'server floor', 'AHUs'), but it does not provide specific details about the CPU, GPU, or other hardware used for training or running the control models.
Software Dependencies | No | The paper mentions 'Tensor Flow [1]' but does not provide a specific version number for it or any other software dependencies.
Experiment Setup | Yes | Each time step corresponds to a period of 30s, and we set T = 5 based on cross-validation. While we optimize over the entire trajectory, we only execute the optimized control action at the first time step. Re-optimizing at each step enables us to react to changes in disturbances and compensate for model error. We specify the above objective as a computation graph in TensorFlow [1] and optimize controls using the Adam [19] algorithm. In particular, we implement constraints by specifying controls as $u^c_i[\tau] = \max\big(u^c_{\min}, \min\big(u^c_{\max},\ u^c_i[\tau-1] + c\tanh(z^c_i[\tau])\big)\big)$.
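
To make the receding-horizon procedure described in the Experiment Setup row concrete, below is a minimal Python/TensorFlow 2 sketch of the control loop: optimize a T-step control trajectory with Adam, apply only the first action, and enforce bounds and per-step change limits via the clipped-tanh parameterization quoted above. The linear dynamics model (A, B), the tracking cost, the dimensions, and all numeric values are hypothetical placeholders for illustration, not the paper's identified model or objective.

    # Hedged sketch of the MPC loop; model, cost, and constants are assumed, not from the paper.
    import tensorflow as tf

    T = 5            # planning horizon (30 s time steps), as stated in the paper
    N_CTRL = 4       # hypothetical number of controls (e.g., fan speeds, valve positions)
    N_STATE = 6      # hypothetical number of temperature states

    u_min, u_max, c = 0.0, 1.0, 0.05   # control bounds and max per-step change (assumed values)

    # Hypothetical linear model x[t+1] = A x[t] + B u[t]; the paper identifies its own
    # model from exploration data and historical PID data.
    A = tf.random.normal([N_STATE, N_STATE], stddev=0.1)
    B = tf.random.normal([N_STATE, N_CTRL], stddev=0.1)
    x_target = tf.zeros([N_STATE])

    def plan(x0, u_prev, steps=200):
        """Optimize a T-step control trajectory; return only the first action."""
        z = tf.Variable(tf.zeros([T, N_CTRL]))          # unconstrained decision variables
        opt = tf.keras.optimizers.Adam(learning_rate=0.05)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                x, u_last, cost = x0, u_prev, 0.0
                for t in range(T):
                    # Constraint handling as in the paper:
                    # u[t] = clip(u[t-1] + c * tanh(z[t]), u_min, u_max)
                    u = tf.clip_by_value(u_last + c * tf.tanh(z[t]), u_min, u_max)
                    x = tf.linalg.matvec(A, x) + tf.linalg.matvec(B, u)
                    cost += tf.reduce_sum((x - x_target) ** 2)   # hypothetical tracking cost
                    u_last = u
            grads = tape.gradient(cost, [z])
            opt.apply_gradients(zip(grads, [z]))
        # Only the first optimized action is executed; the problem is re-solved at the next step.
        return tf.clip_by_value(u_prev + c * tf.tanh(z[0]), u_min, u_max)

    # Example of one receding-horizon step with placeholder state and previous action.
    x0 = tf.zeros([N_STATE])
    u_prev = tf.fill([N_CTRL], 0.5)
    u0 = plan(x0, u_prev)

Re-running plan() at every 30 s step mirrors the re-optimization the paper relies on to react to disturbances and compensate for model error.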