Symbolic Distillation for Learned TCP Congestion Control

Authors: S P Sharan, Wenqing Zheng, Kuo-Feng Hsu, Jiarong Xing, Ang Chen, Zhangyang Wang

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the performance of our distilled symbolic rules on both simulation and emulation environments.
Researcher Affiliation | Academia | S P Sharan1, Wenqing Zheng1, Kuo-Feng Hsu2, Jiarong Xing2, Ang Chen2, Zhangyang Wang1 (1University of Texas at Austin, 2Rice University)
Pseudocode | No | The paper includes Figure 4, a diagrammatic representation of a decision tree (symbolic policy), but it contains no textual pseudocode or clearly labeled algorithm block.
Open Source Code | Yes | Our code is available at https://github.com/VITA-Group/SymbolicPCC.
Open Datasets | Yes | PCC-RL [7] is an open-source RL testbed for simulating congestion control agents, based on the popular OpenAI Gym [43] framework. We adopt it as our main playground.
Dataset Splits | Yes | The returns of the saved roll-outs are clustered using K-Means, and the optimal cluster number is found to be 4 using the popular elbow-curve [52] and silhouette-analysis [53] methods.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., CPU/GPU models, memory, or cloud provider instances).
Software Dependencies | No | The paper mentions software components such as PPO, OpenAI Gym, Mininet, and Pantheon, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | More hyperparameter details are in our Appendix.
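The Dataset Splits entry above describes picking the number of roll-out-return clusters with K-Means plus elbow and silhouette analysis. A minimal pure-Python sketch of the silhouette side of that procedure is below; the toy return values and the 1-D k-means helper are illustrative assumptions, not the paper's actual data or implementation (which would typically use scikit-learn).

```python
# Hypothetical sketch: choose the cluster count for episode returns via
# silhouette analysis. Pure-Python 1-D k-means; toy data, not the paper's.

def kmeans_1d(xs, k, iters=50):
    xs = sorted(xs)
    # Deterministic init: evenly spaced quantiles of the sorted returns.
    cents = [xs[int(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - cents[i]))].append(x)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    labels = [min(range(k), key=lambda i: abs(x - cents[i])) for x in xs]
    return xs, labels

def silhouette(xs, labels):
    # Mean silhouette coefficient: (b - a) / max(a, b) per point, where
    # a = mean intra-cluster distance, b = mean distance to nearest other cluster.
    k = max(labels) + 1
    clusters = [[x for x, l in zip(xs, labels) if l == c] for c in range(k)]
    scores = []
    for x, l in zip(xs, labels):
        a = sum(abs(x - y) for y in clusters[l]) / max(len(clusters[l]) - 1, 1)
        b = min(sum(abs(x - y) for y in clusters[c]) / len(clusters[c])
                for c in range(k) if c != l and clusters[c])
        scores.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(scores) / len(scores)

# Toy roll-out returns with four well-separated groups (made up for illustration).
returns = [10, 11, 12, 40, 41, 42, 80, 81, 82, 120, 121, 122]
best_k = max(range(2, 6), key=lambda k: silhouette(*kmeans_1d(returns, k)))
print(best_k)  # → 4
```

On this toy data the silhouette score peaks at four clusters, mirroring the cluster count the paper reports finding for its saved roll-outs.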