Simulating Network Paths with Recurrent Buffering Units

Authors: Divyam Anshumaan, Sriram Balasubramanian, Shubham Tiwari, Nagarajan Natarajan, Sundararajan Sellamanickam, Venkat N. Padmanabhan

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "yielding promising results on synthetic and real-world network traces. ... We make three key contributions: ... (3) Efficient and practical solution scales to sequences of length tens of thousands ... yet produces realistic traces in synthetic and real-world network settings (Section 5)."
Researcher Affiliation | Collaboration | "1 Microsoft Research India, 2 University of Maryland, College Park"
Pseudocode | No | The paper presents mathematical equations and describes procedures in text, but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code or a link to a code repository for the described methodology.
Open Datasets | Yes | "Datasets: We (1) design a synthetic benchmark using ns-3 as in (Ashok et al. 2020), consisting of 4200 traces for 4 different TCP protocols, on a variety of cross-traffic patterns and network configurations; and (2) use a subset of traces from a real physical network testbed Pantheon (Yan et al. 2018) for 2 TCP protocols."
Dataset Splits | No | The paper specifies using the 'TCP Cubic protocol ... for training, and the other TCP protocols (Vegas, New Reno, and LEDBAT) for testing', i.e., a train/test split by protocol. However, it does not explicitly mention a separate validation split of the datasets themselves (e.g., percentages or counts).
Hardware Specification | Yes | "Training RBU on the largest dataset (ns-3) takes only about 3 minutes per epoch on V100 GPU."
Software Dependencies | No | "We implement all the models in PyTorch. ... with their TensorFlow code." The paper mentions software like PyTorch and TensorFlow but does not specify version numbers for reproducibility.
Experiment Setup | Yes | "For LSTM_win and LSTM_pkt, we (a) normalize the delays and the sending rates, and (b) use a 2-layer LSTM with 256 hidden units and a fully connected layer with discretized y_t as output (100-dimensional), tuned to maximize mean delay and throughput distribution match, on the training protocol. For RBU, we (a) use the same LSTM architecture, to be consistent, for the window-level model in (7), with discretized c_w in (8) as output, (b) set γ = 0.1 in (8) and size of h_t in (1) to 1, which works well across datasets, and (c) use single-bottleneck buffer RBU model (just as the ground-truth) for ns-3, and 2-path RBU model for Pantheon. We use stochastic gradient-descent to learn the model parameters jointly, with mini-batching, and weight decay on the model parameters."
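
A minimal sketch of the window-level LSTM and training procedure quoted in the Experiment Setup row, assuming PyTorch (which the paper states it uses). The class name WindowLSTM, the input feature dimension, window length, learning rate, and weight-decay value are illustrative assumptions, not from the paper; the RBU-specific pieces (equations (1), (7), (8) and γ) are not reproduced here.

import torch
import torch.nn as nn

class WindowLSTM(nn.Module):  # hypothetical name for the window-level model
    def __init__(self, input_dim: int = 2, hidden_dim: int = 256, num_bins: int = 100):
        super().__init__()
        # 2-layer LSTM with 256 hidden units, per the quoted setup.
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers=2, batch_first=True)
        # Fully connected layer over the 100-dimensional discretized output.
        self.head = nn.Linear(hidden_dim, num_bins)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window_length, input_dim) of normalized delays and sending rates.
        out, _ = self.lstm(x)
        return self.head(out)  # per-step logits over the discretized bins

model = WindowLSTM()
# Stochastic gradient descent with mini-batching and weight decay, as described;
# the learning rate and decay values below are placeholders, not from the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative mini-batch step on random data (shapes are assumptions).
x = torch.randn(32, 50, 2)           # 32 windows, 50 steps, 2 input features
y = torch.randint(0, 100, (32, 50))  # discretized targets per step
logits = model(x)
loss = criterion(logits.reshape(-1, 100), y.reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()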