Static Automatic Batching In TensorFlow

Authors: Ashish Agarwal

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Benchmarks demonstrate speedups of one to two orders of magnitude on a range of tasks, from Jacobian computation to auto-batching Graph Neural Networks.
Researcher Affiliation | Industry | Google Inc. Correspondence to: Ashish Agarwal <agarwal@google.com>.
Pseudocode | No | The paper describes algorithms and shows examples of code usage (e.g., Example 1 gives Python code) but does not include a formally labeled 'Pseudocode' or 'Algorithm' block for its core methods.
Open Source Code | No | The paper contains no explicit statement about the release of its source code and no link to a code repository for the methodology described.
Open Datasets | Yes | The Conv Mnist setup uses a convolutional architecture as described in (tensorflow, 2016). VGG16 is as described in (Simonyan & Zisserman, 2014).
Dataset Splits | No | The paper does not explicitly provide the training/validation/test splits (e.g., percentages, sample counts, or mention of standard splits for the datasets used) needed to reproduce the experiments.
Hardware Specification | Yes | Experiments were run on a 6-core Intel Xeon E5-1650 3.60GHz CPU with 64GB of RAM and an NVIDIA Maxwell Titan X GPU.
Software Dependencies | No | The paper extends TensorFlow but does not provide specific version numbers for TensorFlow or any other software dependency used in the experiments.
Experiment Setup | Yes | Linear Projection is a simple setup that applies a linear projection to input data. Inputs are randomly generated float vectors of shape [768]; the projection matrix is a constant [768, 768] float matrix. The Conv Mnist setup uses a convolutional architecture as described in (tensorflow, 2016): a stack of two conv-relu-maxpool blocks followed by a linear-relu-dropout-linear block. Inputs are batches of [28, 28] images and the output has shape [10]. The LSTM state size is 256.
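To make the Linear Projection setup concrete, here is a minimal NumPy sketch using only the shapes stated above ([768] input vectors, a constant [768, 768] projection matrix); the batch size of 32 and the use of NumPy rather than TensorFlow are assumptions for illustration. It contrasts a per-example loop with the single stacked matmul that batching (the paper's subject) produces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the setup description: inputs are [768] float vectors,
# the projection matrix is a constant [768, 768] float matrix.
# The batch size (32) is an assumption, not stated in the review.
batch = [rng.standard_normal(768).astype(np.float32) for _ in range(32)]
proj = rng.standard_normal((768, 768)).astype(np.float32)

# Unbatched: one matrix-vector product per example.
per_example = [x @ proj for x in batch]

# Batched: stack the inputs and issue one [32, 768] x [768, 768] matmul,
# which is the kind of rewrite automatic batching performs.
stacked = np.stack(batch)   # shape [32, 768]
batched = stacked @ proj    # shape [32, 768]

assert np.allclose(np.stack(per_example), batched, atol=1e-4)
```

The two computations agree numerically; the batched form simply replaces many small kernel launches with one large one, which is where the reported speedups come from.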