TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing

Authors: Augustus Odena, Catherine Olsson, David Andersen, Ian Goodfellow

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "This section presents experimental results from four different settings. For some of these results we compare with a random search baseline, and for some we don't compare to any sort of baseline."
Researcher Affiliation | Industry | Augustus Odena (Google Brain), Catherine Olsson (Open Philanthropy Project; work done while at Google Brain), David G. Andersen (Google Brain), Ian Goodfellow (work done while at Google Brain).
Pseudocode | Yes | "Algorithm 1: Fuzzer Main Loop"
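The paper's Algorithm 1 describes a coverage-guided fuzzing loop: sample an input from a corpus, mutate it, run the network, keep the mutant if it produces new coverage, and record it if it triggers the objective. The sketch below is a minimal, hypothetical rendering of that loop; the callables (`mutate`, `run_model`, `coverage_of`, `objective`) are illustrative stand-ins, not the library's actual API (TensorFuzz computes coverage with approximate nearest-neighbor lookups over activation vectors, which is abstracted here into a hashable key).

```python
import random

def fuzz(seed_corpus, mutate, run_model, coverage_of, objective, max_iters=1000):
    """Minimal sketch of a coverage-guided fuzzer main loop.

    `mutate` perturbs an input, `run_model` runs the network and returns
    activations, `coverage_of` maps activations to a hashable coverage key,
    and `objective` flags a failure (e.g. a NaN or Inf in the logits).
    """
    corpus = list(seed_corpus)
    seen = {coverage_of(run_model(x)) for x in corpus}
    failures = []
    for _ in range(max_iters):
        parent = random.choice(corpus)   # sample an input from the corpus
        child = mutate(parent)           # apply a random mutation
        acts = run_model(child)          # run the network on the mutant
        if objective(acts):              # objective met: record a failure
            failures.append(child)
        cov = coverage_of(acts)
        if cov not in seen:              # new coverage: keep the mutant
            seen.add(cov)
            corpus.append(child)
    return corpus, failures
```

The key design point the paper emphasizes is the feedback loop: mutants are retained only when they exercise behavior (coverage) not seen before, which steers random mutation toward unexplored regions of the network's activation space.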
Open Source Code | Yes | "Finally, we release an open source library called TensorFuzz that implements the described techniques."
Open Datasets | Yes | "To test this hypothesis, we trained a fully connected neural network to classify MNIST (LeCun et al., 1998) digits."
Dataset Splits | Yes | "We trained the model for 35000 steps with a mini-batch size of 100, at which point it had a validation accuracy of 98%."
Hardware Specification | No | The paper mentions GPUs generally ("the GPU is always saturated") and discusses computational costs, but does not specify the exact hardware (e.g., GPU model, CPU, memory) used for the experiments.
Software Dependencies | No | The paper mentions using TensorFlow and FLANN (Muja & Lowe, 2014) but does not specify version numbers for these software dependencies, which reproducibility requires.
Experiment Setup | Yes | "To test this hypothesis, we trained a fully connected neural network to classify MNIST (LeCun et al., 1998) digits. We performed fault injection by using a poorly implemented cross entropy loss so that there would be a chance of numerical errors. We trained the model for 35000 steps with a mini-batch size of 100, at which point it had a validation accuracy of 98%."
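The injected fault above hinges on a numerically unstable cross-entropy. The paper does not print the faulty loss itself, so the snippet below is an illustrative sketch (in NumPy rather than TensorFlow) of how a naive softmax cross-entropy blows up on extreme logits while the standard log-sum-exp formulation stays finite:

```python
import numpy as np

def naive_cross_entropy(logits, label):
    # Unstable: np.exp overflows to inf for large logits, giving inf/inf = NaN,
    # and a zero probability makes the log produce -inf. This mirrors the kind
    # of "poorly implemented" loss the paper injected as a fault.
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return -np.log(probs[label])

def stable_cross_entropy(logits, label):
    # Stable: subtract the max logit first (log-sum-exp trick), so no exp
    # can overflow and the log never sees zero.
    shifted = logits - np.max(logits)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[label]
```

On moderate logits both versions agree; on extreme logits, the kind of inputs the fuzzer hunts for, the naive version returns NaN, which is exactly the class of numerical error the fuzzer's objective function is set up to detect.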