AI Technology

Google Brain TensorFuzz Debugs Neural Networks with Coverage-Guided Fuzzing

Neural networks can be notoriously difficult to debug, but a Google Brain research team believes it may have come up with a novel solution. A paper by Augustus Odena and Ian Goodfellow introduces Coverage-Guided Fuzzing (CGF) methods for neural networks. The team also announced an open-source software library for CGF, TensorFuzz.

Machine learning models can fail in ways that are hard to interpret or debug, a problem reflected in the field's "reproducibility crisis." Neural networks are particularly difficult to debug because evaluating them is computationally expensive and implementations can deviate significantly from their theoretical models.

The Odena and Goodfellow paper TensorFuzz: Debugging Neural Networks with Coverage-Guided Fuzzing is not the first attempt to address testing and test coverage for neural networks. But the idea of a CGF method for neural networks is new (as far as we know): it adapts the existing CGF technique to discover errors that occur only for rare inputs. The research describes how to build a useful coverage checker in this context, and how fast approximate nearest-neighbor algorithms can be used to check for coverage in a general way.
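The coverage idea can be illustrated with a minimal sketch: treat an input as producing new coverage when its activation vector is far from every previously seen one. The paper uses approximate nearest neighbors for speed; this sketch uses brute-force exact distances, and the class and threshold below are hypothetical, not TensorFuzz's actual API.

```python
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


class CoverageTracker:
    """Sketch of activation-based coverage: an activation vector counts as
    new coverage if its distance to the nearest previously seen vector
    exceeds `threshold`. (TensorFuzz uses approximate nearest neighbors;
    brute force here keeps the sketch self-contained.)"""

    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.seen = []  # activation vectors observed so far

    def is_new_coverage(self, activations):
        if not self.seen:
            self.seen.append(activations)
            return True
        nearest = min(euclidean(activations, s) for s in self.seen)
        if nearest > self.threshold:
            self.seen.append(activations)
            return True
        return False
```

A real implementation would swap the brute-force `min` for an approximate nearest-neighbor index so the check stays fast as the corpus grows.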

The overall structure of the fuzzing procedure is similar to that of coverage-guided fuzzers for ordinary computer programs, except that it interacts not with a compiled program but with TensorFlow's static computation graph.

Figure 1: Coarse descriptions of the main fuzzing loop. Left: A diagram of the fuzzing procedure, indicating the flow of data. Right: A description of the main loop of the fuzzing procedure in algorithmic form.

The TensorFuzz tool feeds inputs to an arbitrary TensorFlow graph and measures coverage by looking at the “activations” of the computation graph. In coverage-guided fuzzing, random mutations of inputs to a neural network are guided by a coverage metric toward the goal of satisfying user-specified constraints.
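The loop just described can be sketched in a few lines. This is a generic coverage-guided fuzzing skeleton, not TensorFuzz's actual code; all function names and parameters here are hypothetical, and the real tool fetches activations from a TensorFlow graph rather than calling plain Python functions.

```python
import random


def fuzz(corpus, mutate, get_coverage_and_metadata, objective, num_iters=1000):
    """Minimal coverage-guided fuzzing loop (hypothetical helper names).

    corpus: list of seed inputs.
    mutate: fn(input) -> mutated input.
    get_coverage_and_metadata: fn(input) -> (coverage_id, metadata);
        in TensorFuzz, coverage is derived from activation vectors.
    objective: fn(metadata) -> True when the user-specified constraint
        is violated (e.g. a NaN appears in the activations).
    """
    seen_coverage = set()
    failures = []
    for _ in range(num_iters):
        parent = random.choice(corpus)                    # input chooser
        candidate = mutate(parent)                        # mutator
        cov, meta = get_coverage_and_metadata(candidate)  # coverage analyzer
        if objective(meta):                               # objective function
            failures.append(candidate)
        if cov not in seen_coverage:  # keep inputs that add new coverage
            seen_coverage.add(cov)
            corpus.append(candidate)
    return failures
```

The key design choice is that only mutants producing new coverage are kept in the corpus, which steers random mutation toward unexplored behavior instead of re-sampling the same region of input space.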

The Odena and Goodfellow paper lists the following experimental results:

  • CGF can efficiently find numerical errors in trained neural networks;
  • CGF surfaces disagreements between models and their quantized versions;
  • CGF surfaces undesirable behavior in character-level language models.
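The first result, finding numerical errors, corresponds to an objective function that flags non-finite values in a model's activations. A minimal sketch of such an objective (the function name is illustrative, not from the paper):

```python
import math


def has_numerical_error(activations):
    """Objective in the spirit of TensorFuzz's NaN-finding experiment:
    return True if any activation value is NaN or infinite."""
    return any(not math.isfinite(v) for v in activations)
```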

The fuzzing procedure and the components of fuzzers — input chooser, mutator, objective function, coverage analyzer, etc. — are further explored in the paper, which is on arXiv: https://arxiv.org/pdf/1807.10875.pdf.
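Of those components, the mutator is the simplest to illustrate. A common approach for image inputs (a plausible sketch, not the paper's exact mutation scheme) is to add small random noise and clip back to the valid input range so mutants remain legitimate model inputs:

```python
import random


def mutate_image(pixels, noise=0.1, lo=0.0, hi=1.0):
    """Sketch of a mutator: perturb each pixel with small uniform noise,
    then clip to [lo, hi] so the mutant stays a valid input."""
    return [min(hi, max(lo, p + random.uniform(-noise, noise))) for p in pixels]
```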


Author: Chenhui Zhang | Editor: Michael Sarazen
