
CapsAttacks: Testing Adversarial Attacks on Capsule Networks

Researchers from Technische Universität Wien, Austria, and Politecnico di Torino, Italy, explored adversarial attacks on Capsule Networks, proposing an algorithm which automatically generates targeted adversarial examples in black-box attack scenarios.

Convolutional Neural Networks (CNNs) have been shown to be vulnerable to adversarial examples. These slight image modifications are generally imperceptible to human eyes, yet they can mislead CNN-based computer vision models into, for example, recognizing a stop sign as a speed limit sign, or a rifle as a helicopter. Researchers have now discovered that Capsule Networks (CapsuleNet), the promising machine learning architecture promoted by Geoffrey Hinton that can capture spatial relationships between objects, may also be vulnerable to adversarial attacks.
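The mechanics behind such perturbations can be illustrated with a toy example. The sketch below flips the decision of a tiny random linear classifier by taking a sign-step against the score gradient, in the spirit of the fast gradient sign method; the weights and input are hypothetical placeholders, not a real traffic-sign model, and this white-box setup is distinct from the paper's black-box attack:

```python
import numpy as np

# Toy illustration of an adversarial example, in the spirit of the fast
# gradient sign method (a white-box attack, NOT the paper's black-box one).
# The "classifier" is a random 2-class linear model, a hypothetical stand-in.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))      # placeholder class weights
x = rng.normal(size=16)           # a placeholder "clean" input

def predict(v):
    return int(np.argmax(W @ v))

clean_label = predict(x)
other = 1 - clean_label

# For a linear model, the gradient of the score gap w.r.t. the input is just
# the difference of the two weight rows, so the smallest sign-step that
# closes the gap can be computed in closed form.
grad = W[clean_label] - W[other]
margin = grad @ x                              # >= 0 by definition of argmax
eps = margin / np.abs(grad).sum() + 1e-3       # just enough to flip the label
x_adv = x - eps * np.sign(grad)                # small, uniformly bounded change

print("clean:", clean_label, "adversarial:", predict(x_adv))
```

Each pixel (here, feature) moves by at most `eps`, which is why such perturbations can stay visually imperceptible while still crossing the decision boundary.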


This research focused on Capsule Networks' vulnerability to adversarial attacks on the German Traffic Sign Recognition Benchmark (GTSRB), a task that plays a key role in autonomous vehicle use cases. The researchers set out to determine:

  • Is a CapsuleNet vulnerable to adversarial examples?
  • How does CapsuleNet vulnerability to adversarial attacks differ from CNN vulnerability?

Figure: Affine transformations on the test images, with the corresponding classification predictions made by the CapsuleNet, the VGGNet and the LeNet. (a) Example of a “30 km/h speed limit” sign. (b) Example of a “Stop” sign.

To answer these questions, the researchers compared CapsuleNet performance against a 5-layer CNN (LeNet) and a 9-layer CNN (VGGNet), applying affine transformations to the input images of the GTSRB dataset. Despite having fewer layers and parameters than the VGGNet, the CapsuleNet compensated for its lower complexity through its capsule structure and routing algorithm. However, the results also showed that both the CapsuleNet and the VGGNet can be fooled by some affine transformations such as zoom or shift, although the CapsuleNet's confidence was lower. Both the CapsuleNet and the VGGNet correctly classified an example image rotated by 30 degrees, while this rotation fooled the LeNet.
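The affine transformations in question (rotation, shift, zoom) can be sketched in plain NumPy with nearest-neighbor resampling. The 32×32 square "sign" below is a dummy stand-in, since the article does not specify the study's actual GTSRB preprocessing:

```python
import numpy as np

# Minimal sketch of affine image transformations (rotation, shift, zoom)
# using an inverse coordinate map with nearest-neighbor resampling.
# The 32x32 "sign" is a dummy image, not real GTSRB data.
def affine_nn(img, matrix, offset=(0.0, 0.0)):
    """Sample img through an inverse affine map about the image center."""
    h, w = img.shape
    ys, xs = np.indices((h, w)).astype(float)
    cy, cx = (h - 1) / 2, (w - 1) / 2
    # For each output pixel, find the source pixel it came from.
    src_y = matrix[0][0] * (ys - cy) + matrix[0][1] * (xs - cx) + cy + offset[0]
    src_x = matrix[1][0] * (ys - cy) + matrix[1][1] * (xs - cx) + cx + offset[1]
    sy = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    sx = np.clip(np.rint(src_x), 0, w - 1).astype(int)
    return img[sy, sx]

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                                   # a square stand-in "sign"

t = np.deg2rad(30)                                      # the 30-degree rotation
rotated = affine_nn(img, [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
shifted = affine_nn(img, [[1, 0], [0, 1]], offset=(4, -2))
zoomed = affine_nn(img, [[1 / 1.5, 0], [0, 1 / 1.5]])   # inverse scale = zoom in
```

Feeding such transformed copies of the test set through each network and comparing predictions and confidences is the kind of probing described above.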

The researchers then developed a novel algorithm that can automatically generate targeted, imperceptible and robust adversarial examples in a black-box scenario to fool the network. They compared the robustness of the CapsuleNet with the 5-layer LeNet and the 9-layer VGGNet using adversarial examples generated by this algorithm.
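While the paper's exact algorithm is detailed on arXiv, the black-box setting itself can be sketched as a query-only hill climb: the attacker sees nothing but output probabilities and greedily nudges input features toward a chosen target class. Everything below (the 3-class linear model, step size, and query budget) is a hypothetical stand-in, not the authors' method:

```python
import numpy as np

# Hedged sketch of a black-box TARGETED attack: a query-based hill climb
# that uses only the model's output probabilities. This is NOT the authors'
# CapsAttacks algorithm; the model, step size, and budget are placeholders.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))               # placeholder 3-class model

def probs(v):                             # the attacker's only access point
    z = W @ v
    e = np.exp(z - z.max())               # numerically stable softmax
    return e / e.sum()

x = rng.normal(size=8)
target = int(np.argmin(probs(x)))         # deliberately pick a hard target
x_adv, step = x.copy(), 0.1

for _ in range(500):                      # query budget
    if int(np.argmax(probs(x_adv))) == target:
        break                             # attack succeeded
    best, best_p = None, probs(x_adv)[target]
    for i in range(x_adv.size):           # probe each feature in both directions
        for delta in (step, -step):
            trial = x_adv.copy()
            trial[i] += delta
            p = probs(trial)[target]
            if p > best_p:
                best, best_p = trial, p
    if best is None:                      # no single move helps: give up
        break
    x_adv = best

print("target reached:", int(np.argmax(probs(x_adv))) == target)
```

The paper's algorithm additionally constrains perturbations to stay imperceptible and robust, which this bare-bones hill climb does not attempt.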

The results showed that the 9-layer VGGNet’s vulnerability to adversarial attacks was slightly lower than the CapsuleNet’s, as the VGGNet requires more perceivable perturbations to be fooled. Although the CapsuleNet has a much higher learning capability than the VGGNet, this was not reflected in the prediction confidence.

The paper CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks is on arXiv.


Author: Yuqing Li | Editor: Michael Sarazen
