Attacks Page

On this page you can run adversarial attacks against the model.

Currently we support 3 poisoning attacks:

  • GAN-based poisoning attack

  • Random label-swapping attack

  • Targeted label-flipping attack

Here we perform the targeted label-flipping attack with a poisoning rate of 20% and the target label "Web".

As expected, the number of "Web" labels in the poisoned training dataset is larger than in the original dataset.
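
The sketch below illustrates the idea behind a targeted label-flipping attack: a random fraction of non-target samples has its label overwritten with the target label. The function name, the pandas-based label storage, and the example class names are assumptions for illustration only, not the project's actual API.

```python
import numpy as np
import pandas as pd

def flip_labels_to_target(labels: pd.Series, target_label: str = "Web",
                          poisoning_rate: float = 0.20,
                          seed: int = 0) -> pd.Series:
    """Flip a random fraction of non-target labels to the target label."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    # Only samples that do not already carry the target label are candidates.
    candidates = poisoned.index[poisoned != target_label]
    n_flip = int(poisoning_rate * len(poisoned))
    flipped_idx = rng.choice(candidates, size=min(n_flip, len(candidates)), replace=False)
    poisoned.loc[flipped_idx] = target_label
    return poisoned

# Hypothetical example: with a 20% poisoning rate and target label "Web",
# the poisoned labels contain more "Web" entries than the originals.
labels = pd.Series(["Web", "DoS", "Probe", "DoS", "Web", "Probe", "DoS", "R2L", "U2R", "DoS"])
poisoned_labels = flip_labels_to_target(labels, target_label="Web", poisoning_rate=0.20)
print((labels == "Web").sum(), "->", (poisoned_labels == "Web").sum())
```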
