The Benefits of Adversarial AI
What is Adversarial AI?
In case you haven’t been introduced to Adversarial AI, its name may elicit ideas of a dispassionate intelligence in opposition to the goals of man, à la Terminator, Wintermute, or any other dystopian cyberpunk vision concerned with man’s future competition with artificial super-intelligence.
However, the method of Adversarial AI, or Generative Adversarial Networks (GANs), is the production, checking, and recursive refinement of a skill through the interplay of two neural networks. One network, the generative network, works to create new data; the other, the discriminative network, evaluates the results and provides feedback to the first.
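To make that interplay concrete, here is a minimal sketch of the two-network loop. The post includes no code, so the framework (PyTorch) and the toy one-dimensional target distribution are my assumptions; the point is only to show the generator turning random noise into candidate samples while the discriminator scores them and, through its gradients, feeds its judgment back to the generator.

```python
import torch
import torch.nn as nn

# "Real" data: samples from a simple Gaussian the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: maps random noise to a candidate "real-looking" sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that its input came from the real data.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    # Train the discriminator: real samples should score 1, generated samples 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: its output should fool the discriminator into scoring 1.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should drift toward the real mean (about 4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

After enough steps the generator’s samples cluster around the real distribution, even though no human intervened during training.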
For the moment, let’s set aside the practical applications and focus on some difficulties that the deep learning community has been wrestling with and that Adversarial AI manages to surmount.
Adversarial AI Works with Minimal Labeled Data Pools
The first hurdle to training a neural network on a new subject is the need for a large pool of labeled data. Building that pool is a monotonous and extremely time-consuming job. Adversarial networks, however, start from a different perspective: the valuable data is the data both networks learn from, and much of that data is supplied by the generative network itself. The real data set that the discriminative network has access to can remain relatively small compared to other methods.
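In code, the contrast looks roughly like the sketch below (again assuming PyTorch and the toy setup from the earlier sketch): the pool of real examples stays small and fixed, while the generator manufactures a fresh batch of training input for the discriminator on every step.

```python
import torch
import torch.nn as nn

# A deliberately small pool of real examples, standing in for the hand-labeled data.
real_pool = torch.randn(200, 1) * 1.5 + 4.0   # only 200 real samples in total

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

def real_batch(n):
    # Every discriminator step resamples from the same small, fixed real pool.
    idx = torch.randint(0, real_pool.shape[0], (n,))
    return real_pool[idx]

def fake_batch(n):
    # The generator supplies an effectively unlimited stream of new training
    # examples for the discriminator: a brand-new batch on every step.
    return generator(torch.randn(n, 8))
```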
Training with Less Human Supervision
The dyadic structure of Adversarial AI doesn’t exclude human supervision entirely, but the majority of the work can be done independently of humans. Notice that the simple tweet by Chris Olah above doesn’t include a human participant. Some, like Luke Gutzwiller, have argued that humans cannot be removed from training loops. He points out that currently, “image classification uses deep learning, as do many of the most exciting approaches to generative models, like generative adversarial networks. Deep learning is very good at identifying statistical regularities in large datasets, but recent work has shown it can also be fooled by the addition of certain kinds of statistical noise to its inputs.”
I agree that the end result will be evaluated by a human observer; that is a step that cannot be avoided, at least not yet. Conceptually, however, the system runs on its own until the desired result is approximated. Of course, Gutzwiller’s article is from May of 2017, and advances have been made since then. Engineers Parham Aarabi and Avishek Bose at the University of Toronto used adversarial AI to create a filter that makes minute changes to a photo in order to disrupt a facial recognition algorithm. This idea takes the small perturbations Gutzwiller references and makes them useful. Theoretically, it could add a layer of privacy to your photos on social media. You can read about this here.
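The perturbation idea itself is easy to sketch. The following is not the Aarabi and Bose filter; it is a minimal illustration of the well-known fast gradient sign method, with a placeholder classifier, image size, and epsilon. It shows how a barely visible, gradient-directed nudge of the pixels is exactly the kind of structured “statistical noise” Gutzwiller describes.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged by a tiny, adversarially chosen amount.

    `model` is any differentiable classifier; `epsilon` bounds how far each
    pixel may move, so the change is nearly invisible to a human viewer.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the model's loss.
    return (image + epsilon * image.grad.sign()).detach()

# Toy stand-in for a pretrained classifier (hypothetical; any model would do).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])

adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max().item())  # per-pixel change stays <= epsilon
```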
High-Fidelity Results
One of the most interesting outcomes of this low-data, low-supervision strategy is that Adversarial AI produces very high-quality results. Ian Goodfellow, presenting in 2016, showed several comical early results in which adversarial AI produced images that had many characteristics of realistic animals but lacked any discernible anatomy. One set of early results showed the need for the generative network to account for the number of features a particular animal has in its anatomy; another showed the need to train the network to also account for the perspective of the animal in the image. These were not the only examples, however, and a contemporary of his (cited only as Nguyen et al., 2016) showed some great results with animals and scenes that are both recognizable and high quality.
[Embedded video: Ian Goodfellow’s 2016 introduction to Generative Adversarial Networks]
If you’re more interested in the abilities and potential of these networks, I encourage you to watch Goodfellow’s introduction linked above. He states at the end of the talk that the idea for this approach comes from contemporary human psychology, and specifically from the ideas put forward by Anders Ericsson.
“… the way to become really good at any particular task is to do that task a lot, but also to do deliberate practice. You’re not just putting in a lot of hours; you’re specifically choosing subtasks within the skill you’re trying to get good at that are especially difficult for you, and getting feedback from an expert who coaches you. You can think of adversarial training as capturing both of these aspects of developing a skill. Rather than just training on lots and lots of training examples, you’re training on the worst-case inputs that are really hard for the model. And in the case of adversarial networks you have an expert, the discriminator, coaching the generator on what it should have done instead. So, a lot of insights from human psychology and human learning are actually telling us how we can make machine learning more effective.”
If you’re interested in more applications of Adversarial AI, please enjoy the following links, or contact us here.
To learn more about our Deep Learning work, visit our page here.