The miracles of AI have been making their way into every aspect of our lives. From the time of Alan Turing until now, AI has evolved significantly.
We are living in the most technologically progressive period of our existence. Because of this, people now have a clearer vision of a human civilization that includes space exploration, safer transportation, and, overall, a better quality of life.
What is more exciting is the immense ocean of possibilities with AI. Still, we have yet to decipher what the end results of AI will be. Not to mention, even scientists cannot agree on what AI will ultimately bring about. Stephen Hawking famously said in 2017 that AI could lead to the end of civilization as we know it.
Others, including researchers and scientists with less somber predictions, are excited about AI and its effects. But this just goes to show how difficult it is to predict the future of AI.
In recent years, a new trend has been popping up in headlines: a concept that sounds pretty outrageous to some and leaves others excessively intrigued. It goes by the term "Dueling AI".
Hold your horses now! Sadly, you won't be getting a front-row seat to a "Fight to the Death Match: Robot Edition" anytime soon. Dueling AI doesn't use violence to resolve conflicts, but a form of competition that can open up a plethora of technological revolutions.
Who would have thought that scraps of metal would have better communication etiquette than humans?
What Does it Entail?
Broadly speaking, Dueling AI is a training setup in which two AI-infused machines train each other, with both learning from their errors.
This idea was first conceived by one of the most famed AI scientists in the world, Ian Goodfellow. The idea is now known as GANs, or Generative Adversarial Networks. Ironically enough, it came to him while inebriated at a bar when he was a Ph.D. student at the University of Montreal. Despite its 'alcoholic' origins, the idea seemed like a good way to test the limits of AI.
Imagine a game between an artist and an art critic. The artist's goal is to fool the art critic into believing its images are real. Because the critic AI tries its best to identify the images as fake, the artist AI learns to create images that look as real as possible. In this way, the two AIs push each other to learn more instead of being trained by humans.
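The artist-and-critic game above can be sketched in a few lines of code. What follows is a minimal, illustrative NumPy toy (not Goodfellow's original implementation): a linear "artist" learns to forge samples from a target Gaussian while a logistic "critic" learns to flag the fakes, and the two updates alternate. All parameter names and hyperparameters here are my own assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Artist (generator): affine map of noise z -> fake sample, parameters a, b
a, b = 0.1, 0.0
# Critic (discriminator): logistic regression on a scalar sample, parameters w, c
w, c = 0.1, 0.0

lr = 0.05
real_mu, real_sigma = 3.0, 1.0  # the target distribution the artist must mimic

for step in range(2000):
    # --- Critic update: push D(real) toward 1 and D(fake) toward 0
    z = rng.normal(size=32)
    fake = a * z + b
    real = rng.normal(real_mu, real_sigma, size=32)
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # gradients of the binary cross-entropy loss w.r.t. w and c
    gw = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    gc = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Artist update: push D(fake) toward 1, i.e. fool the critic
    z = rng.normal(size=32)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    # chain rule through the artist's parameters
    ga = np.mean((d_fake - 1) * w * z)
    gb = np.mean((d_fake - 1) * w)
    a -= lr * ga
    b -= lr * gb

# After training, b (the mean of the fakes) drifts toward real_mu
print(f"fake mean b = {b:.2f}")
```

Even in this toy, the adversarial dynamic is visible: as the critic learns to separate real from fake, its gradients drag the artist's output toward the real distribution, with neither side labeled by a human.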
The Idea is Born!
The idea was born at 'Les 3 Brasseurs' in Montreal, when Ian met a few of his friends who had gathered to see off Razvan Pascanu (one of their friends, now a researcher at Google's DeepMind).
In the middle of the conversation, the idea was floated of using mathematics to determine the elements that go into a photograph. The man with the idea then went on about how he would feed these statistics into a machine and let the machine create images based on them.
The beer-soaked and slightly tipsy Goodfellow said that the statistics were far too many for anyone to enumerate, and therefore the idea was not practical to execute. He then proposed that a better way to do such a task might be to bring neural networks into the equation. In this way, he believed, two neural networks would be able to train each other on how to build images that are more realistic.
The inevitable then took place, and an argument ensued. Historically, nothing good comes out of a discussion between drunk friends at a bar. But this was a once-in-a-blue-moon event: the idea generated there is now a possibility with gargantuan potential. That day, though, no one supported or believed that Ian's idea could work.
Later he went home and found his girlfriend sleeping. Convinced that the feedback from his academic allies was wrong, he stayed up and worked on the code for GANs. "That was really, really lucky," he says, "because if it hadn't worked, I might have given up on the idea." That year, Ian and a few of his fellow researchers published a paper, and since then many papers have been released that try to explore this concept in detail.
Teen Years of GANs!
Though a comparatively new technology, GANs have hit significant milestones in becoming better suited to advanced tasks. This caused the genesis of many other mind-blowing architectures such as DCGAN, Cycle-GAN, Pix2Pix, BigGAN, and many others. So, let us glance through a few of them:
DCGAN made its first appearance in "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", the work of Alec Radford, Luke Metz, and Soumith Chintala. Usual issues like training instability, internal covariate shift, and mode collapse can potentially be tackled by DCGAN, and many GAN architectures have come into existence based on its structure.
If you are someone who is inspired by the collections of Pablo Picasso, Vladimir Kush, and Van Gogh and ever wanted to be painted by them... well, that is not really possible. So, Cycle-GAN brings the next best thing for art admirers.
Cycle-GAN allows your selfies to appear as if drawn by a Renaissance or a surrealist painter. And it is not limited to that: with Cycle-GAN you can throw yourself into a typical image clicked in the quaint black-and-white of the '50s.
It was introduced to the world in the paper titled “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks”.
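The trick that lets Cycle-GAN translate between two domains without paired examples is its cycle-consistency loss: translating a photo into a painting and back should return the original photo. The toy below illustrates only that loss, deliberately dropping the adversarial losses of the real model; the one-parameter linear maps G and F are purely hypothetical stand-ins for the two generators.

```python
import numpy as np

rng = np.random.default_rng(1)

g = 0.3  # parameter of G(x) = g * x  (think: photo -> painting)
f = 0.3  # parameter of F(y) = f * y  (think: painting -> photo)
lr = 0.01

for step in range(4000):
    x = rng.normal(size=64)        # samples from domain X (photos)
    y = 2.0 * rng.normal(size=64)  # samples from domain Y (paintings)

    # Cycle-consistency residuals: F(G(x)) - x  and  G(F(y)) - y
    x_cycle = f * g * x - x
    y_cycle = g * f * y - y

    # Gradients of the squared cycle loss w.r.t. g and f
    gg = np.mean(2 * x_cycle * f * x) + np.mean(2 * y_cycle * f * y)
    gf = np.mean(2 * x_cycle * g * x) + np.mean(2 * y_cycle * g * y)
    g -= lr * gg
    f -= lr * gf

# After training, F approximately undoes G, so f * g sits near 1
print(f"f * g = {f * g:.3f}")
```

In the full Cycle-GAN, this same round-trip constraint is what keeps the translated image recognizably "you" while the adversarial losses make it look like a painting.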
Pix2Pix performs similar feats: if the input is a rough outline of a cat, the output might just be an adorable little kitten. Translating sketches into fully colored images, producing colored versions of the Queen's coronation and memorial-day footage, and bringing a sunny effect to night images, along with many more, are excellent aspects of Pix2Pix.
BigGAN was introduced in a paper by an intern and two fellow researchers at Google's DeepMind division, titled "Large Scale GAN Training for High Fidelity Natural Image Synthesis". Images generated by BigGAN achieved high fidelity and a low variation gap. The Inception Score soared from a previous 52.52 to 166.3, more than 100% better than the state of the art.
These results have placed BigGAN on one of the highest pedestals possible.
AI is an immense universe of unbounded and unpredictable possibilities. The examples above clearly show the results when two neural networks train each other on images. We still do not know what other results can be concocted when AIs train each other, which is why AI remains a growing field with ample opportunities. That so many companies are investing heavily in AI demonstrates the faith people have in it.
I hope you found this article informative and helpful. If you would like to share any feedback, we would love to hear from you in the comments below.