Deep-learning technology has made big strides over the last few years, enabling computers to learn from images, recognize human faces and their emotions, and even diagnose diseases from X-ray images.
The technology has opened up a series of new possibilities such as self-driving cars and chatbots.
So far, however, these artificial intelligence (AI) systems are good at recognizing things, but not at creating things.
The newly developed generative adversarial network (GAN) algorithms may change that.
Inspired by a casual conversation, Google scientist Ian Goodfellow came up with the idea of pitting one neural network against another.
A GAN has two main components: a generator neural network and a discriminator neural network.
Professor Qiang Yan of the Hong Kong University of Science and Technology has shown how the concept works by using GAN to enable a machine to write calligraphy.
Imagine a machine that plays the role of a student who is trying to imitate the work of master calligraphers. Another machine serves as the teacher by providing feedback.
Through the interaction between the two, the student machine's work can become as good as the master's originals.
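The student/teacher loop described above is the core of GAN training: the discriminator (teacher) learns to tell real samples from generated ones, while the generator (student) learns to fool it. Below is a minimal, hand-rolled sketch of that adversarial loop on toy one-dimensional data, with an affine generator and a logistic discriminator; all names, the target distribution N(3, 1), and the hyperparameters are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Master" data the student should imitate: samples from N(3, 1) (an assumption).
def real_batch(n):
    return rng.normal(3.0, 1.0, n)

# Generator (student): affine map of noise, g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator (teacher): logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    # --- Discriminator update: learn to separate real from fake ---
    x = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * xf + c)
    # Gradients of -log D(x) - log(1 - D(xf)) w.r.t. w and c
    gw = np.mean(-(1 - d_real) * x + d_fake * xf)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc
    # --- Generator update: fool the teacher (non-saturating loss -log D(g(z))) ---
    z = rng.normal(0.0, 1.0, batch)
    xf = a * z + b
    d_fake = sigmoid(w * xf + c)
    ga = np.mean(-(1 - d_fake) * w * z)
    gb = np.mean(-(1 - d_fake) * w)
    a -= lr * ga
    b -= lr * gb

# After training, the student's samples should cluster near the master's mean of 3.
fake = a * rng.normal(0.0, 1.0, 10000) + b
print(round(float(np.mean(fake)), 1))
```

Real GANs replace the affine map and logistic classifier with deep networks, but the alternating update structure is the same.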
In the future, AI could not only create beautiful paintings or compose music but also reduce dependence on human guidance.
Machines will be able to better absorb raw data and figure out how to learn on their own, marking a major breakthrough in unsupervised learning for AI.
The most apparent applications of GAN lie in areas involving large amounts of images, such as fashion or interior design, but the technique could also be used, for instance, to analyze the effects of medicines.
Yann LeCun, the head of Facebook’s internal artificial intelligence research division, called GAN the “coolest idea in the deep learning sector over the last 20 years”.
This article appeared in the Hong Kong Economic Journal on June 14
Translation by Julie Zhu