For those who love the look of anime but can’t draw to save their lives, there is hope.
In an August 2017 research paper titled “Towards the Automatic Anime Characters Creation with Generative Adversarial Networks,” Yanghua Jin and his co-authors Jiakai Zhang, Minjun Li, Yingtao Tian, Huachun Zhu and Zhihao Fang showed that a generative adversarial network (GAN) could automatically generate high-quality, appropriately stylized anime character faces.
According to the report, the researchers “… propose a model that produces anime faces at high quality with promising rate of success. Our contribution can be described as three-fold: A clean dataset, which we collected from Getchu, a suitable GAN model, based on DRAGAN, and our approach to train a GAN from images without tags, which can be leveraged as a general approach to training supervised or conditional model without tag data.”
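The DRAGAN technique the paper builds on adds a gradient penalty to the discriminator's loss: real samples are perturbed with noise, and the discriminator is penalized when the norm of its input gradient at those perturbed points strays from 1, which stabilizes training. The sketch below is not the paper's implementation; it uses a hypothetical toy logistic discriminator (with a hand-derived gradient) purely to make the penalty term concrete, and the coefficients `lam` and `k` are common defaults rather than values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def disc(x, w, b):
    # toy logistic "discriminator": D(x) = sigmoid(w.x + b)
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def disc_grad_x(x, w, b):
    # analytic gradient of D with respect to its input: D(1-D) * w
    d = disc(x, w, b)[:, None]
    return d * (1.0 - d) * w

def dragan_penalty(real, w, b, lam=10.0, k=0.5):
    # DRAGAN-style penalty: perturb *real* samples with noise scaled by
    # their standard deviation, then push the discriminator's gradient
    # norm at those perturbed points toward 1
    noise = k * real.std() * rng.standard_normal(real.shape)
    perturbed = real + noise
    grad_norm = np.linalg.norm(disc_grad_x(perturbed, w, b), axis=1)
    return lam * np.mean((grad_norm - 1.0) ** 2)

# example: a batch of 64 two-dimensional "real" samples
real = rng.standard_normal((64, 2))
penalty = dragan_penalty(real, np.array([0.3, -0.2]), 0.1)
```

In a real training loop this penalty is simply added to the discriminator's classification loss before the backward pass; in a deep-learning framework the input gradient would come from automatic differentiation rather than a closed form.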
And they succeeded, producing a model that consistently generates convincing anime faces.
The report’s conclusion states, “There still remain some issues for us for further investigations. One direction is how to improve the GAN model when class labels in the training data are not evenly distributed. Also, quantitative evaluating methods under this scenario should be analyzed, as FID only gives measurement when the prior distribution of sampled labels equals to the empirical labels distribution in the training dataset. This would lead to a measure bias when labels in the training dataset are unbalanced. Another direction is to improve the final resolution of generated images. Super-resolution seems a reasonable strategy, but the model need to be more carefully designed and tested.”
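The FID (Fréchet Inception Distance) mentioned in that conclusion scores generated images by comparing the mean and covariance of their feature vectors against those of real images: lower is better, and the bias the authors describe arises because the score assumes both sets were sampled with the same label distribution. As a minimal sketch of the metric itself (operating on generic feature arrays, not on the paper's actual Inception features):

```python
import numpy as np

def _sqrtm_psd(a):
    # matrix square root of a symmetric positive-semidefinite matrix
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(x, y):
    # Frechet distance between Gaussians fit to two feature sets (n, d):
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))
    mu_diff = x.mean(0) - y.mean(0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    # Tr((S1 S2)^(1/2)) computed via the symmetric form S1^(1/2) S2 S1^(1/2)
    s1_half = _sqrtm_psd(s1)
    covmean = _sqrtm_psd(s1_half @ s2 @ s1_half)
    return float(mu_diff @ mu_diff + np.trace(s1 + s2 - 2.0 * covmean))
```

Identical feature sets score (numerically) zero, and the score grows as the two distributions drift apart, which is why a mismatch between the sampled and empirical label distributions skews the measurement.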
So who is Yanghua Jin? At the time of the paper, he was a senior undergraduate at Fudan University in Shanghai, completing a B.S. in computer science with honors. According to his personal bio, “My research interest focuses on deep generative models, especially on the image generation and image transformation with deep learning.”