Thanks to recent progress in artificial intelligence, and particularly in deep learning, it is now possible to recognize facial traits, such as gender and age, and to change their appearance in a given face image. For example, a person in an old photo can be aged to look 30 years older. Such technologies open exciting new opportunities in the personalization of services and in robust identity verification from faces.
Gender and age as key facial traits
Being a “window to the soul” according to some psychologists, the human face is a rich source of information about a person. In particular, gender and age are facial traits which are useful in many practical scenarios.
Indeed, imagine an automatic vending machine equipped with a simple camera that prevents minors from buying alcohol or tobacco, or a humanoid robot that uses the proper form of salutation depending on a person's gender. Today, automatic gender recognition and age estimation are already widely used in commerce to profile customers interested in certain products and to target advertisements accordingly. More generally, for large IT companies such as Orange, recognition of facial traits is essential for the automatic indexing of large collections of photos of people.
Moreover, some scenarios require not only estimating the age in a photo but also changing it. This is the case, for example, at border control, where an old passport portrait might be aged to better match the current age of the passport's owner, or in police applications, where only a juvenile photo of a wanted individual is available.
Deep learning approach
Researchers from the Multimedia contents Analysis technologieS (MAS) team of Orange Labs have recently proposed cutting-edge solutions for automatic gender recognition and age estimation from faces, as well as for aging and rejuvenating photos. The developed technologies are based on deep learning, the domain of artificial intelligence which has experienced breathtaking progress in recent years. In particular, deep learning has revolutionized image recognition, surpassing human performance on numerous tasks such as face recognition.
A deep learning model is an artificial neural network with many hidden layers. It is trained to perform certain tasks (e.g., gender recognition and age estimation) by learning from a provided set of examples. A key property of such a model is its generalization capacity, which measures how well the model performs on new examples outside the training dataset. Deep learning models are able to scale up to large amounts of training data and to generalize remarkably better than alternative approaches. This ability to generalize learned knowledge is also a vital part of human intelligence, because it allows us to understand and interpret the world around us based on our experience. For example, a child who has seen several cars is already capable of identifying a vehicle he or she has never seen before as a car.
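As a minimal illustration of the idea of hidden layers, here is a toy network with a single hidden layer sketched in NumPy. This is only a pedagogical sketch: the actual Orange Labs models are deep convolutional networks with many such layers, and all dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation applied after the hidden layer
    return np.maximum(0.0, x)

def softmax(x):
    # Turns raw scores into probabilities (e.g., P(male) and P(female))
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy dimensions: a 64-value "image", 16 hidden units, 2 output classes
W1 = rng.normal(scale=0.1, size=(64, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))
b2 = np.zeros(2)

def forward(x):
    # One hidden layer; a deep network chains many of these
    h = relu(x @ W1 + b1)
    return softmax(h @ W2 + b2)

probs = forward(rng.normal(size=(1, 64)))  # probabilities for the 2 classes
```

Training consists of adjusting the weights W1, b1, W2, b2 so that the outputs match the annotated examples; generalization is then measured on examples the network has never seen.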
Recognizing gender and age from faces
Training deep neural networks to recognize gender and age in human faces requires a huge number of training examples (e.g., photos annotated with the corresponding genders and ages). The Orange Labs researchers therefore trained on about 250,000 photos of celebrities, together with the corresponding metadata, which are publicly available from IMDb and Wikipedia.
After data preparation and design of the neural networks, the developed solutions are able to recognize gender with an accuracy of about 99% and to estimate age with an average error of 4 years, when evaluated in “uncontrolled” conditions (i.e., on real-life photos of varying quality). As a result, in 2016, the Orange Labs team won an international competition on apparent age estimation. Moreover, the age estimation solution has been shown to be competitive even against human participants in the popular French TV game “Guess My Age”, aired on the C8 channel.
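The “average error of 4 years” refers to the mean absolute error (MAE), the standard metric for age estimation. On hypothetical predictions (the ages below are made up for illustration), it is computed as:

```python
import numpy as np

def age_mae(predicted, true):
    # Mean absolute error: average of |predicted age - true age|
    return float(np.mean(np.abs(np.asarray(predicted) - np.asarray(true))))

# Hypothetical predicted vs. true ages, for illustration only
mae = age_mae([25, 33, 61, 47], [28, 30, 58, 50])
# mae == 3.0 (each prediction is off by exactly 3 years here)
```

An MAE of 4 years thus means that, on average, the estimated age differs from the true age by 4 years.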
Editing age in faces
From a scientific point of view, aging or rejuvenating a given face is a much more complex task than age estimation. Indeed, the former requires global modelling of the anthropometry of the whole face. In particular, editing the apparent age must preserve the identity of the person in the original photo.
Orange researchers addressed this challenge by employing Generative Adversarial Networks (GANs), a relatively new class of deep learning models proposed in 2014. A typical GAN is a pair of neural networks trained together: the first (called the “generator”) produces synthetic faces, and the second (called the “discriminator”) evaluates whether the synthesized faces are plausible. The discriminator can be seen as an art critic who advises an artist (i.e., the generator) on how to improve the quality of his or her paintings.
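Formally, this competition between the two networks can be written as a minimax game, as in the original GAN formulation: the discriminator D tries to maximize its ability to tell real faces x from synthetic ones G(z), while the generator G tries to minimize it:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
 + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

Here z is a random “latent” vector from which the generator synthesizes a face; at the equilibrium of this game, the generated faces become indistinguishable from real ones.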
The generator of a basic GAN can only produce natural-looking random faces, but the Orange Labs researchers have proposed a GAN-based algorithm for editing age in a given face. For example, the figure below illustrates the aging and rejuvenation of photos of the two authors of the described approach and of the present blog post.
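A common way to make a generator age-aware is to condition it on a target age: the random latent vector is concatenated with a one-hot encoding of an age group before being fed to the network. The sketch below illustrates only this conditioning step; the age groups and dimensions are illustrative assumptions, not the exact Orange Labs model.

```python
import numpy as np

# Hypothetical age groups used as the conditioning signal
AGE_GROUPS = ["0-18", "19-29", "30-39", "40-49", "50-59", "60+"]

def conditional_input(z, age_group):
    # Concatenate the latent code with a one-hot age condition;
    # a conditional generator learns to synthesize a face of that age
    one_hot = np.zeros(len(AGE_GROUPS))
    one_hot[AGE_GROUPS.index(age_group)] = 1.0
    return np.concatenate([z, one_hot])

z = np.random.default_rng(0).normal(size=100)  # latent code for one face
aged = conditional_input(z, "60+")        # same face, conditioned as 60+
rejuvenated = conditional_input(z, "0-18")  # same face, conditioned as 0-18
```

Keeping the latent code z fixed while changing the age condition is what lets the same face be aged or rejuvenated without losing its identity.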
The Orange Labs gender/age recognition and editing algorithms have attracted the attention of several important French and international science media outlets. Moreover, the developed technology has been integrated into the team's face analysis engine, which also includes face detection, face tracking, landmark detection, identity recognition and emotion recognition components. This engine, which was exhibited at Orange's annual “Salon de la Recherche” in 2017 (the video below shows the part of our demo devoted to the solutions presented in this post), therefore constitutes a complete facial analysis toolkit, similar to those offered by Google, Microsoft and Amazon.
At the same time, deep learning is currently progressing at an unprecedented pace, with significant breakthroughs being made on a weekly basis. The Orange researchers therefore continue to improve their solutions and to extend them to the analysis of other facial traits, most notably human emotions.