Age-related facial analysis with deep learning

Abstract

Age, as an important soft biometric trait, can be inferred from the appearance of human faces. However, compared to other facial attributes such as race and gender, age is rather subtle because it depends on the underlying conditions of individuals (e.g., their upbringing environment and genes). These uncertainties leave age-related facial analysis (including age estimation, age-oriented face synthesis and age-invariant face recognition) still unsolved. In this thesis, we study these age-related problems and propose several deep learning-based methods, each tackling a problem from a specific aspect.

We first propose a customised Convolutional Neural Network architecture called FusionNet, along with its extension, to study the age estimation problem. Although faces are composed of numerous facial attributes, most deep learning-based methods still treat a face as a generic object and do not pay enough attention to the facial regions that carry age-specific features for this particular task. Therefore, the proposed methods take several age-specific facial patches as part of the input to emphasise the learning of age-specific features. Through extensive evaluation, we show that these methods outperform existing methods on age estimation benchmark datasets under various evaluation metrics.
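For illustration only, the following minimal PyTorch sketch shows the general patch-fusion idea described above: a full-face stream is combined with several lightweight streams over age-specific patches before the age prediction head. The module names, layer choices and the age-classification head are assumptions made for this sketch, not the thesis implementation.

```python
import torch
import torch.nn as nn

class PatchFusionNet(nn.Module):
    """Illustrative network that fuses a full-face stream with several
    age-specific patch streams (e.g. forehead, eye corners)."""
    def __init__(self, num_patches=3, num_age_classes=101):
        super().__init__()
        # Convolutional stream for the whole face.
        self.face_stream = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One lightweight stream per age-specific patch.
        self.patch_streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            ) for _ in range(num_patches)
        ])
        # Fused features -> age prediction (classification over ages here).
        self.classifier = nn.Linear(64 + 16 * num_patches, num_age_classes)

    def forward(self, face, patches):
        # face: (B, 3, H, W); patches: list of num_patches tensors (B, 3, h, w)
        feats = [self.face_stream(face)]
        feats += [stream(p) for stream, p in zip(self.patch_streams, patches)]
        return self.classifier(torch.cat(feats, dim=1))
```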

Then, we propose a Generative Adversarial Network (GAN) model for age-oriented face synthesis. Specifically, to ensure that the synthesised images fall within the target age groups, this method tackles the mode collapse issue in vanilla GANs with a novel Conditional Discriminator Pool (CDP), which consists of multiple discriminators, each targeting one particular age category. To ensure that the identity information is unaltered in the synthesised images, our method uses a novel Adversarial Triplet loss. This loss, which is based on the Triplet loss, adds a ranking operation that further pulls the positive embedding towards the anchor embedding, resulting in significantly reduced intra-class variances in the feature space. Through extensive experiments, we show that our method can precisely transform input faces into the target age category while preserving the identity information in the synthesised faces.
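As a hedged illustration of the Conditional Discriminator Pool idea, the sketch below (assuming PyTorch) keeps one small discriminator per age group and routes each image to the discriminator matching its target age category. The discriminator body and all names are placeholders, not the thesis code.

```python
import torch
import torch.nn as nn

class ConditionalDiscriminatorPool(nn.Module):
    """One discriminator per age group; each sample is scored by the
    discriminator that matches its target age category."""
    def __init__(self, num_age_groups=5):
        super().__init__()
        self.discriminators = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, 1),  # real/fake score
            ) for _ in range(num_age_groups)
        ])

    def forward(self, images, age_group_ids):
        # images: (B, 3, H, W); age_group_ids: (B,) long tensor of group indices
        scores = torch.empty(images.size(0), 1, device=images.device)
        for g, disc in enumerate(self.discriminators):
            mask = age_group_ids == g
            if mask.any():
                scores[mask] = disc(images[mask])
        return scores
```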

Last but not least, we propose Disentangled Contrastive Learning (DCL) for unsupervised age-invariant face recognition (AIFR). Different from existing AIFR methods, DCL, which aims to learn disentangled identity features, can be trained on any facial dataset and then tested on age-oriented datasets. Moreover, by utilising a set of three augmented samples derived from the same input image, DCL can be trained directly on small-sized datasets with promising performance. We further modify the conventional contrastive loss function to fit this training strategy with three augmented samples. We show that our method dramatically outperforms previous unsupervised methods and other contrastive learning methods.
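The following sketch shows one plausible way to extend a standard NT-Xent-style contrastive loss to three augmented views of the same image, treating all views of a source image as mutual positives. It is an assumed formulation for illustration, not the exact modified loss proposed in the thesis.

```python
import torch
import torch.nn.functional as F

def three_view_contrastive_loss(z1, z2, z3, temperature=0.1):
    """z1, z2, z3: (B, D) embeddings of three augmentations of the same images.
    Views derived from the same source image are treated as positives."""
    device = z1.device
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2, z3], dim=0), dim=1)   # (3B, D)
    sim = z @ z.t() / temperature                             # scaled cosine similarities

    labels = torch.arange(3 * n, device=device) % n           # same source -> same label
    pos_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(3 * n, dtype=torch.bool, device=device)
    pos_mask = pos_mask & ~self_mask                          # exclude self-pairs

    sim = sim.masked_fill(self_mask, float('-inf'))           # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Negative mean log-likelihood of each sample's two positive views.
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1)
    return loss.mean()
```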

Item Type: Thesis [via Doctoral College] (PhD)
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
T Technology > TA Engineering (General). Civil engineering (General)
T Technology > TK Electrical engineering. Electronics. Nuclear engineering
Library of Congress Subject Headings (LCSH): Human face recognition (Computer science), Human beings -- Age determination, Neural networks (Computer science), Biometric identification
Official Date: September 2020
Dates: September 2020 (event: UNSPECIFIED)
Institution: University of Warwick
Theses Department: Department of Computer Science
Thesis Type: PhD
Publication Status: Unpublished
Supervisor(s)/Advisor: Li, Chang-Tsun ; Sánchez, Víctor
Sponsors: Horizon 2020 (Programme)
Format of File: pdf
Extent: xx, 134 leaves : colour illustrations
Language: eng
URI: https://wrap.warwick.ac.uk/160707/
