Enhancing face pose normalization with deep learning

In this study, we propose a hybrid method for face pose normalization that combines a 3-D model-based method with a stacked denoising autoencoder (SDAE) deep network. Instead of applying a mirroring operation to the invisible face parts of the posed image, the SDAE learns how to fill in those regions from a large set of training samples. In the performance evaluation, we compare the proposed method with four different pose normalization methods and investigate their effects on facial emotion recognition and face verification, in addition to visual quality tests. The methods evaluated in the experiments are 2-D alignment, a 3-D model-based method, a pure SDAE-based method, and a generative adversarial network-based normalization method. Experiments performed on the Multi-PIE dataset show that the proposed method produces visually reasonable results and outperforms the others in facial emotion recognition. On the other hand, 2-D alignment is sufficient for the verification problem, where detailed face characteristics should be preserved during normalization.
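The abstract does not give implementation details, but the core idea of training an SDAE to fill in self-occluded regions can be illustrated with a minimal sketch. The following PyTorch code is only an assumption-laden toy example: the network depth, layer sizes, image resolution, and the zero-masking used to simulate invisible pixels are all hypothetical choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

class StackedDenoisingAutoencoder(nn.Module):
    """Toy SDAE mapping a partially occluded face vector to a complete frontal face."""

    def __init__(self, input_dim=32 * 32, hidden_dims=(1024, 512, 256)):
        super().__init__()
        # Encoder: progressively narrower fully connected layers.
        enc, prev = [], input_dim
        for h in hidden_dims:
            enc += [nn.Linear(prev, h), nn.ReLU(inplace=True)]
            prev = h
        self.encoder = nn.Sequential(*enc)
        # Decoder: mirror of the encoder, ending with a sigmoid for pixel intensities.
        dec = []
        for h in reversed(hidden_dims[:-1]):
            dec += [nn.Linear(prev, h), nn.ReLU(inplace=True)]
            prev = h
        dec += [nn.Linear(prev, input_dim), nn.Sigmoid()]
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        return self.decoder(self.encoder(x))


def occlude(batch, mask):
    """Zero out pixels marked invisible (mask == 0), simulating self-occlusion."""
    return batch * mask


if __name__ == "__main__":
    model = StackedDenoisingAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()

    # Toy data standing in for frontal faces and a visibility mask (hypothetical).
    frontal = torch.rand(16, 32 * 32)
    mask = (torch.rand(16, 32 * 32) > 0.3).float()

    for step in range(5):
        corrupted = occlude(frontal, mask)        # simulate missing regions
        reconstructed = model(corrupted)          # SDAE fills them in
        loss = criterion(reconstructed, frontal)  # supervise with the clean frontal face
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss={loss.item():.4f}")
```

In practice, the corrupted input would come from the 3-D model-based rendering of a posed face, with the invisible regions left unfilled rather than mirrored, and the target would be the corresponding ground-truth frontal image.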