Face Recognition

Publication 1

Abhijith Punnappurath, A. N. Rajagopalan, Sima Taheri, Rama Chellappa, and Guna Seetharaman, "Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose," IEEE Transactions on Image Processing, Vol. 24, No. 7, pp. 2067-2082, July 2015.

[Listed in IEEE Signal Processing Magazine (March 2016 issue) as one of the top ten most downloaded papers in IEEE Transactions on Image Processing over the preceding year.]

Abstract

Existing methods for performing face recognition in the presence of blur are based on the convolution model and cannot handle non-uniform blurring situations that frequently arise from tilts and rotations in hand-held cameras. In this paper, we propose a methodology for face recognition in the presence of space-varying motion blur comprising arbitrarily shaped kernels. We model the blurred face as a convex combination of geometrically transformed instances of the focused gallery face, and show that the set of all images obtained by non-uniformly blurring a given image forms a convex set. We first propose a non-uniform blur-robust algorithm that exploits the assumption of a sparse camera trajectory in the camera motion space to build an energy function with an l1-norm constraint on the camera motion. The framework is then extended to handle illumination variations by exploiting the fact that the set of all images obtained from a face image by non-uniform blurring and changing the illumination forms a bi-convex set. Finally, we propose an elegant extension to also account for variations in pose.
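
As a sketch of the core model, the following minimal Python example (not the authors' implementation) writes a blurred probe as a convex combination of warped copies of the focused gallery face and recovers non-negative, l1-penalized weights standing in for the sparse camera trajectory. The toy pose space of in-plane shifts and rotations, the regularization weight, and the use of scikit-learn's Lasso solver are all illustrative assumptions.

import numpy as np
from scipy import ndimage
from sklearn.linear_model import Lasso

def build_warp_dictionary(gallery, shifts=range(-2, 3), angles=(-1.0, 0.0, 1.0)):
    # Columns are vectorized, geometrically transformed copies of the focused
    # gallery face; here the pose space is a toy grid of in-plane shifts and
    # rotations rather than the full camera motion space used in the paper.
    cols = []
    for ang in angles:
        rotated = ndimage.rotate(gallery, ang, reshape=False, order=1)
        for dy in shifts:
            for dx in shifts:
                warped = ndimage.shift(rotated, (dy, dx), order=1)
                cols.append(warped.ravel())
    return np.stack(cols, axis=1)            # shape: (num_pixels, num_poses)

def estimate_camera_weights(gallery, blurred_probe, lam=1e-3):
    # Fit blurred_probe ~ A @ w with w >= 0 and an l1 penalty on w,
    # reflecting the sparse-camera-trajectory assumption.
    A = build_warp_dictionary(gallery)
    g = blurred_probe.ravel()
    model = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=5000)
    model.fit(A, g)
    w = model.coef_                           # estimated transformation weights
    residual = np.linalg.norm(g - A @ w)      # reconstruction error for matching
    return w, residual

In this sketch, recognition amounts to fitting the probe against each gallery subject's warp dictionary and declaring the subject with the smallest reconstruction residual as the identity.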

Paper

IEEE

Citation

@ARTICLE{TIP_Abhijith,
  author={A. Punnappurath and A. N. Rajagopalan and S. Taheri and R. Chellappa and G. Seetharaman},
  journal={IEEE Transactions on Image Processing},
  title={Face Recognition Across Non-Uniform Motion Blur, Illumination, and Pose},
  year={2015},
  volume={24},
  number={7},
  pages={2067-2082},
  doi={10.1109/TIP.2015.2412379},
  ISSN={1057-7149},
  month={July},
}

Publication 2

Abhijith Punnappurath and A. N. Rajagopalan, "Recognizing blurred, nonfrontal, illumination, and expression variant partially occluded faces," Journal of the Optical Society of America A, Vol. 33, No. 9, pp. 1887-1900, September 2016.

Abstract

The focus of this paper is on the problem of recognizing faces across space-varying motion blur, changes in pose, illumination, and expression, as well as partial occlusion, when only a single image per subject is available in the gallery. We show how the blur incurred due to relative motion between the camera and the subject during exposure can be estimated from the alpha matte of pixels that straddle the boundary between the face and the background. We also devise a strategy to automatically generate the trimap required for matte estimation. Having computed the motion via the matte of the probe, we account for pose variations by synthesizing, from the intensity image of the frontal gallery, a face image that matches the pose of the probe. To handle illumination and expression variations, and partial occlusion, we model the probe as a linear combination of nine blurred illumination basis images in the synthesized non-frontal pose, plus a sparse occlusion. We also advocate a recognition metric that capitalizes on the sparsity of the occluded pixels. The performance of our method is extensively validated on synthetic as well as real face data.
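
The illumination-and-occlusion stage can likewise be sketched in Python (again, not the authors' code): the blurred, pose-synthesized probe is expressed as a linear combination of nine blurred illumination basis images plus a sparse occlusion term, and the combination coefficients are recovered by minimizing the l1 norm of the residual, which is robust to occluded pixels. The linear-programming reformulation and the final scoring rule below are illustrative choices, not necessarily the paper's exact formulation.

import numpy as np
from scipy import sparse
from scipy.optimize import linprog

def fit_illumination_with_occlusion(B, g):
    # Solve  min_c || g - B c ||_1  via the standard LP reformulation with
    # auxiliary variables t bounding the absolute residuals.
    # B: (num_pixels, 9) blurred illumination basis images as columns
    # g: (num_pixels,) vectorized probe in the synthesized non-frontal pose
    npix, nbasis = B.shape
    Bs = sparse.csc_matrix(B)
    I = sparse.identity(npix, format="csc")
    #  Constraints:  B c - t <= g   and   -B c - t <= -g
    A_ub = sparse.vstack([sparse.hstack([Bs, -I]),
                          sparse.hstack([-Bs, -I])], format="csc")
    b_ub = np.concatenate([g, -g])
    cost = np.concatenate([np.zeros(nbasis), np.ones(npix)])
    bounds = [(None, None)] * nbasis + [(0, None)] * npix
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    c = res.x[:nbasis]
    e = g - B @ c                             # sparse occlusion residual
    return c, e

def recognition_score(e):
    # Illustrative metric: the correct identity should leave only a sparse,
    # small-energy occlusion residual, so a smaller l1 norm is a better match.
    return np.abs(e).sum()

In practice the probe would be downsampled before solving the LP; under this sketch, the gallery subject yielding the lowest score is reported as the match.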

Paper

OSA

Citation

@article{JOSA_Abhijith,
  author = {Abhijith Punnappurath and Ambasamudram Narayanan Rajagopalan},
  journal = {J. Opt. Soc. Am. A},
  number = {9},
  pages = {1887--1900},
  publisher = {OSA},
  title = {Recognizing blurred, nonfrontal, illumination, and expression variant partially occluded faces},
  volume = {33},
  month = {Sep},
  year = {2016},
  url = {http://josaa.osa.org/abstract.cfm?URI=josaa-33-9-1887},
  doi = {10.1364/JOSAA.33.001887},
}