
PhotoFace: Face recognition using photometric stereo



The PhotoFace project covers two EPSRC-funded grants that started in April 2007. The aims are as follows:

  1. Use high-speed photometric stereo to rapidly capture facial geometry.
  2. Capture a new 3D face database for testing within the project and for the benefit of the worldwide face recognition research community.
  3. Apply novel and existing state-of-the-art face recognition algorithms to the dataset.
  4. Capture skin reflectance data in order to generate synthetic poses of any face captured by the device.

Stages 1 to 3 were researched in partnership with the Communications and Signal Processing Group at Imperial College London. We also worked in collaboration with the Home Office Scientific Development Branch (now the Home Office Centre for Applied Science and Technology) and with General Dynamics UK. For Stage 4, we are working with the University of Central Lancashire.

1. Face reconstruction

Photoface device booth

The device we have constructed is shown here. As detailed in our 2008 CVIU paper, both visible and near-infrared lights are feasible solutions, and the latter gives marginally superior reconstructions.

We currently operate with five light sources and a camera running at 210 fps. The total capture time is of the order of 30 ms, with high-speed synchronisation based on Field Programmable Gate Array (FPGA) technology.

The figure below shows an example of a raw image set recovered using the device.

 lighting variation

Application of Lambertian photometric stereo then gives the following field of surface normals:


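Under the Lambertian assumption, each pixel's intensity is the dot product of the albedo-scaled surface normal with the light direction, so with three or more known lights the normal field follows from a per-pixel least-squares solve. A minimal sketch in Python/NumPy (the function and variable names are illustrative, not the project's actual code):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate surface normals and albedo from images under known lights.

    images:     (k, h, w) intensity images, one per light source
    light_dirs: (k, 3) unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # (k, h*w)
    # Solve L @ G = I in the least-squares sense; G = albedo * normal per pixel
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                   # (h*w,)
    normals = G / np.maximum(albedo, 1e-8)               # unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With the rig's five lights the system is overdetermined, which is what makes the least-squares formulation (rather than an exact three-light solve) the natural choice.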
The figures below show the result of integrating the surface normals to recover a depth map. The second image shows the result of warping one of the raw images onto the surface.

 3D image



3D image with overlay
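The integration step can be sketched with the Frankot–Chellappa method, which finds the depth map whose gradients best match those implied by the normals by solving in the Fourier domain. This is one common choice of integrator, offered here as an assumption rather than the project's exact method:

```python
import numpy as np

def integrate_normals(normals):
    """Frankot-Chellappa integration of a normal field into a depth map.

    normals: (3, h, w) unit surface normals (nx, ny, nz)
    """
    nx, ny, nz = normals
    nz = np.where(np.abs(nz) < 1e-8, 1e-8, nz)
    p = -nx / nz                        # dz/dx implied by the normals
    q = -ny / nz                        # dz/dy
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi  # angular frequencies (rad/sample)
    wy = np.fft.fftfreq(h) * 2 * np.pi
    WX, WY = np.meshgrid(wx, wy)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                   # avoid divide-by-zero at DC
    Z = (-1j * WX * np.fft.fft2(p) - 1j * WY * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                       # depth is recovered up to a constant
    return np.real(np.fft.ifft2(Z))
```

Because the solve is global, small local inconsistencies in the normal field (noise, violations of integrability) are averaged out rather than accumulated along an integration path.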

In addition to designing the hardware for this work we have published a range of papers in related areas (see Gary Atkinson's webpage). These papers cover a range of new techniques relating to image alignment, feature detection, the effects of makeup and facial hair on the accuracy of the reconstruction, and optimising the reconstructions with the addition of a profile-view camera.

2. Photoface Database

One area of particular interest was the construction of a database of raw face images. This unique database is very different from existing databases in several respects:

  • Each record consists of four images of a face, corresponding to each of the light sources of our photometric stereo rig.
  • The various volunteers were imaged on many occasions over a period of months, allowing extensive testing of our new methods as people change over time. These changes may be due to expression/mood, pose, headgear, hair (including facial hair), tanning, injury, etc.
  • The data was collected in a real working environment (at General Dynamics UK, South Wales), rather than in controlled laboratory settings. This is in line with the laboratory's aim to develop machine vision techniques for real-world applications.

This unique 3D face database is amongst the largest currently available, containing 3187 sessions of 453 subjects, captured in two recording periods of approximately six months each. The Photoface device was located in an unsupervised corridor, allowing real-world and unconstrained capture. Each session comprises four differently lit colour photographs of the subject, from which surface normal and albedo estimations can be calculated (a MATLAB implementation of photometric stereo is included). This allows for many testing scenarios and data fusion modalities.

Eleven facial landmarks have been manually located on each session for alignment purposes.

Additionally, the Photoface Query Tool is supplied (implemented in MATLAB), which allows for subsets of the database to be extracted according to selected metadata e.g. gender, facial hair, pose, expression.
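The kind of metadata filtering the Query Tool performs can be illustrated with a short sketch. The real tool is implemented in MATLAB; the field names and values below are illustrative assumptions, not the database's actual schema:

```python
# Hypothetical session records; the actual Photoface metadata schema differs.
sessions = [
    {"id": 1, "gender": "m", "facial_hair": True,  "expression": "neutral"},
    {"id": 2, "gender": "f", "facial_hair": False, "expression": "smile"},
    {"id": 3, "gender": "m", "facial_hair": False, "expression": "neutral"},
]

def query(sessions, **criteria):
    """Return the sessions whose metadata matches every given criterion."""
    return [s for s in sessions
            if all(s.get(k) == v for k, v in criteria.items())]

# e.g. extract the subset of neutral-expression male subjects
subset = query(sessions, gender="m", expression="neutral")
```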

The Photoface database is available to download for research purposes - please see our 2011 CVPR Workshop paper or e-mail Gary Atkinson for further details.

3. Face Recognition

This part of the project aimed to optimise recognition algorithms for the acquired data and considered effects such as:

  • the specific reconstruction methods that optimise recognition rates,
  • the inclusion of advanced photometric stereo methods (e.g. to account for shadows and specularities),
  • the choice of subspace mapping (PCA, LDA, etc.).
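One standard way to account for shadows and specularities, given the rig's five lights, is to discard each pixel's darkest measurement (a possible shadow) and brightest measurement (a possible specular highlight) before the Lambertian solve. This is a generic sketch of that idea, not necessarily the method used in the project:

```python
import numpy as np

def robust_photometric_stereo(images, light_dirs):
    """Photometric stereo with per-pixel rejection of the extreme intensities.

    images:     (k, h, w) images from k >= 5 lights
    light_dirs: (k, 3) unit light-direction vectors
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)
    G = np.zeros((3, h * w))
    for i in range(h * w):
        order = np.argsort(I[:, i])
        keep = order[1:-1]                  # drop darkest and brightest
        g, *_ = np.linalg.lstsq(light_dirs[keep], I[keep, i], rcond=None)
        G[:, i] = g
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The per-pixel loop is written for clarity; a production implementation would vectorise the solve over pixels sharing the same rejected-light pattern.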

These results will soon become available via an IEEE TIFS paper, which also presents results of an Elastic Graph Matching approach to the recognition problem.

Further work concentrated on novel methods of dimensionality reduction for face recognition. This involved a psychologically inspired approach that identifies the specific facial pixels and the optimal resolutions used by humans, and emulates this behaviour using machine vision (see our BMVC 2011 workshop paper).

We also discovered that surface normals are particularly well compressed using the ridgelet transform, whilst maintaining highly discriminative information. Indeed, we achieved 100% recognition with this approach on major subsets of our database, as reported in our Pattern Recognition 2012 paper.

Finally, in collaboration with the University of Bath, we designed a recognition algorithm based on the nose ridge shape.

4. Reflectance Analysis

For some applications, it may be useful to compare 3D (or 2.5D) data to 2D images. In these cases it is necessary to use the 2.5D data to render images that have matching illumination conditions to the 2D images. A video illustrating our ability to re-render images in this way can be viewed as either a greyscale AVI video or a colour AVI video.
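In the simplest Lambertian case, re-rendering under matched illumination reduces to multiplying the albedo map by the clamped dot product of each surface normal with the new light direction. A sketch under that assumption (the project's renderer may use richer reflectance models, as discussed below):

```python
import numpy as np

def relight(normals, albedo, light_dir):
    """Render a Lambertian image of the captured surface under a new light.

    normals:   (3, h, w) unit surface normals
    albedo:    (h, w) diffuse albedo map from photometric stereo
    light_dir: (3,) unit vector pointing toward the light
    """
    ndotl = np.tensordot(light_dir, normals, axes=1)  # (h, w) shading term
    return albedo * np.clip(ndotl, 0.0, None)         # clamp attached shadows
```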

These illustrate the usefulness of the reflectance data that emerges from photometric stereo - namely the surface albedo map. In our new EPSRC project, supported by the University of Central Lancashire and General Dynamics UK, we are looking to reliably capture Bidirectional Reflectance Distribution Function (BRDF) data for each face scanned by the system. This can then be used to simultaneously render synthetic face images and enhance the quality of the reconstruction. More details to follow.


Grant nos. EPSRC EP/E028659/1, EP/I003061/1


Page last updated 26 April 2013

Copyright 2015 © UWE better together