Projects in the Centre for Machine Vision

Details of current and recent projects are summarised below.

Underfloor Insulation

Autonomous Exploration, Data Acquisition, Processing and Visualisation

An Innovate UK funded project in conjunction with Q-Bot Ltd. This project aimed to advance the robotics developed and operated by Q-Bot, specifically for autonomously mapping and insulating underfloor environments in buildings.

Older buildings in particular lose a lot of heat via underfloor voids and updrafts. Q-Bot has successfully developed an integrated robotic system capable of mapping underfloor environments using a semi-automated SLAM system. Insulation is then sprayed semi-automatically onto the underside of the floorboards from below.

An example of the insulated floorboards and the resulting SLAM map is shown below:

Mapping insulation | Insulation device

The aim is to further automate the system's path-planning and to introduce computer vision methods that automatically classify regions of the environment into the key components that affect the need to insulate, or lack thereof.

Examples of such features include walls, boards, pipes, vents and electrical cables.

Automatic weed imaging and analysis

Background

Agricultural techniques for weed management in crop fields involve the wide-scale spraying of herbicides, which is economically and environmentally expensive.

An increasing global population requires an increasing crop output, which requires efficient use of agricultural land.

By controlling weed growth, a higher yield can be maintained. To reduce the amount of herbicide used, we need to identify the location and structure of weed clusters in a field.

Figure 1: Typical view of a maize field, showing two crop-rows interspersed with both grasses and broad-leaf weeds

Precision weeding using machine vision

We worked with Harper Adams University to detect the locations of out-of-row weed clusters from 2D image and GPS data.

Figure 2: 3D reconstruction resulting from a four-source photometric stereo scan of an artificially planted weed bed.

weeds

Our 3D techniques enabled us to determine the structure of the weeds from surface information and identify the locations of the crucial parts of the weed.

The result was efficient, targeted weed killing techniques such as precision spraying or heat-treatment.

CMV methods for high frame-rate 3D detection of broad-leaf and grass weeds in maize crops enable precise determination of weed patch locations. These are then analysed to find the “meristem” (main growing stem) to within 1-2 mm.

We conducted feasibility studies for the detection and eradication of broad-leaved dock (Rumex obtusifolius) in grass crops. Broad-leaved dock can survive animal digestion, is deep-rooted and can affect the yield of desired crops.

Figure 3: Initial results from a feasibility study looking at dock-leaf detection in grass crops

weeds in digital

Initial results are promising and we are interested in forming a consortium with a view to exploring this further and developing it into a fully automated robotic system.

Face recognition using photometric stereo (Photoface)

The PhotoFace project covers two EPSRC funded grants that started in April 2007.

Aims

  • Use high-speed photometric stereo to rapidly capture facial geometry.
  • Capture a new 3D face database for testing within the project and for the benefit of the worldwide face recognition research community.
  • Apply novel and existing state-of-the-art face recognition algorithms to the dataset.
  • Capture skin reflectance data in order to generate synthetic poses of any face captured by the device.

Project stages

Stages one to three were researched in partnership with the Communications and Signal Processing Group at Imperial College London, the Home Office Centre for Applied Science and Technology, and General Dynamics UK.

For stage four, we worked with the University of Central Lancashire.

1. Face reconstruction

View the face reconstruction device we constructed. As detailed in our 2008 CVIU paper, both visible and near-infrared lights are feasible solutions, with the latter giving marginally superior reconstructions.

We used five light sources and a camera operating at 210 fps. The total capture time is of the order of 30 ms, with high-speed synchronisation based on Field Programmable Gate Array (FPGA) technology.

The figure below shows an example of a raw image set recovered using the device.

 lighting variation

Application of Lambertian photometric stereo then gives the following field of surface normals:

 Normals
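The Lambertian recovery step can be sketched as follows. This is a minimal illustration with synthetic data and assumed light directions, not the rig's actual calibration or implementation: per pixel, the intensities under the known lights are solved for a scaled normal, whose magnitude gives the albedo.

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover surface normals and albedo from images under known lights.

    images: (k, h, w) array of grayscale intensities, one per light source.
    lights: (k, 3) array of unit light-direction vectors.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # (k, h*w)
    # Solve lights @ g = I in the least-squares sense; g = albedo * normal.
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = np.where(albedo > 0, g / np.maximum(albedo, 1e-12), 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat Lambertian surface tilted towards +x
n_true = np.array([0.6, 0.0, 0.8])
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
imgs = (L @ n_true).reshape(3, 1, 1) * np.ones((3, 4, 4))
n, rho = photometric_stereo(imgs, L)
```

With three or more non-coplanar lights and no shadows or specularities, the least-squares solution is exact; the real device uses extra light sources precisely to gain robustness against those effects.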

The figures below show the result of integrating the surface normals to recover a depth map. The second image shows the result of warping one of the raw images onto the surface.

 3D image

3D image with overlay
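One standard way to integrate a normal field into a depth map is the Frankot-Chellappa algorithm, which solves the least-squares integration problem in the Fourier domain. The sketch below, demonstrated on a synthetic sinusoidal surface, illustrates the idea; it is a generic method, not necessarily the exact integration scheme used in the project.

```python
import numpy as np

def integrate_normals(normals):
    """Frankot-Chellappa integration: least-squares depth from a normal field.

    normals: (3, h, w) unit surface normals (nx, ny, nz).
    Returns a depth map defined up to an additive constant.
    """
    nx, ny, nz = normals
    nz = np.where(np.abs(nz) < 1e-6, 1e-6, nz)
    p, q = -nx / nz, -ny / nz                 # surface gradients dz/dx, dz/dy
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi
    wy = np.fft.fftfreq(h) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                         # avoid division by zero at DC
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                             # fix the free constant: zero mean
    return np.real(np.fft.ifft2(Z))

# Demo: recover a sinusoidal surface from its analytic normals
h, w = 32, 32
x = np.arange(w)
z_true = np.tile(np.cos(2 * np.pi * x / w), (h, 1))
p_true = np.tile(-(2 * np.pi / w) * np.sin(2 * np.pi * x / w), (h, 1))
s = np.sqrt(p_true**2 + 1.0)
normals = np.stack([-p_true / s, np.zeros((h, w)), 1.0 / s])
z = integrate_normals(normals)
```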

2. Photoface database

One area of particular interest was the construction of a database of raw face images. This unique database is very different from existing databases in several respects:

  • Each record consists of four images of a face, corresponding to each of the light sources of our photometric stereo rig.
  • The volunteers were imaged on many occasions over a period of months, allowing extensive testing of our new methods as people change over time. These changes may be due to expression/mood, pose, hair (including facial hair), tanning or injury.
  • The data was collected from a real working environment (General Dynamics UK, South Wales), rather than in controlled laboratory settings. This is in line with the laboratory’s aim to develop machine vision techniques for real-world applications.

This unique 3D face database is amongst the largest currently available, containing 3187 sessions of 453 subjects, captured in two recording periods of approximately six months each.

The Photoface device was located in an unsupervised corridor, allowing real-world and unconstrained capture. Each session comprises four differently lit colour photographs of the subject, from which surface normal and albedo estimations can be calculated (a photometric stereo MATLAB implementation is included). This allows for many testing scenarios and data fusion modalities.

Eleven facial landmarks have been manually located on each session for alignment purposes.

Additionally, the Photoface Query Tool is supplied (implemented in MATLAB). This allows subsets of the database to be extracted according to selected metadata, such as gender, facial hair, pose and expression.
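The idea behind such a query tool can be sketched in a few lines. The real tool is implemented in MATLAB; the field names and records below are purely illustrative, not the database's actual schema.

```python
# Hypothetical sketch of metadata-based subset selection, in the spirit of the
# Photoface Query Tool. Field names and values here are invented examples.
sessions = [
    {"id": 1, "gender": "m", "facial_hair": True,  "expression": "neutral"},
    {"id": 2, "gender": "f", "facial_hair": False, "expression": "smile"},
    {"id": 3, "gender": "m", "facial_hair": False, "expression": "neutral"},
]

def query(sessions, **criteria):
    """Return the sessions whose metadata match every given criterion."""
    return [s for s in sessions
            if all(s.get(k) == v for k, v in criteria.items())]

# Extract, e.g., all neutral-expression male sessions
subset = query(sessions, gender="m", expression="neutral")
```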

The Photoface database is available to download for research purposes. Please see our 2011 CVPR Workshop paper or email Gary Atkinson at gary.atkinson@uwe.ac.uk.

3. Face recognition

This part of the project aimed to optimise recognition algorithms for the acquired data and considered effects such as the:

  • specific reconstruction methods that optimise recognition rates
  • inclusion of advanced photometric stereo methods (e.g. to account for shadow and specularity)
  • choice of subspace mapping.

Further work concentrated on novel methods of dimensionality reduction for face recognition. This took a psychologically inspired approach: isolating the specific facial pixels and optimal resolutions used by humans, and emulating this using machine vision - see our BMVC 2011 workshop paper.

We also discovered that surface normals are particularly well compressed using the ridgelet transform, whilst maintaining highly discriminating information. Indeed, we achieved 100% recognition with this approach on major subsets of our database, as reported in our Pattern Recognition 2012 paper.

Finally, in collaboration with the University of Bath, we designed a recognition algorithm based on the nose ridge shape.

4. Reflectance analysis

For some applications, it may be useful to compare 3D (or 2.5D) data to 2D images. In these cases it is necessary to use the 2.5D data to render images whose illumination conditions match those of the 2D images. A video illustrating our ability to re-render images in this way can be viewed as either a greyscale AVI video or a colour AVI video.

These illustrate the usefulness of the reflectance data that emerges from photometric stereo - namely the surface albedo map. Our next EPSRC project looked to reliably capture Bidirectional Reflectance Distribution Function (BRDF) data for each face scanned by the system. This can then be used both to render synthetic face images and to enhance the quality of the reconstruction. More details to follow.

Grant nos. EPSRC EP/E028659/1, EP/I003061/1

This article has been translated into Serbo-Croatian by Anja Skrba.

4D data capture - real-time 3D

The 4D Vision project aimed to develop 3D Photometric Stereo technology to enable the capture of 3D faces in real time.

We hoped to develop new imaging capabilities for high-speed, high-resolution capture of facial movements, combined with robust multi-resolution analysis, realistic visualisation and fast interaction.

4D capture is fundamental in a number of tasks such as detection of deceptive behaviour, and realistic modelling of facial expression for gaming characters.

This project was supported by HEFCE QR funding.

Aims

  • Build a system for generic 3D facial macro- and micro-movement capture.
  • Reconstruct moving shapes and 3D texture information in real time at unprecedented sub-pixel resolution.
  • Render moving faces realistically, with real-time visualisation and interaction.
  • Create and maintain an open-access research database showing specific interaction sequences of facial movements.
  • Design and implement novel automated feature extraction and classification methods that exploit the spatio-temporal information of moving 3D faces and allow fast detection of macro- and micro-movements.
  • Generate a new facial expression taxonomy using a robust and accurate model to map facial micro- and macro-movements to corresponding expressions.

Real-time capture and recovery

The capture and recovery of moving 3D faces in real-time involves the transfer and processing of high volumes of data. To address this, we used a special combination of hardware and software expertise. All reconstruction and analysis processing ran in parallel on a combined CPU and GPU processor.

In addition to enabling high-performance recognition capabilities on moving 3D data, this approach had the advantage of using development platforms that supported portable applications.

4D rig

Above: 4D rig in use with light sources indicated.

Facial expression modelling and classification

Facial expression recognition is commonly undertaken within the 2D imaging domain. Developments at CMV, such as Photoface and the 4D Vision project, allowed analysis of dense 3D surface information in static and dynamic ways, respectively.

Visualisation of the 4D concept

These systems employ a classification method which is both pose and illumination invariant, hence overcoming the limitations of 2D approaches.

Unlike other commonly used 3D capture techniques, photometric stereo provides dense, high-frequency spatial information. This captures fine details such as wrinkles and transient furrows.

This high-density information also enables the extraction of curvature-based features. Through statistical feature selection and SVM-based classification, we were able to classify facial expressions accurately.
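As an illustration of curvature-based features, the sketch below computes the Koenderink shape index from a depth map using numerical derivatives. This is a generic sketch, not the project's actual feature pipeline; in practice the surface would first be smoothed and the resulting feature maps fed into feature selection and the SVM.

```python
import numpy as np

def shape_index(depth):
    """Koenderink shape index, a curvature-based feature map, from a depth map.

    Values near +/-1 indicate cap/cup regions, values near 0 saddle-like
    regions (the sign depends on the depth-axis convention).
    """
    zy, zx = np.gradient(depth)          # first derivatives (rows are y)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    g = 1.0 + zx**2 + zy**2
    # Mean (H) and Gaussian (K) curvature of the graph z(x, y)
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * g**1.5)
    K = (zxx * zyy - zxy**2) / g**2
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))
    k1, k2 = H + disc, H - disc          # principal curvatures, k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# Demo: a spherical dome should give |shape index| close to 1 at its centre
y, x = np.mgrid[-10:11, -10:11].astype(float)
dome = np.sqrt(400.0 - x**2 - y**2)
s = shape_index(dome)
```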

Read New Scientist's article on our 4D capture project.

Application of photometric stereo in dermatology

We adopted a computational approach to redesign a device that inspects the skin for dermatological uses.

Our device is capable of capturing images of the skin, generating 3D views for on-screen display, and analysing the images to detect the presence of skin cancer - specifically malignant melanoma.

The device should be able to help experts differentiate between malignant melanoma and benign skin lesions.

We designed and built the device at the CMV. The housing contains a camera and six LED light sources. A separate image is captured with each of the LEDs independently illuminated.

The proposed method for classifying the skin lesion (malignant or benign) is summarised in a flow diagram as follows.

flow diagram

Photometric stereo

The photometric stereo stage of the algorithm is performed using the complete colour matrix equation:

equation

The tool is used to inspect the skin via local or remote terminals. It produces images through a combination of photometric stereo, bump map generation and perspective projection.

An example of the captured image set is shown below. Application of photometric stereo methods yields the results shown to the right of the figure, i.e. the field of surface normals, the albedo map and the bump map.

lesion methodology

Feature extraction

Characterizing the skin by using images obtained with the device is an indispensable intermediate step in this project.

Good features extracted from geometric or colour information should be invariant or almost invariant to object position and pose, lighting condition and camera setup.

Features might include asymmetry, border, colour variation and diameter (ABCD).

As an example of feature extraction consider the two figures below. To the left is a picture of a real skin sample. The right shows a skin pattern isolated using image processing techniques.

skin1 | Skin map

Classifier building

The classifier combines several of the heuristic rules used by dermatologists to identify malignant melanoma.

The ABCD rules may be simple and inaccurate if treated separately. By combining them using techniques like boosting, however, the final classifier can be highly accurate with good generalisation capability.
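The boosting idea can be sketched with a toy example: each ABCD cue becomes a one-threshold weak rule, and a minimal AdaBoost loop weights and combines them into a strong classifier. The feature values and thresholds below are invented for illustration, not clinical values.

```python
import numpy as np

# Toy ABCD feature vectors: [asymmetry, border irregularity, colour variation,
# diameter in mm]; labels +1 = malignant, -1 = benign. Illustrative data only.
X = np.array([
    [0.9, 0.8, 0.7, 8.0],
    [0.8, 0.9, 0.6, 7.5],
    [0.2, 0.1, 0.2, 3.0],
    [0.1, 0.3, 0.1, 2.5],
])
y = np.array([1, 1, -1, -1])

# Each weak rule: predict malignant (+1) if feature f exceeds threshold t.
rules = [(0, 0.5), (1, 0.5), (2, 0.4), (3, 6.0)]

def weak_predict(rule, X):
    f, t = rule
    return np.where(X[:, f] > t, 1, -1)

# Minimal AdaBoost over the fixed rule set
w = np.full(len(y), 1.0 / len(y))
alphas = []
for rule in rules:
    pred = weak_predict(rule, X)
    err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)   # rule weight: accurate rules count more
    alphas.append(alpha)
    w *= np.exp(-alpha * y * pred)          # re-weight towards misclassified samples
    w /= w.sum()

def predict(X):
    score = sum(a * weak_predict(r, X) for a, r in zip(alphas, rules))
    return np.sign(score)
```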

The image below illustrates an effort to isolate suspicious regions of a lesion. In future, these regions will be detected and analysed robustly, using our classifier algorithm.

Suspect areas

Novel non-invasive assessment of respiratory function (NORM)

NORM was an NHS National Institute of Health Research project, funded under the Invention for Innovation (i4i) scheme, and began in June 2009. It was a short term (one year) feasibility study.

Background

Respiratory function testing is uncommon among patients across all ages, as no appropriate assessment tool exists.

For children with respiratory or fatigue-related disorders (eg muscular dystrophies), effort-dependent procedures requiring coordination and cooperation can be challenging, preventing accurate monitoring of the disease process.

Bedside methods often rely on methodologies prone to error. There is currently no accurate, non-invasive system requiring minimal cooperation that can monitor or assess respiratory function across all ages. This impedes studies of directly comparable (as opposed to directly related) data from childhood to adulthood.

Aims

We are developing a novel, non-invasive method for non-contact assessment of respiratory muscle function, through monitoring changes in the three-dimensional surface details of the human torso in real time.

An optical system captures and tracks all motion details of the chest and abdomen walls, recording 3D shape and dimensional variation of the body dynamically during breathing.

A model is being developed to correlate measurement data with respiratory muscle function. The system is initially designed for use on adult patients and could be adapted for use on children (after appropriate follow-up research and testing).

The system could be used to monitor or diagnose neurological, muscle motion and respiratory system disorders.

Current Development

Lighting characterization

Angular distribution

System setup

NORM photo

Data acquisition software

NORM interface

Potential impact

For the NHS, the potential benefits are:

  1. Cheaper: no consumables, dedicated lab space or on-site technical support.
  2. Allows specialist treatment assessment/monitoring in the community.
  3. Simple to operate, enabling telemedicine support by a specialist centre.
  4. Continuous monitoring allows more detailed assessment.
  5. Local use reduces hospital appointments, saving time and costly investigations, eg polysomnography.
  6. Better assessment of respiratory deterioration allows more timely preventive/rescue therapy, reducing long-term NHS demands.

The impact on patients may be:

  1. Reduced stress.
  2. Local use – no need to attend specialist centre.
  3. Improves diagnostic timeliness/accuracy, impacting on health/recovery; no mask, mouthpiece, or volitional component, so suitable for younger patients.
  4. Non-invasive, suitable for monitoring in critical care.

Contact

For further information, please contact Lyndon Smith via email at lyndon.smith@uwe.ac.uk or on +44(0)117 328 2009.

Stealthy object detection and recognition

We developed a portable device to automatically detect and recognise potential threats to troops in war zones.

Our idea won funding to be developed into a prototype, in conjunction with partners SEA (Group) Ltd. The idea was put forward by our team of Machine Vision experts, led by Prof Melvyn Smith. It could help soldiers detect camouflaged objects or people and could enhance and recognise the shapes of 3D objects such as guns or explosives hidden under clothing.

The system, based on our expertise in photometric stereo techniques, reveals and enhances subtle shapes and surface details that may not be apparent or are deliberately concealed. Photometric stereo produces a composite image using light from at least three sources linked to a computer to derive detailed information about an object's surface.

The Technical Director of SEA's Defence Division, Peter Cooper, said "Different configurations of the portable device could be used in different task scenarios, for example a compact wearable version could be developed for work at close range, or a portable system for operation by several personnel over greater distances in the field. We look forward to working with UWE on this challenging project."

The MOD received 467 entries for its Competition of Ideas, over half of which came from universities and small or medium enterprises. Sixty-six of the proposals - about one in seven - were successful and of these 22 contracts were awarded to universities. In all, these projects represent an investment of about £11 million into new ideas to enhance the UK's defence technology strategy.

 Stealthy plane 1 | Stealthy plane 2 | Stealthy plane 3

As a demonstration of the technique, consider the first image above. This shows a model aeroplane placed on a planar surface alongside 2D images of the plane. When viewed from above (second image), it is difficult to distinguish the real camouflaged object from the background. Photometric stereo, however, reveals the 3D structure of the scene, thus highlighting the real object.

As a second example, consider the images below. The left-hand image shows a normal photograph of several camouflaged weapons. Using our method, the shape of the items can be enhanced to clearly reveal the location and class of the concealed items (right).

weapons 1 | weapons 2

Quality control of specular ceramic materials

Industry lacks a method for the rapid, automated inspection of complex, glossy goods, especially on-line, eg moving at high speed. These products still need to be inspected manually, which is labour-intensive, monotonous and expensive.

Most existing on-line inspection systems for specular surfaces only deliver qualitative results such as size, orientation or basic geometrical measures. Other powerful systems can only be applied to smooth, non-complex surfaces.

This PhD project aimed to develop a method and device to rapidly reverse engineer specular surfaces while they are on-line. The outcome is a device to generate a full representation of the surface geometry.

Additionally, an earlier implementation will for the first time be able to qualitatively flag the presence of defects, even for complex surfaces with high normal angles.

While specular ceramic tiles are employed as the example application, the results of our work are directly applicable to all surfaces showing specular characteristics, such as metals, plastics and polished or lacquered materials.

We addressed the problem by examining what we have coined the specular signature. This is the reflection of a line laser off the surface in question, made visible on a translucent screen. It contains all the information of the surface normals and magnifies the tiniest defects. Unfortunately, any spatial information is lost. Standard laser triangulation, on the other hand, is highly inaccurate for specular surfaces but preserves spatial information. Our unique device fuses these two independent measuring techniques and, for the first time, will allow for the fast, objective and repeatable quality control of complete batches of specular objects.

Specular signatures

Figure one: Example of 10 superimposed specular signatures with 0.75 mm offset. A change in the surface profile and a medium-sized defect on the centre right are visible.

The specular signature is captured and dynamically thresholded. A specially developed, novel, real-time, multi-scale line-tracing algorithm is then used to extract as much information as possible from it. Afterwards, suspicious regions that point towards abnormalities can be identified.

Simultaneously, standard laser triangulation with centre-of-gravity peak detection is applied and an estimated signature is calculated. We are currently developing ways to effectively fuse the two signatures together.
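Centre-of-gravity peak detection itself is straightforward: the sub-pixel position of the laser stripe in an intensity profile is the intensity-weighted centroid around the brightest pixel. A minimal sketch follows, demonstrated on a synthetic Gaussian stripe; the window size is an assumed parameter, not the project's tuned value.

```python
import numpy as np

def cog_peak(column, window=3):
    """Sub-pixel laser-stripe peak via intensity centre of gravity.

    column: 1D intensity profile across the laser stripe.
    Returns the peak position in pixel units, as a float.
    """
    i = int(np.argmax(column))                    # coarse integer peak
    lo, hi = max(i - window, 0), min(i + window + 1, len(column))
    idx = np.arange(lo, hi)
    vals = column[lo:hi].astype(float)
    return float(np.sum(idx * vals) / np.sum(vals))  # weighted centroid

# Synthetic Gaussian stripe centred at 10.3 pixels
x = np.arange(32)
profile = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
peak = cog_peak(profile)
```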

flowchart

Figure two: Operation chart: Specular and estimated signature are merged to give a full surface representation.

device

Figure three: A picture of the device. Two cameras watch the line laser's profile and the signature while the probe is moving.

Using 3D facial asymmetry in better diagnosis and treatment of plagiocephaly

A Medical Research Council (MRC) project to research skull abnormalities in children, in collaboration with North Bristol NHS Trust and the London Orthotic Consultancy.

We used innovative 3D imaging techniques to accurately measure the faces and heads of groups of children to look for links between abnormality in head shape and subtle signatures present in facial features.

Our principal focus is a type of cranial disorder known as positional plagiocephaly, in which the two sides of the skull develop inconsistently, so that the head has an asymmetric, flattened or otherwise abnormal shape.

Previous research has demonstrated a possible link between deformational plagiocephaly and facial asymmetry. The number of babies diagnosed with plagiocephaly has recently risen sharply from 1 in 300 to about 1 in 60.

When positional plagiocephaly is developing, the anterior portion of one side of the skull and the posterior portion of the opposite side do not grow equally as counterparts. This makes the structure of the skull asymmetric and consequently distorts the shape of the face.

The project has used the Extended Gaussian Image (EGI) representation of shape in order to classify both the extent of cranial deformation and the asymmetry present in a face.

Based on 70 detailed scans, early results show definite signs of correlation between the two metrics and have been used to quantify the improvements made by current treatment techniques.

This project built on the 3D face data capture work carried out as part of the EPSRC PhotoFace project. The hope is that the work may eventually lead to better diagnosis and treatment of plagiocephaly.

It is expected that developing a practical method for assessing 3D face shape and symmetry will also have wider applications, for example in assessing stroke patients or evaluating the surgical outcomes of various facial reconstructive procedures, including cleft palate.

MRC grant no. 85543
