Digital Futures 2011: Symposium Review

Date: Tuesday 1 November 2011
Location: Institute of Physics, 76 Portland Place, London W1B 1NT
Organised by: The Imaging Science Group of the Royal Photographic Society, UK, with the cooperation of the Society of Imaging Science & Technology, USA
Chairs: Sophie Triantaphillidou, University of Westminster
Alan Hodgson, 3M Security Printing and Systems Ltd

Overview of the programme

N.B. The full programme for this event can be seen on the DF2011 page.

This meeting attracted 42 delegates from a wide range of backgrounds: students and emeritus professors, industry and academia. Occupations ranged from surgeons and colour and imaging scientists through to camera systems manufacturers.

The programme consisted of 11 presented papers plus 3 coffee breaks and lunch to encourage effective networking. Sophie Triantaphillidou chaired the morning session and Alan Hodgson the afternoon session.

A number of presentations were illustrated with projected images viewed through anaglyph (red/cyan) glasses.

A short introduction to the Imaging Science Group was given by the Group Chair, Tony Kaye. The summaries below are not necessarily in the order in which the papers were presented at the conference. The preliminary programme can be found here. The plan is to add further content as it is made available by the presenters. The event was also video recorded by River Valley, and the recordings should be made available soon.

Presentations from the University of St. Andrews, UK

We had 2 very interesting papers from this university.

Prof. Julie Harris presented a paper entitled “Why do we have difficulty with depth perception in complex environments?”. The presentation started with a summary of binocular disparity, moving on to illustrate with images how perceived depth can vary with image content. A description of the visual cortex of the brain followed, leading to the conclusion that this area of the brain appears to perform an image-processing cross-correlation between the left and right eyes to extract depth cues. This cross-correlation appears to be carried out at different spatial scales, leading to the observation that large depth disparities in fine-scale regions will not be highly visible – the neurones processing fine scales only handle small depth disparities.
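
For readers unfamiliar with the idea, the sketch below shows block-matching disparity estimation by normalised cross-correlation between a rectified left/right image pair at a single spatial scale. It is purely illustrative and not the model presented in the talk; the function name, patch size and disparity range are assumptions.

    import numpy as np

    def disparity_by_cross_correlation(left, right, patch=9, max_disp=32):
        """Estimate horizontal disparity by normalised cross-correlation.

        left, right: rectified greyscale images of identical shape
        (illustrative inputs). Returns an integer disparity map.
        """
        half = patch // 2
        h, w = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half, w - half):
                # Reference patch in the left image, zero-mean for correlation.
                ref = left[y - half:y + half + 1, x - half:x + half + 1]
                ref = ref - ref.mean()
                best_score, best_d = -np.inf, 0
                for d in range(0, min(max_disp, x - half) + 1):
                    cand = right[y - half:y + half + 1,
                                 x - d - half:x - d + half + 1]
                    cand = cand - cand.mean()
                    denom = np.sqrt((ref ** 2).sum() * (cand ** 2).sum())
                    score = (ref * cand).sum() / denom if denom > 0 else 0.0
                    if score > best_score:
                        best_score, best_d = score, d
                disp[y, x] = best_d
        return disp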

The second presentation, from Dhanraj Vishwanath, was entitled “Explaining the phenomenal experience of stereopsis”. Dhanraj defined stereopsis as the perceptual by-product of binocular depth perception. One theory is that it arises from 2 potential routes: either the simultaneous apprehension of 2 different aspects of an object (binocular vision) or the sequential apprehension of 2 or more aspects of an object (motion parallax). However, this work seems to illustrate that stereopsis has more to do with the perceived scale of the objects.

Further work on the perception of depth came from Shih-Chueh Kao, a PhD student in the School of Design at the University of Leeds, UK, with a paper entitled “The adjustments of colour saturation for stereoscopic 3D perception”. His early results appear to show that colour can give a depth cue in 3D images. It was noted that a gradient in saturation (e.g. red to grey) is particularly effective.

Camera systems

We had 2 interesting papers on this topic.

The first was given by David Christian from the University of Glamorgan, UK. Glamorgan have constructed a 3D camera known as Mavis II. The challenge is processing the stereo images to produce a 3D surface map. David’s work showed the advantages of using the dedicated, parallel processing capabilities of computer graphics cards for this task; compared with a conventional PC, this can generate significant processing speed enhancements. For a PDF of this presentation, see Optimisation of Stereo Image Processing by David Christian.
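
To suggest how such a search maps onto parallel hardware, here is a minimal sketch that expresses a whole-image disparity search as large array operations. It is not David's pipeline; the function name and parameters are assumptions, and swapping NumPy for a GPU array library such as CuPy would run the same operations on a graphics card.

    import numpy as np  # replacing this with `import cupy as np` runs on a GPU

    def disparity_winner_takes_all(left, right, max_disp=32):
        """Vectorised per-pixel disparity search over a whole image at once.

        Expressing the search as array operations is what lets graphics
        hardware evaluate every pixel and disparity hypothesis in parallel.
        Inputs are a rectified greyscale stereo pair (illustrative only).
        """
        left = left.astype(np.float32)
        right = right.astype(np.float32)
        # Cost volume: absolute difference for each disparity hypothesis.
        # np.roll wraps around at the image border, so the first max_disp
        # columns of the result are unreliable in this simple sketch.
        costs = np.stack([np.abs(left - np.roll(right, d, axis=1))
                          for d in range(max_disp)])
        return costs.argmin(axis=0)  # winner-takes-all disparity per pixel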

The second paper came from Graham Kirsch from Aptina UK entitled “Frame rate invariant feature extraction for 3D ranging”. Graham described the use of interest points, accurately located, reproducible features extracted from various images of an object. The regions surrounding these points are described as descriptors. The recognition of interest points and descriptors are costly to calculate in terms of processing time so a hardware solution is appropriate. Aptina are currently developing a hardware solution for this which is around 1 year off.
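
As a software illustration of interest points and descriptors (not Aptina's hardware pipeline), the snippet below uses OpenCV's ORB detector to extract features from two views and match them; the matched points could then be triangulated for ranging. File names and parameters are illustrative.

    import cv2

    # Two views of the same object (hypothetical file names).
    img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

    # Detect interest points and compute descriptors of their surrounding regions.
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Match descriptors between the two views.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(matches)} candidate correspondences")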

Medical imaging applications

There followed 3 papers on medical applications of stereoscopic imaging.

Hoosain Ebrahim of the University of Limpopo, South Africa gave us “A concise chronicled overview of stereoscopic imaging and its contribution to medical and forensic science”. This usefully gave us a broad overview of stereo imaging in general and its use in medicine in particular.

Justus Ilgner of RWTH Aachen University Hospital, Germany talked on “Practical aspects on stereoscopic live video imaging of operative procedures”. It was fascinating to see the practical use of this technology in the operating theatre.

Finally, Ralph Smith from the Minimal Access Therapy Training Unit (MATTU) of the University of Surrey, UK presented “Stereoscopic surgical display systems enhance performance of basic visuomotor manoeuvres during the acquisition of minimally invasive surgical skills”, showing trials of 3D and 2D imaging in keyhole surgery. Now that high-resolution screens have been adopted in operating theatres, MATTU is carrying out a study to see whether the increased depth perception from 3D images can improve surgeons’ “performance”. So far the results have been promising.

Stereoscopic imaging technology

After his article in Physics World on the same topic, I was really looking forward to hearing Jonathan Mather of Sharp Laboratories of Europe, UK talking on “Designing the best 3D display”. Jonathan works on 3D displays, and he presented a spectrum of potential 3D display devices at different levels of technology.

Alan Cooper of the Stereoscopic Society, UK talked on the “Evaluation of 3D”. Some of the software used to generate the images was demonstrated together with some “golden rules” for image generation. All illustrated with some excellent imagery!

Finally, Lindsay MacDonald, now at University College London, UK, demonstrated “Photometric stereo – Depth from a single camera position”. This compared and contrasted different methods of generating 3D imagery of cultural heritage objects.
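
As background to the technique in the title, here is a minimal sketch of classical Lambertian photometric stereo: recovering per-pixel surface normals from several images taken from a single camera position under known light directions. It is the textbook formulation, not Lindsay's own method; the function name and array shapes are illustrative.

    import numpy as np

    def photometric_stereo_normals(images, light_dirs):
        """Recover surface normals from one viewpoint under varying lighting.

        images: array of shape (n, H, W), one image per light direction.
        light_dirs: array of shape (n, 3), unit light direction vectors.
        Assumes a Lambertian surface (illustrative inputs only).
        """
        n, h, w = images.shape
        I = images.reshape(n, -1)                            # (n, H*W) intensities
        # Solve light_dirs @ G = I per pixel in the least-squares sense.
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
        albedo = np.linalg.norm(G, axis=0)                   # per-pixel albedo
        normals = G / np.maximum(albedo, 1e-8)               # unit surface normals
        return normals.reshape(3, h, w), albedo.reshape(h, w)

The recovered normal field can then be integrated to give a relative depth map, which is the sense in which the method delivers “depth from a single camera position”.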

All in all, a fascinating and worthwhile day!

Event report by Alan Hodgson