A brief introduction to extracting information from images
1. A brief introduction to extracting information from images. Jonathon Hare, University of Southampton.
2. Contents: What can images tell us? How are images represented in digital computers? How do we extract information from images? Examples of some different extraction techniques; analogies with text; free software!
13. Feature Extraction f(x). Feature extraction is the process of extracting “descriptors” from an image. Descriptors describe some aspect of the image content. Typically, a descriptor is a numerical vector called a “feature vector”; however, other forms of descriptor are possible.
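As a minimal sketch of the idea (plain Python, with an image represented as a nested list of (r, g, b) tuples — an assumption made purely for brevity), a feature extractor is just a function f(x) mapping an image to a vector:

```python
def extract_feature(image):
    """Map an image (rows of (r, g, b) pixels) to a simple feature
    vector: the mean value of each colour channel."""
    n = 0
    sums = [0.0, 0.0, 0.0]
    for row in image:
        for pixel in row:
            for c in range(3):
                sums[c] += pixel[c]
            n += 1
    return [s / n for s in sums]

# A 2x2 image: two pure-red pixels and two pure-blue pixels.
img = [[(255, 0, 0), (0, 0, 255)],
       [(255, 0, 0), (0, 0, 255)]]
print(extract_feature(img))  # [127.5, 0.0, 127.5]
```

Real descriptors (histograms, SIFT, etc.) are richer, but they all share this shape: image in, vector out.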
14. Feature morphology. Higher-level features are directly interpretable by humans (e.g. the number of faces in the image) and are either hand-crafted or trained with machine learning techniques. Lower-level features are much more abstract; they convey a notion of the image content, e.g. the colour distribution of the image.
16. High-level features: face detection. The detection of faces in an image is a very useful feature for inferring information about an image, and face detection is the first step of face recognition. The most popular face detection algorithm is the “Viola-Jones” detector: it is conceptually simple and uses machine learning, so training is slow, but detection is very fast.
17. Viola-Jones face detection. A bank of filters: all possible position, scale and type parameters are considered (a very large number of features). For each feature a simple (weak) binary classifier (a decision stump) is created, and AdaBoost is used to select the informative features. P. Viola, M. Jones, Robust Real-Time Face Detection, IJCV, Vol. 57(2), 2004 (first version appeared at CVPR 2001).
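The stump-selection step can be sketched in a few lines of plain Python. This is only the core of one boosting round over scalar feature responses; the integral-image Haar filters, weight updates and cascade of the real Viola-Jones detector are omitted:

```python
def stump_predict(x, threshold, polarity):
    """Weak binary classifier (a decision stump):
    +1 if polarity * (x - threshold) > 0, else -1."""
    return 1 if polarity * (x - threshold) > 0 else -1

def best_stump(responses, labels, weights):
    """One AdaBoost-style round: pick the (threshold, polarity)
    stump with the lowest weighted error on the feature responses."""
    best, best_err = None, float("inf")
    for threshold in responses:
        for polarity in (1, -1):
            err = sum(w for x, y, w in zip(responses, labels, weights)
                      if stump_predict(x, threshold, polarity) != y)
            if err < best_err:
                best, best_err = (threshold, polarity), err
    return best, best_err

# Four training examples: low responses are negatives, high are positives.
stump, err = best_stump([1, 2, 8, 9], [-1, -1, 1, 1], [0.25] * 4)
print(stump, err)  # (2, 1) 0
```

AdaBoost repeats this selection many times, re-weighting the examples each round, so that a handful of informative filters out of the huge bank carry the detector.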
19. High-level features: composition. Photographers use the “rule-of-thirds” to improve the composition of their photos. The basic idea is to place main subjects at roughly one-third of the horizontal or vertical dimension of the photograph.
20. High-level features: composition. It is possible to design features that look for rule-of-thirds composition: compute an image saliency map, segment the image, and for each segment combine its distance to the closest power point with the segment's area multiplied by its saliency. Che-Hua Yeh, Yuan-Chen Ho, Brian A. Barsky, and Ming Ouhyoung. "Personalized Photograph Ranking and Selection System". In ACM Multimedia 2010, pages 211–220, October 2010.
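The geometric part of such a feature is easy to sketch. The snippet below computes the four rule-of-thirds power points and an illustrative per-segment score; the exact weighting function in the cited paper differs, so the combination used here is an assumption for illustration only:

```python
import math

def power_points(width, height):
    """The four rule-of-thirds 'power points' of a width x height frame:
    the intersections of the one-third and two-thirds lines."""
    xs, ys = (width / 3, 2 * width / 3), (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

def thirds_score(centroid, area, saliency, width, height):
    """Illustrative composition score for one segment: weight the
    segment's (area * saliency) by its distance to the nearest power
    point, so salient segments sitting on a power point score highest."""
    d = min(math.dist(centroid, p) for p in power_points(width, height))
    return (area * saliency) / (1.0 + d)

# A salient segment centred exactly on a power point of a 300x300 image.
print(thirds_score((100, 100), area=10, saliency=1.0,
                   width=300, height=300))  # 10.0
```

Summing such scores over all segments yields a single number indicating how strongly the photo follows the rule of thirds.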
25. Low-level features: global. Global features describe the content of an entire image. One of the simplest global features is the “global RGB colour histogram”: quantise each pixel into a discrete number of colours and then build a histogram.
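A minimal implementation of the global RGB histogram, again assuming the toy nested-list image representation used here for brevity:

```python
def rgb_histogram(image, bins_per_channel=4):
    """Global RGB colour histogram: quantise each pixel's (r, g, b)
    values into bins_per_channel levels per channel and count how many
    pixels fall into each of the bins_per_channel**3 colour bins."""
    b = bins_per_channel
    hist = [0] * (b ** 3)
    for row in image:
        for r, g, bl in row:
            ri = min(r * b // 256, b - 1)   # quantise red channel
            gi = min(g * b // 256, b - 1)   # quantise green channel
            bi = min(bl * b // 256, b - 1)  # quantise blue channel
            hist[(ri * b + gi) * b + bi] += 1
    return hist

# A 2x2 image that is entirely red: all pixels land in one bin.
img = [[(255, 0, 0), (255, 0, 0)],
       [(255, 0, 0), (255, 0, 0)]]
print(rgb_histogram(img, bins_per_channel=2))  # [0, 0, 0, 0, 4, 0, 0, 0]
```

Two images can then be compared simply by a distance between their (normalised) histograms.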
26. Low-level features: local. Global features are useful for some tasks, but in many cases are not powerful enough. Local features attempt to overcome this by breaking the image into smaller parts from which to extract features. There are three primary techniques for splitting up the image: segmentation; salient regions & interest points; grids & blocks.
27. Salient interest regions. Salient interest regions and their associated features are currently the most popular way of describing image content. Extracting image features using interest regions is a two-part process: find the regions, then extract a feature describing each region's properties. Typically, the resultant image feature will have a variable length, dependent on the number of regions.
28. Salient interest region location. Important regions exhibit repeatability and saliency, and corners and blobs have these qualities. They are detectable using various techniques: difference of Gaussian (blobs), the Harris corner detector (corners) and MSER (blobs).
29. Salient interest region descriptors. Good region descriptors exhibit resilience to image transforms and compactness, and emphasise different image characteristics: pixel intensities, colour, texture, edges, etc. Common descriptors include SIFT (a histogram of gradient orientations) and shape context (a histogram of edge locations).
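The building block of a SIFT descriptor — a gradient-orientation histogram over a patch — can be sketched in plain Python. Real SIFT additionally divides the patch into 4x4 spatial subregions and applies Gaussian weighting and normalisation, all omitted here:

```python
import math

def orientation_histogram(patch, n_bins=8):
    """SIFT-style building block: a histogram of gradient orientations
    over a grayscale patch (list of rows of floats), with each pixel's
    vote weighted by its gradient magnitude."""
    hist = [0.0] * n_bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            dy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            mag = math.hypot(dx, dy)
            angle = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += mag
    return hist

# A patch containing a vertical edge: the gradient points horizontally,
# so all the mass lands in the first (0-radian) orientation bin.
patch = [[0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0]]
print(orientation_histogram(patch))  # [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

Because the histogram summarises orientations rather than raw pixels, it is compact and tolerant of small shifts and lighting changes — exactly the resilience a good descriptor needs.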
32. Bags of Visual Words. In the computer vision community over recent years it has become popular to model the content of an image in a similar way to a “bag-of-terms” in textual document analysis.
33. BoVW using local features. Features are localised by a robust region detector and described by a local descriptor such as SIFT. A vocabulary of exemplar feature-vectors is learnt, traditionally through k-means clustering. Local descriptors can then be quantised to discrete visual terms by finding the closest exemplar in the vocabulary.
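The quantisation step can be sketched directly. Here the vocabulary is given as a list of exemplar vectors (in practice it would be learnt by k-means over a large training set of descriptors):

```python
def quantise(descriptors, vocabulary):
    """Map each local descriptor to its nearest exemplar (its 'visual
    word') in the vocabulary, and return the bag-of-visual-words
    histogram counting how often each word occurs in the image."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    bag = [0] * len(vocabulary)
    for d in descriptors:
        nearest = min(range(len(vocabulary)),
                      key=lambda i: sqdist(d, vocabulary[i]))
        bag[nearest] += 1
    return bag

# A 2-word vocabulary and three local descriptors from one image.
vocab = [[0, 0], [10, 10]]
descriptors = [[1, 1], [9, 9], [0, 2]]
print(quantise(descriptors, vocab))  # [2, 1]
```

The resulting fixed-length bag can be indexed and retrieved exactly like a term-frequency vector in text retrieval, which is what makes the analogy with text so productive.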
34. Applications of BoVW. BoVW models have many applications: auto-annotation and object recognition; concept classification; large-scale indexing.
35. Introducing OpenIMAJ & ImageTerrier: open-source tools for image analysis and indexing.
36. http://www.openimaj.org — Open-source (BSD licence) libraries and tools for multimedia (image, video, sound) analysis and information extraction. Implemented in Java and usable from any JVM language. Includes implementations of all the techniques mentioned in this tutorial. Extraction scales using Hadoop with the included tools.
37. http://www.imageterrier.org — An extension to the Terrier retrieval system that allows indexing of images: collections and documents that read data produced by image feature extractors; new indexers and supporting classes for building compressed augmented inverted indices of visual-term data; new distance measures implemented as WeightingModels; geometric re-ranking implemented as DocumentScoreModifiers; and command-line tools for indexing and searching. Freely available under the Mozilla Licence.
Editor's notes
Reuters got in some trouble because of image manipulation recently, and this resulted in a backlash in the press. There is a blog “photoshop disasters” with many examples of tampering; here are just a few...
August 2006: this is a case of image tampering in a photograph published and later withdrawn by Reuters. The photograph, by Adnan Hajj, a Lebanese photographer, showed thick black smoke rising above buildings in Beirut after an Israeli air raid. The tampering makes the scene look worse than it perhaps was, though the use of the clone tool is quite evident. Reuters initially published the photograph on their web site and then withdrew it when it became evident that the original image had been manipulated to show more and darker smoke. "Hajj has denied deliberately attempting to manipulate the image, saying that he was trying to remove dust marks and that he made mistakes due to the bad lighting conditions he was working under", said Moira Whittle, the head of public relations for Reuters. "This represents a serious breach of Reuters' standards and we shall not be accepting or using pictures taken by him." A second photograph by Hajj was also determined to have been doctored.
Circa 1864: the print on the left purports to depict General Ulysses S. Grant in front of his troops at City Point, Virginia, during the American Civil War. Unfortunately, this is an example of an early forgery. Some very nice detective work by researchers at the Library of Congress revealed that the print is a composite of three separate prints: (1) the head is taken from a portrait of Grant; (2) the horse and body are those of Major General Alexander M. McCook; and (3) the background is of Confederate prisoners captured at the battle of Fisher's Hill, VA.
So, images can be tampered with, but is there any way to detect this automatically? There is a whole research field based around the idea of forensic techniques; here are two examples of the kind of automatic forensic processing that is possible. Cloning parts of images to hide something is common: in this case the original picture showed George Bush at a lectern, and automatic analysis is able to detect the manipulations.