1. Defocus Magnification Soonmin Bae & Frédo Durand, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology. Proceedings of EUROGRAPHICS 2007. Presented by Debaleena Chattopadhyay
2. Presentation Outline What? - The problem definition. Why? - The novelty of the paper. How? - The solution to the problem. Results - The outcome. Discussion - Further scope for enhancement.
4. Defocus What is defocus? - The result of a lens deviating from accurate focus. Depth of field - When a certain object is brought into focus, objects away from it appear blurred, and the amount of blur increases with their relative distance. Defocus and geometry - This suggests that defocus and scene geometry (the 3D layout of the scene) are related, and that it is therefore possible to estimate the geometry of a scene by measuring the amount of defocus in an image. Defocus magnification - Magnify the defocus effects already present in an image, i.e. blur the blurry regions further while keeping sharp regions sharp.
5. SLR vs. Point-and-Shoot SLR cameras can produce a shallow depth of field that keeps the main subject sharp but blurs the background. Sharp foreground with blurred background. Photo Credit: Bae & Durand
6. A Point-and-Shoot Camera Small point-and-shoot cameras cannot produce enough defocus because of the small diameter of their lenses and their small sensors. The background is not blurred enough. Photo Credit: Bae & Durand
7. Defocus and Aperture Size A bigger aperture produces more defocus. The f-number N gives the aperture diameter A as a fraction of the focal length f (A = f/N). Example: f = 100 mm, f/2 gives A = 50 mm, f/4 gives A = 25 mm. [Figure: rays through the lens onto the sensor at the focal plane, f/2 vs. f/4] Slide Credit: Bae & Durand
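The f-number relation on this slide is simple to check directly: the aperture diameter is the focal length divided by the f-number, so stopping down from f/2 to f/4 halves the aperture. A minimal sketch (the function name is my own):

```python
def aperture_diameter(focal_length_mm: float, f_number: float) -> float:
    """Aperture diameter A = f / N: a smaller f-number means a larger
    aperture and therefore more defocus."""
    return focal_length_mm / f_number

# A 100 mm lens at f/2 has twice the aperture diameter it has at f/4.
print(aperture_diameter(100, 2))  # 50.0 (mm)
print(aperture_diameter(100, 4))  # 25.0 (mm)
```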
9. Defocus size is roughly proportional to the sensor size. Large sensor (22.2 x 14.8 mm), f/2.8: blurred background. Small sensor (7.18 x 5.32 mm), f/2.8: the background remains sharp. Slide Credit: Bae & Durand
10. The Problem Definition To present an image-processing technique that magnifies existing defocus given a single photo (i.e. to simulate a shallow depth of field). Input Image, Output Image
23. The filter responses are then tested for reliability using certain thresholds.
24. The scales for edge detection, as defined in the paper, are: σ1 = {64, 32, 16, 8, 4, 2, 1, 0.5} and σ2 = {32, 16, 8, 4, 2, 1, 0.5} pixels.
25. The Solution Blurred Edge Detection Multi-scale edge detector working formulae: the Gaussian derivative filters. The first-order Gaussian derivative filter, with σ1 varying over the scales defined above.
26. The Solution Blurred Edge Detection Multi-scale edge detector working formulae: the Gaussian derivative filters. The second-order Gaussian derivative filter, with σ2 varying over the scales defined above.
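The two filters above can be sketched in plain Python: sample the first- and second-order derivatives of a Gaussian at a chosen σ, then convolve with the image signal; running the same filters over every σ in the scale lists of slide 24 gives the multi-scale detector. This is a minimal 1-D sketch (helper names are my own, not the paper's):

```python
import math

def gaussian_deriv_kernel(sigma, order, radius=None):
    """Sampled 1-D derivative-of-Gaussian kernel (order 1 or 2)."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # cover ~3 standard deviations
    kernel = []
    for x in range(-radius, radius + 1):
        g = math.exp(-x * x / (2.0 * sigma * sigma))
        if order == 1:
            kernel.append(-x / sigma**2 * g)                       # G'(x)
        else:
            kernel.append((x * x / sigma**4 - 1.0 / sigma**2) * g)  # G''(x)
    return kernel

def convolve(signal, kernel):
    """Valid-mode 1-D convolution (no padding)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[k - 1 - j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A step edge: the first-order response peaks at the edge location.
step = [0.0] * 10 + [1.0] * 10
response = convolve(step, gaussian_deriv_kernel(1.0, 1))
```

The second-order response of the same edge crosses zero at the edge, with an extremum on either side, which is what the blur estimation step later exploits.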
27. The Solution Blurred Edge Detection Multi-scale edge detector working formulae: reliability criterion. The reliability of the filter responses is tested against thresholds c1 and c2, for the first- and second-order Gaussian derivative filters at scales σ1 and σ2 respectively.
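The slides do not reproduce the formulae for c1 and c2, so the following is only a hedged sketch of the standard statistical idea behind such thresholds: the response of a linear filter to i.i.d. Gaussian noise of standard deviation s_n is itself Gaussian with standard deviation s_n times the L2 norm of the kernel, and a response is kept only if it is unlikely under noise alone (the critical value z below stands in for whatever the significance level α_p dictates; all names are my own):

```python
import math

def filter_noise_std(kernel, s_n=2.5):
    """Std of a linear filter's response to i.i.d. Gaussian noise of std s_n:
    s_n * ||kernel||_2 (the speaker notes quote s_n = 2.5)."""
    return s_n * math.sqrt(sum(w * w for w in kernel))

def is_reliable(response, kernel, s_n=2.5, z=3.0):
    """Hypothetical reliability test: accept only responses that exceed
    z noise standard deviations (z is an assumed critical value)."""
    return abs(response) > z * filter_noise_std(kernel, s_n)
```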
29. The Solution Robust Blur Estimation Goal: measure the blur size successfully despite the influence of nearby scene events. [Figure: input vs. our blur measure, from blurry to sharp]
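The slides do not spell out how a blur size is read off the filter responses. One standard relation from the multiscale edge-detection literature the paper builds on (stated here as an assumption, since it is not on the slide): for a step edge blurred by a Gaussian of width σ_b and filtered with a second-order Gaussian derivative of scale σ2, the two response extrema sit a distance d = 2·sqrt(σ_b² + σ2²) apart, which can be inverted to recover σ_b:

```python
import math

def blur_sigma_from_extrema(d, sigma2):
    """Invert d = 2*sqrt(sigma_b**2 + sigma2**2): recover the edge blur
    sigma_b from the distance d between the two second-derivative
    response extrema. Clamped at zero for sub-resolution edges."""
    half = d / 2.0
    return math.sqrt(max(half * half - sigma2 * sigma2, 0.0))
```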
32. The Solution Refinement of Blur Estimation Erroneous blur estimates occur due to soft shadows and glossy highlights. [Figure: input and blur measure, from blurry to sharp]
35. The Solution Remove Outliers Using cross bilateral filtering [Eisemann 04, Petschnigg 04]: a weighted mean of neighboring blur measures. [Figure: blur measure before and after refinement, from blurry to sharp]
36. The Solution Refine Blur Estimation The biased cross bilateral filtering of the sparse set of blur measures BM at an edge pixel p is a weighted mean over neighboring edge pixels q, with spatial weight gσs on the distance ||p − q||, range weight gσr on the image-value difference |C(p) − C(q)|, and bias b(BM) = exp(−BM/2), where gσ(x) = exp(−x²/2σ²), σr = 10% of the image range, and σs = 10% of the image size.
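The weighted mean above can be sketched in 1-D: each refined value averages the sparse blur measures of its neighbors, weighted by spatial proximity, by similarity of the *image* values (this cross/joint use of the image is what makes it a cross bilateral filter), and by the bias b(BM) that favors sharper estimates. A minimal sketch under those assumptions (function names are my own):

```python
import math

def gauss(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma))

def cross_bilateral_refine(positions, blur, intensity, sigma_s, sigma_r):
    """Refine sparse blur measures by a biased weighted mean: spatial
    weight on pixel distance, range weight on image intensity difference,
    and bias b(BM) = exp(-BM/2) toward sharp (small-blur) estimates."""
    refined = []
    for i, p in enumerate(positions):
        num = den = 0.0
        for j, q in enumerate(positions):
            w = (gauss(p - q, sigma_s)
                 * gauss(intensity[i] - intensity[j], sigma_r)
                 * math.exp(-blur[j] / 2.0))   # bias weight b(BM)
            num += w * blur[j]
            den += w
        refined.append(num / den)
    return refined

# An outlier blur estimate surrounded by consistent sharp estimates
# (e.g. from a glossy highlight) is pulled toward its neighbors.
refined = cross_bilateral_refine([0, 1, 2, 3, 4],
                                 [1.0, 1.0, 8.0, 1.0, 1.0],
                                 [10, 10, 10, 10, 10],
                                 sigma_s=2.0, sigma_r=5.0)
```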
37. The Solution Blur Propagation Given a sparse set of blur measures (BM), propagate the blur measure to the entire image. Assumption: blurriness B is smooth except at image edges. Inspired by [Levin et al. 2004]. [Figure: input and blur measure]
38. The Solution Blur Propagation Given a sparse set of blur measures (BM), propagate the blur measure to the entire image. Assumption: blurriness B is smooth except at image edges. We minimize an energy with a data term (keeping B close to BM where measures exist) and a smoothness term whose weights are proportional to exp(−||C(p) − C(q)||²), with αp = 0.5 for edge pixels.
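An energy of that shape can be minimized with a simple iterative solver. The 1-D sketch below (an assumed discretization, not the paper's actual solver) runs Gauss-Seidel sweeps: pixels with a measure are pulled toward it by the data term, and every pixel is pulled toward its neighbors with edge-aware weights exp(−(C(p) − C(q))²), so the blur measure diffuses within regions but not across image edges:

```python
import math

def propagate_blur(bm, color, alpha=0.5, iters=500):
    """Minimize sum_p alpha*(B_p - BM_p)^2 + sum_{p,q} w_pq*(B_p - B_q)^2
    by Gauss-Seidel sweeps. bm[p] is None where no measure exists;
    w_pq = exp(-(C_p - C_q)**2) stops smoothing at image edges."""
    n = len(color)
    B = [bm[p] if bm[p] is not None else 0.0 for p in range(n)]
    for _ in range(iters):
        for p in range(n):
            num = den = 0.0
            if bm[p] is not None:          # data term
                num += alpha * bm[p]
                den += alpha
            for q in (p - 1, p + 1):       # smoothness term
                if 0 <= q < n:
                    w = math.exp(-(color[p] - color[q]) ** 2)
                    num += w * B[q]
                    den += w
            B[p] = num / den
    return B

# Two flat color regions with one measure each: the blur fills each
# region but does not leak across the strong color edge between them.
B = propagate_blur([2.0, None, None, None, None, 0.5],
                   [0, 0, 0, 5, 5, 5])
```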
40. Recap 1. The user provides a single input photograph. 2. Our system automatically produces the defocus map. 3. We use Photoshop's lens blur to generate the defocus-magnified result. [Figure: input, our defocus map, increased defocus] Slide Credit: Bae & Durand
So, when a scene is captured by a camera as an image, i.e. a photograph, some objects of the scene are in focus while others are out of focus, i.e. in defocus. Going back to the problem definition, let us consider the motivation behind all this effort. We have the subjective impression that we view our surroundings in clear, sharp focus. This relates to the photographic tradition in which more or less the complete image remains in focus, i.e. has an infinite depth of field. But this contradicts the biology: the images that fall on the retina are typically quite badly focused everywhere except within the central fovea. There is a gradient of focus, ranging from nearly perfect focus at the point of regard to almost complete blur at points on distant objects. This gradient of focus, inherent in biological and most other optical systems, can be treated as a useful source of depth information, and consequently may be used to recover a depth map (i.e., the distances between the viewer and points in the scene).
A defocus map is a measure of the blurriness across an image, i.e. the blur estimated at each of the edges of the image.
The PSF (point spread function) of an optical system is the irradiance distribution that results from a single point source in object space. Although the source may be a point, its image is not, for two main reasons. First, aberrations in the optical system spread the image over a finite area. Second, diffraction effects also spread the image, even in a system that has no aberrations. The PSF evidently depends on the camera lens properties and on the atmospheric conditions when the image is captured.
Our edge-detection method depends upon making reliable inferences about the local shape of the intensity function at each point in an image. Reliability is defined in terms of an overall significance level α_I for the entire image and a pointwise significance level α_p. The noise at a given point in the image is modeled as a normally distributed random variable with standard deviation s_n (s_n = 2.5), independent of the signal and of the noise at other points in the image.
A weighted sum of the two filter responses is used to compute the gradient direction θ that maximizes the gradient magnitude.