4. Image Fusion & Reconstruction
• Single photo: forces narrow tradeoffs:
– Focus, exposure, aperture, time, sensitivity, noise
– Usual result: incomplete visual appearance
• Multiple photos, assorted settings
for optics, sensor, lighting, processing
• Fusion:
‘Merge the best parts’
• Reconstruction:
Detect photo changes;
compute scene invariants
5. High Dynamic Range Imaging
• Cameras have limited dynamic range
• Short-exposure image (1/500 sec): dark inside
• Long-exposure image (1/4 sec): saturated outside
Images from Raanan Fattal
6. High Dynamic Range Imaging
• Combine images at different exposures
• Exposure Bracketing
• [Mann and Picard 95, Debevec et al 96]
Images from Raanan Fattal
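A minimal sketch of the exposure-bracketing merge above, in the spirit of Debevec et al.: assuming already-linearized images in [0, 1], each exposure votes for a log-radiance estimate, weighted by how well-exposed each pixel is. The function name and the hat-shaped weighting are my own illustrative choices, not taken from the slides.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge linearized bracketed exposures into one HDR radiance map.

    images: list of float arrays in [0, 1] (already linear, e.g. from RAW);
    exposure_times: matching shutter times in seconds.
    Debevec-style: average log radiance with mid-tone ("hat") weights.
    """
    log_radiance = np.zeros_like(images[0])
    weight_sum = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Hat weight: trust pixels far from under- and over-exposure.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        log_radiance += w * (np.log(np.clip(img, 1e-6, 1.0)) - np.log(t))
        weight_sum += w
    return np.exp(log_radiance / np.maximum(weight_sum, 1e-6))
```

With two consistent exposures of the same radiance (e.g. 0.2 at 1/4 sec and 0.8 at 1 sec), the merge recovers the common radiance value.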
7. How could we put all this information into one image?
8. Tone Map: 20-bit image for an 8-bit display
10. Impact of Blur and Halos
• If the decomposition introduces blur and
halos, the final result is corrupted.
Sample manipulation: increasing texture (residual × 3)
16. Our Strategy
• Reformulate the bilateral filter
– More complex space:
homogeneous intensity,
higher-dimensional space
– Simpler expression: mainly a convolution
• Leads to a fast algorithm
[Figure: weights applied to pixels]
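The reformulation sketched on this slide is the bilateral grid (Paris and Durand): splat homogeneous (value, weight) pairs into a downsampled (x, y, intensity) volume, run a plain Gaussian convolution there, then slice back and divide by the homogeneous weight channel. A rough NumPy/SciPy sketch; the grid resolution and blur choices are mine, not the authors':

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bilateral_grid_filter(img, sigma_s=8.0, sigma_r=0.1):
    """Approximate bilateral filter of a grayscale image in [0, 1]
    via a 3-D (x, y, intensity) grid of homogeneous (sum, count) pairs."""
    h, w = img.shape
    gh, gw = int(h / sigma_s) + 3, int(w / sigma_s) + 3
    gd = int(1.0 / sigma_r) + 3
    data = np.zeros((gh, gw, gd))
    weight = np.zeros((gh, gw, gd))
    ys, xs = np.mgrid[0:h, 0:w]
    gy = (ys / sigma_s + 1).round().astype(int)
    gx = (xs / sigma_s + 1).round().astype(int)
    gz = (img / sigma_r + 1).round().astype(int)
    # Splat: accumulate (intensity, 1) homogeneous pairs.
    np.add.at(data, (gy, gx, gz), img)
    np.add.at(weight, (gy, gx, gz), 1.0)
    # A plain Gaussian blur in the grid = bilateral blur in the image.
    data = gaussian_filter(data, 1.0)
    weight = gaussian_filter(weight, 1.0)
    # Slice: trilinear lookup at each pixel's (y, x, intensity) position.
    coords = [ys / sigma_s + 1, xs / sigma_s + 1, img / sigma_r + 1]
    num = map_coordinates(data, coords, order=1)
    den = map_coordinates(weight, coords, order=1)
    return num / np.maximum(den, 1e-8)
```

Because the range dimension keeps different intensities in different grid slices, a sharp step edge survives the spatial blur, which is the edge-preserving property the bilateral filter is used for.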
17. Attenuate High Gradients
[Plot: intensity I(x), log scale 1 to 10^5]
Maintain local detail at the cost of global range
Fattal et al., SIGGRAPH 2002
18. Attenuate High Gradients
[Plots: intensity I(x) and gradient G(x), log scale 1 to 10^5]
Maintain local detail at the cost of global range
Fattal et al., SIGGRAPH 2002
19. Attenuate High Gradients
[Plots: intensity I(x) and gradient G(x), log scale 1 to 10^5]
Keep low gradients
Fattal et al., SIGGRAPH 2002
21. Gradient Domain Compression
HDR image L → Log L → gradients Lx, Ly
→ multiply by gradient attenuation function G
→ 2D integration
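The attenuation stage of the pipeline above can be sketched as follows. This follows the shape of Fattal et al.'s attenuation function φ(∇L) = (α/|∇L|)(|∇L|/α)^β with β < 1, but the parameter defaults are illustrative and the 2D-integration stage is omitted here:

```python
import numpy as np

def attenuation_map(log_lum, alpha_frac=0.1, beta=0.85):
    """Per-pixel attenuation factor for the log-luminance gradient field.

    Gradients larger than alpha are compressed (since beta < 1),
    smaller ones are mildly amplified; the returned map multiplies
    the gradient field before 2D integration.
    """
    gx = np.diff(log_lum, axis=1, append=log_lum[:, -1:])
    gy = np.diff(log_lum, axis=0, append=log_lum[-1:, :])
    mag = np.sqrt(gx**2 + gy**2) + 1e-8
    alpha = alpha_frac * mag.mean()
    # (alpha/mag) * (mag/alpha)**beta == (mag/alpha)**(beta - 1)
    return (alpha / mag) * (mag / alpha) ** beta
```

On an image with one large step, the factor drops below 1 at the step (compressing global range) and exceeds 1 in flat regions (preserving or boosting local detail), which is exactly the tradeoff the slides describe.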
23. Intensity Gradient Manipulation
A common pipeline:
Grad X, Grad Y → Gradient Processing (this section)
→ New Grad X, New Grad Y → 2D Integration (next section)
24. Grad X, Grad Y → Gradient Processing
→ New Grad X, New Grad Y → 2D Integration
25. Local Illumination Change
Original image: f
Original gradient field: ∇f
Modified gradient field: v
Perez et al., Poisson Image Editing, SIGGRAPH 2003
28. Intensity Gradient Vector Projection
[Agrawal, Raskar, Nayar, Li, SIGGRAPH 2005]
29. Intensity Gradient Vectors in Flash and Ambient Images
At scene edges, the flash gradient vector and the
ambient gradient vector share the same direction
[Images: ambient, flash]
No reflections
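The projection idea can be sketched per pixel: keep only the component of the ambient gradient that lies along the flash gradient direction, so structure present in both images survives while reflections present in only one are suppressed. A hedged NumPy sketch (the function name and epsilon guard are mine):

```python
import numpy as np

def project_gradients(ga_x, ga_y, gf_x, gf_y, eps=1e-6):
    """Project the ambient gradient (ga_x, ga_y) onto the direction
    of the flash gradient (gf_x, gf_y) at every pixel.

    Scene edges appear in both images with the same gradient
    direction and pass through; gradients orthogonal to the flash
    gradient (e.g. reflections in the ambient image) are removed.
    """
    dot = ga_x * gf_x + ga_y * gf_y
    norm2 = gf_x**2 + gf_y**2 + eps
    scale = dot / norm2
    return scale * gf_x, scale * gf_y
```

An ambient gradient parallel to the flash gradient is returned unchanged; an orthogonal one projects to zero.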
42. [Pipeline figure: nighttime image I1 and daytime image I2
→ gradient fields G1, G2 (x and y components)
→ importance image W → mixed gradient field G
→ final result]
43. Reconstruction from Gradient Field
• Problem: minimize error |∇I′ − G|
• Estimate I′ so that G = ∇I′
• Poisson equation: ∇²I′ = div G
• Full multigrid solver
54. Photography: Full of Tradeoffs...
• No flash: candle warmth, but high noise
• Flash: low noise, but no candle warmth
[Images: no-flash, flash]
55. Image A: Warm, shadows, but too Noisy
(too dim for a good quick photo)
No-flash
56. Image B: Cold, Shadow-free, Clean
(flash: simple light, ALMOST no shadows)
57. MERGE BEST OF BOTH: apply a
‘cross bilateral’ (‘joint bilateral’) filter
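A minimal sketch of the cross/joint bilateral idea for grayscale images: spatial weights as in a Gaussian blur, but range weights computed from the clean flash image (the guide), so flash edges steer the averaging of the noisy no-flash image. The parameters and the brute-force window loop are illustrative choices of mine:

```python
import numpy as np

def joint_bilateral(noisy, guide, sigma_s=3.0, sigma_r=0.1, radius=6):
    """Denoise `noisy` (no-flash) using edge weights from `guide` (flash).

    Range weights come from the clean guide, so its edges are kept
    while noise in `noisy` is averaged away. np.roll wraps at the
    borders, which is acceptable for a sketch.
    """
    out = np.zeros_like(noisy)
    norm = np.zeros_like(noisy)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_n = np.roll(noisy, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s**2))
            w_r = np.exp(-(shifted_g - guide) ** 2 / (2 * sigma_r**2))
            out += w_s * w_r * shifted_n
            norm += w_s * w_r
    return out / norm
```

On a noisy step image with a clean guide, the filtered result is markedly closer to the guide than the noisy input, while the step edge itself is preserved.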
59. Image Fusion & Reconstruction
• Single photo: forces narrow tradeoffs:
– Focus, exposure, aperture, time, sensitivity, noise
– Usual result: incomplete visual appearance
• Multiple photos, assorted settings
for optics, sensor, lighting, processing
• Fusion:
‘Merge the best parts’
• Reconstruction:
Detect photo changes;
compute scene invariants
60. The Media Lab Camera Culture
Epsilon Photography
Capture multiple photos, each with
slightly different camera parameters.
• Exposure settings
• Spectrum/color settings
• Focus settings
• Camera position
• Scene illumination
77. What else can we extend?
Film-Like Camera Parameters:
• Field of view: image stitching for panoramas
• Dynamic range: radiance maps
• Frame rate: interleaved video
• Resolution: ‘super-resolution’ methods
Visual Appearance & Content:
• Tone map: detail in every shadow and highlight
• Color2grey: keep all color changes in grayscale
• Temporal continuity: space-time fusion
• Viewpoint constraints: multiple-COP images
and more…
78.
Epsilon Photography
Capture multiple photos, each with
slightly different camera parameters.
• Exposure settings
• Spectrum/color settings
• Focus settings
• Camera position
• Scene illumination
84. Vein Viewer (Luminetx)
A near-IR camera locates subcutaneous veins and projects
their location onto the surface of the skin.
[Figure: coaxial IR camera + projector]
87. Varying Polarization
Yoav Y. Schechner, Nir Karpel 2005
[Images: best polarization state, worst polarization state, recovered image]
[Left] The raw images taken through a polarizer. [Right] White-balanced results:
the recovered image is much clearer, especially for distant objects, than the raw images.
88. Varying Polarization
• Schechner, Narasimhan, Nayar
• Instant dehazing of images using polarization
92. Computational Camera & Photography
Ramesh Raskar
Camera Culture, MIT Media Lab
http://cameraculture.info/
98. Non-photorealistic Camera:
Depth Edge Detection and Stylized Rendering
using Multi-Flash Imaging
Ramesh Raskar, Karhan Tan, Rogerio Feris,
Jingyi Yu, Matthew Turk
Mitsubishi Electric Research Labs (MERL), Cambridge, MA
U of California at Santa Barbara
U of North Carolina at Chapel Hill
106.
Epsilon Photography
Capture multiple photos, each with
slightly different camera parameters.
• Exposure settings
• Spectrum/color settings
• Focus settings
• Camera position
• Scene illumination
107. Image Destabilization
[Figure: camera (lens + sensor) imaging a static scene]
[Mohan, Lanman et al. 2009]
108. Image Destabilization
[Figure: camera with lens motion and sensor motion, static scene]
[Mohan, Lanman et al. 2009]
Talk about limitations: colocated artifacts, color coherency, and that the reference can’t be obtained by subtraction.
<Algorithm> The algorithm consists of the following steps. First, we compute the gradient fields of the daytime and nighttime input images using simple forward differencing in the x and y directions. Thresholding the gradient field images lets us find the locally important areas: areas of high variance in the nighttime image, shown here in white. The pixels of the importance image are white in the locally important areas, which are taken from the nighttime image, and black in the context areas, which are taken from the daytime image. We use post-processing (eroding, fattening, and feathering) to consolidate the selected areas and ensure smooth transitions. A mixed gradient field is computed as a weighted mean of the input gradient fields, using the pixel values of the importance image as weights. The final result is obtained by integrating the mixed gradient field.
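The steps in this note can be sketched roughly as below. The window size and threshold are illustrative, and the erode/fatten/feather post-processing is simplified to a box-filtered gradient-energy threshold; all names are mine:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def grad(img):
    """Forward differences in x and y (last column/row padded by replication)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def importance_mask(night, win=8, thresh=0.01):
    """W = 1 where the nighttime image is locally busy (high gradient
    energy in a win x win box), else 0. A real implementation would
    also erode/fatten/feather this mask for smooth transitions."""
    gx, gy = grad(night)
    energy = uniform_filter(gx**2 + gy**2, size=win)
    return (energy > thresh).astype(float)

def mixed_gradients(day, night, W):
    """Weighted mean of the two gradient fields, weights from W."""
    dgx, dgy = grad(day)
    ngx, ngy = grad(night)
    return W * ngx + (1 - W) * dgx, W * ngy + (1 - W) * dgy
```

Integrating the mixed field (the Poisson step described in the next note) then yields the fused day/night result.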
<Gradient field integration> Image reconstruction from gradient fields is an approximate-invertibility problem and still a very active research area. We want to obtain an image I from a gradient field G, given as two images holding the differences in the x and y directions. In 2D, a modified gradient vector field G may not be integrable. We use one of the recently proposed direct methods to minimize the error |∇I′ − G|. The estimate of the desired intensity function I′, such that G = ∇I′, can be obtained by solving the Poisson differential equation ∇²I′ = div G, which involves a Laplacian and a divergence operator. We use the full multigrid method to solve this equation.
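The reconstruction step can be sketched as below. For brevity this uses plain Jacobi iteration rather than the full multigrid solver the note mentions, but it solves the same Poisson equation ∇²I′ = div G with Neumann boundaries:

```python
import numpy as np

def poisson_reconstruct(gx, gy, n_iter=2000):
    """Recover I (up to a constant) from a gradient field (gx, gy)
    by Jacobi iteration on the Poisson equation  lap I = div G.

    Gradients are assumed to be forward differences; the divergence
    uses backward differences so the two operators are adjoint.
    Multigrid solves the same system far faster; Jacobi is fine for
    a small sketch.
    """
    div = np.zeros_like(gx)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[:, 0] += gx[:, 0]
    div[1:, :] += gy[1:, :] - gy[:-1, :]
    div[0, :] += gy[0, :]
    I = np.zeros_like(gx)
    for _ in range(n_iter):
        # Neumann boundary conditions via edge replication.
        up = np.vstack([I[:1], I[:-1]])
        down = np.vstack([I[1:], I[-1:]])
        left = np.hstack([I[:, :1], I[:, :-1]])
        right = np.hstack([I[:, 1:], I[:, -1:]])
        I = (up + down + left + right - div) / 4.0
    return I - I.mean()
```

Given the forward-difference gradients of a smooth ramp, the solver recovers the ramp exactly up to its mean, illustrating that integration inverts the gradient operator for a consistent field.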
When we take a photograph of a group of people, such as this image on the left, what we get is a frozen moment of time that is often less natural, and less attractive than the scene we remember. This is because the cognitive processes that form our visual memories integrate over a range of time to form a subjective impression. This memory will likely look a lot more like the image on the right, where everyone is smiling naturally. The goal of our photomontage system is to help us create photographs that better match the image we see in our mind’s eye. To do so, we begin with a stack of images, and combine the best parts of each to form an image that is better than any of the originals.
The tradeoffs in the CAMERA ADJUSTMENTS don’t match the tradeoffs in the APPEARANCE of what we want to photograph.
Better than any one photo: keep the best from each of them.
Precursor to Google Streetview Maps
See the Photo Tourism page from Steve Seitz's group at the University of Washington.
Full-Scale Schlieren Image Reveals The Heat Coming off of a Space Heater, Lamp and Person
We call our tool NETRA: a near-eye tool for refractive assessment (nearsightedness, farsightedness, astigmatism). The basic idea is to create a unique interactive light-field display near the eye, made possible by the high resolution of modern LCDs.
In a confocal laser scanning microscope, a laser beam passes through a light-source aperture and is then focused by an objective lens into a small (ideally diffraction-limited) focal volume within a fluorescent specimen. A mixture of emitted fluorescent light and reflected laser light from the illuminated spot is then recollected by the objective lens. A beam splitter separates the mixture, allowing only the laser light to pass through and reflecting the fluorescent light into the detection apparatus. After passing through a pinhole, the fluorescent light is detected by a photodetection device (a photomultiplier tube (PMT) or an avalanche photodiode), which transforms the light signal into an electrical one that is recorded by a computer.
But if the photographic signal is RAY CHANGES rather than absolute pixel values, it re-opens some long-settled questions in image processing, namely: what are the best ways to DEPICT visually significant changes? For example, everyone here knows the visually correct way to convert colors to their equivalent gray values. But nobody here (including me) can tell me the one true correct way to convert CHANGES in color to CHANGES in luminance. There are visually significant changes in color that get lost when we simply remove the chrominance…