Though revolutionary in many ways, digital photography is essentially electronically implemented film photography. By contrast, computational photography exploits plentiful low-cost computing and memory, new kinds of digitally enabled sensors, optics, probes, smart lighting, and communication to capture information far beyond just a simple set of pixels. It promises a richer, even a multilayered, visual experience that may include depth, fused photo-video representations, or multispectral imagery. Professor Raskar will discuss and demonstrate advances he is working on in the areas of generalized optics, sensors, illumination methods, processing, and display, and describe how computational photography will enable us to create images that break from traditional constraints to retain more fully our fondest and most important memories, to keep personalized records of our lives, and to extend both the archival and the artistic possibilities of photography.
10. Camera Culture Ramesh Raskar Camera Culture Associate Professor, MIT Media Lab Computational Photography Wish List http://raskar.info
11. [Diagram] Computational photography aims to make progress on both axes: capture (Digital → Epsilon → Coded → Essence; i.e. Raw → angle/spectrum-aware → non-visual data, GPS → metadata, priors → comprehensive 8D reflectance field) and synthesis (low level → mid level → high level → hyper-realism). Examples: camera arrays, HDR, FoV, focal stacks, resolution; decomposition problems (depth, spectrum, light fields); human stereo vision, transient imaging; virtual object insertion, relighting; augmented human experience (material editing from a single photo, scene completion from photos, motion magnification, Phototourism).
21. Fluttered Shutter Camera. Raskar, Agrawal, Tumblin, SIGGRAPH 2006. A ferroelectric shutter in front of the lens is switched opaque or transparent in a rapid binary sequence.
22. Blurred taxi image deblurred by solving a linear system; no other post-processing.
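As a toy illustration of why deblurring reduces to a linear system, here is a minimal 1D sketch (hypothetical flutter code and scene, not the authors' implementation): convolving with a binary on/off code, rather than an ordinary box exposure, produces a well-conditioned matrix that can be inverted directly.

```python
import numpy as np

# Hypothetical binary flutter code (not the code from the paper).
code = np.array([1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1], dtype=float)

# Toy 1D "scene" with a bright moving feature.
n = 64
signal = np.zeros(n)
signal[20:40] = np.linspace(1.0, 0.5, 20)

# Build the (n + k - 1) x n convolution (blur) matrix A, so blurred = A @ signal.
k = len(code)
A = np.zeros((n + k - 1, n))
for i in range(n):
    A[i:i + k, i] = code

blurred = A @ signal

# Deblur by solving the linear system in the least-squares sense.
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
print(np.allclose(recovered, signal))  # True in the noise-free case
```

With a conventional (all-ones) exposure the matrix becomes nearly singular and noise is amplified; the broadband binary code is what keeps the inversion stable.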
25. Image-Based Relighting: measure the incoming light in Milan, light the actress in LA, matte the background, and match the LA and Milan lighting. Debevec et al., SIGGRAPH 2001.
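The core of such relighting is linear: because light is additive, a photo under any lighting environment is a weighted sum of basis photos, each taken with a single light on. A toy numpy sketch with random stand-in data (not the Light Stage pipeline):

```python
import numpy as np

# Stand-in data: one basis photo per light direction, plus measured
# environment intensities (e.g. the light probe captured in Milan).
n_lights, h, w = 8, 4, 4
rng = np.random.default_rng(0)
basis = rng.random((n_lights, h, w))  # photos of the actress, one light at a time
env = rng.random(n_lights)            # intensity of each light in the target environment

# Relight: weighted sum over the basis images (linearity of light transport).
relit = np.tensordot(env, basis, axes=1)  # shape (h, w)
print(relit.shape)  # (4, 4)
```

The real system uses hundreds of light directions and HDR basis images, but the combination step is exactly this weighted sum.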
28. Convert an LCD into a big flat camera? Beyond multi-touch: 3D gestures.
29. BiDi Screen: Large Virtual Camera for 3D Interactive HCI and Video Conferencing. Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar. SIGGRAPH Asia 2009.
32. Shallow DoF with Simple Lens Lots of glass; Heavy; Bulky; Expensive
33. Image Destabilization: Programmable Defocus using Lens and Sensor Motion. Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar. MIT Media Lab, Camera Culture.
34. Image Destabilization: Lens, Sensor, Camera, Static Scene.
89. Photos of tomorrow: computed not recorded http://scalarmotion.wordpress.com/2009/03/15/propeller-image-aliasing/
91. 2nd International Conference on Computational Photography. Papers due November 2, 2009. http://cameraculture.media.mit.edu/iccp10
Editor's notes
http://raskar.info http://cameraculture.info Beyond making it faster and cheaper, one can argue that digital photography has barely changed how we capture and share visual experiences.
Kodak DCS400 in a Nikon F3 body in the early '90s. A commendable first 1.3 MP digital camera, but the film cartridge space was still there! (The first one came in 1991, but even in 1995 the space for the cartridge remained.) Quote from Jack Tumblin: digital photography is like a caged lion that is uncaged in a jungle after years; the lion stays in place rather than rushing out to explore. A billion cameras, but they all look like the human eye. KODAK Professional Digital Camera DCS-100: a camera back and camera winder fitted to an unmodified Nikon F3 camera.
CPUs and computers don't mimic the human brain, and robots don't mimic human activities. Should the hardware for visual computing, which is cameras and capture devices, mimic the human eye? Even if we decide to use a successful biological vision system as a basis, we have a range of choices: from single-chambered to compound eyes, from shadow-based to refractive to reflective optics. So the goal of my group at the Media Lab is to explore new designs and develop software algorithms that exploit those designs.
Currently we solve the human visual perception problem by simply reproducing what the eye would see (even for 3D, we show a stereo pair). But this makes the result difficult for computers to understand or manipulate (a machine-readable representation is needed).
A wish list from consumers and companies today, i.e. what is NOT available today but they wish were. So I am not including WiFi, GPS, face detection, etc. in the list here. But let us dream beyond this list.
Let's dream big: can we look at something beyond the line of sight?
Can photos become emotive abstract renderings?
Inference and perception are important. The intent and goal of the photo are important. Computational photography was pioneered by Nayar, Levoy, Debevec, Microsoft Research, et al. in the late '90s. The same way the camera put photorealistic art out of business, maybe this new art form will put the traditional camera out of business, because we won't really care about a photo, merely a recording of light, but about a form that captures a meaningful subset of the visual experience. Multi-perspective photos; Photosynth is an example.
My own wish list. By definition these problems are not solved, but I will try to give you a flavor of what my group and others are doing in that direction.
See computationalphotography.org
Simplify capture-time decisions; fix everything in post if necessary.
Stanford Plenoptic Camera
But you lose a lot of spatial resolution: a 16 MP sensor is reduced to roughly 300x300 pixels per view, because the remaining pixels encode angular samples.
We can now create light-field cameras by mechanisms that don't introduce new lenses. http://raskar.info/Mask/
Photo with Mask
Sub-aperture views: from these 121 views we can create stereo movies, compute depth, achieve digital refocusing, etc.
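For intuition, here is how sub-aperture views fall out of a lenslet-style light-field photo: each view collects the same angular pixel under every lenslet. (The mask-based camera recovers its views differently, by demultiplexing in the Fourier domain; the shapes below are made up for illustration.)

```python
import numpy as np

views = 11     # 11 x 11 angular samples under each lenslet -> 121 views
lenslets = 30  # toy sensor: 30 x 30 lenslets
sensor = np.random.default_rng(1).random((lenslets * views, lenslets * views))

def sub_aperture(sensor, u, v, views):
    """Extract the (u, v) sub-aperture view: one pixel per lenslet."""
    return sensor[u::views, v::views]

view = sub_aperture(sensor, 5, 5, views)
print(view.shape)  # (30, 30): spatial resolution drops by the angular factor
```

This also makes the resolution trade-off concrete: multiplying spatial by angular samples (here 30 x 30 x 121) accounts for the full sensor pixel count.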
The mask-based light-field camera is more suitable for post-capture control in many ways.
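One such post-capture control, digital refocusing, can be sketched as shift-and-add over the sub-aperture views: shift each view in proportion to its angular offset, then average. This is a generic illustration with assumed array shapes, not the paper's pipeline.

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-add refocusing.

    views: (nu, nv, h, w) array of sub-aperture images.
    alpha: refocus parameter; 0 keeps the original focal plane.
    """
    nu, nv, h, w = views.shape
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            # Shift proportional to the view's angular offset from center.
            du = int(round(alpha * (u - nu // 2)))
            dv = int(round(alpha * (v - nv // 2)))
            out += np.roll(views[u, v], (du, dv), axis=(0, 1))
    return out / (nu * nv)

lf = np.random.default_rng(2).random((5, 5, 16, 16))
img = refocus(lf, alpha=1.0)
print(img.shape)  # (16, 16)
```

Sweeping alpha moves the synthetic focal plane through the scene; real implementations use sub-pixel (interpolated) shifts rather than integer rolls.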
Wish: low-light photography that deals with motion blur.
License-plate example: blur = 60 pixels. Can you guess what the car make is?
Rudy Burger: "don't use flash and destroy the image." Can we use flash not just for improving scene brightness but for enhancing the mood, like studio lights do? The main difference between professionals and consumers is lighting.
Debevec et al. have shown terrific relighting strategies for special effects. Wish: do the same with a compact light source, and exploit natural lighting.
Mechanical size and restrictions: can we create a flat camera that still captures enough light? The current strategy is to shrink the camera in all dimensions, but making it flat also means a tiny lens. Examples: the origami lens (MONTAGE program, Joe Ford, UCSD), the BiDi Screen, Image Destabilization. New exciting projects at Stanford (Marc Levoy's open-source camera) and Shree Nayar's BigShot "lego camera."
How to exploit Sharp's photo-sensing LCD, originally designed for touch sensing, and convert it into a large-area flat camera. Since we are adapting LCD technology, we can fit a BiDi Screen into laptops and mobile devices.
So here is a preview of our quantitative results. I'll explain this in more detail later on, but you can see we're able to accurately distinguish the depth of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
Many are working on extending depth of field. But consumers pay quite a bit to actually reduce the depth of field. Can we support both options?
= Material index and compute bounces (real vs fake) = Automatic 3D, phototourism, and 3D awareness (look around a corner) = Find relationship (network) between all photos = Understand the world (recognize, categorize, make world smarter bokode)
Life Sharing = Automatic lifelogs and summaries (capture and render from other viewpoints, possibly retinal implants) = Privacy in public (smart probes, good recognition) and authentication = Ultimate photoframes, Print 3D and relightable photos (6D), print any material
Martin Fuchs, Ramesh Raskar, Hans-Peter Seidel, Hendrik P. A. Lensch Siggraph 2008
This video is only of the 4D display that responds to light; Bonny's lenticular prints are outside.
A great artifact in a museum that you can hold and observe, or that ride on a roller coaster: synthesize a new experience. = Photo editing using learned models of earlier stroke activities, image filling, etc. = Artistic effects and NPR (like MS Word, but with the artistic ability to express) = Blind camera
Alexis Gerard jokes that he has a camera so advanced you don't even need it. Maybe all the consumer photographer wants is a black box with a big red button: no optics, sensors, or flash. If I am standing in the middle of Times Square and I need to take a photo, do I really need a fancy camera?
The camera can trawl Flickr and retrieve a photo taken at roughly the same position, at the same time of day. Maybe all the consumer wants is a blind camera.
Your wish here: a new digital imaging workflow? Emerging opportunities with smarter cameras? Share over lunch, beer, and cocktails.
Photos will be computed rather than recorded. Computational photography will be there, and it will change the workflow: just as digital turned many pipelines upside down, we will see even more change. At the same time, with cameras that understand our world better, there will be a lot of new opportunities. http://scalarmotion.wordpress.com/2009/03/15/propeller-image-aliasing/