13. Synthesis
Computational Photography (vs. Imaging) aims to make progress on both axes:
- Processing level: Low Level → Mid Level → High Level; hyper-realism
- Capture: raw; angle- and spectrum-aware; non-visual data (GPS); metadata priors; comprehensive 8D reflectance field
Stages: Digital, Epsilon, Coded, Essence photography.
Examples: camera arrays; HDR; field of view; focal stacks; decomposition problems; depth; spectrum; light fields; human stereo vision; transient imaging; virtual object insertion; relighting; augmented human experience; material editing from a single photo; scene completion from photos; motion magnification; Phototourism; resolution.
15. Image Destabilization: Programmable Defocus using Lens and Sensor Motion
Ankit Mohan, Douglas Lanman, Shinsaku Hiura, Ramesh Raskar
MIT Media Lab, Camera Culture
38. Augmenting the Plenoptic Function
- Traditional light field: ray-optics based; simple and powerful
- Wigner Distribution Function (WDF): wave-optics based; rigorous but cumbersome
- Augmented LF: keeps the ray representation while supporting interference & diffraction and interaction with optical elements
39. Light Fields
Goal: represent propagation, interaction, and image formation of light using purely position and angle parameters (with respect to a reference plane).
Pipeline: LF → propagation → (diffractive) optical element (light field transformer) → LF → propagation → LF
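The propagation step in this pipeline can be sketched numerically: under the standard paraxial ray model, free-space propagation of a light field over distance d is a shear in (position, angle) ray space, L'(x, θ) = L(x − dθ, θ). The discretization, grid values, and nearest-neighbour resampling below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def propagate(L, xs, thetas, d):
    """Shear a discretized light field L[ix, itheta] by distance d.

    A ray at (x, theta) moves to (x + d*theta, theta); here we shift each
    angle column by the nearest whole number of grid cells, with periodic
    boundaries for simplicity (an illustrative choice).
    """
    out = np.zeros_like(L)
    dx = xs[1] - xs[0]
    for j, th in enumerate(thetas):
        shift = int(round(d * th / dx))      # x' = x + d*theta, in grid cells
        out[:, j] = np.roll(L[:, j], shift)
    return out

# Tiny example: a single ray at x = 0.2 with theta = 0.1 rad.
xs = np.arange(5) * 0.1
thetas = np.array([-0.1, 0.0, 0.1])
L = np.zeros((5, 3))
L[2, 2] = 1.0
Ld = propagate(L, xs, thetas, d=1.0)   # ray shears to x = 0.3
```

Rays with θ = 0 stay put, while oblique rays translate in proportion to d, which is exactly the parallax a camera-array capture samples.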
41. LF Transformer: input LF to output LF
- General case: 8D
- Thin elements: 6D
- Angle-shift invariant: 4D
42. Augmented LF Framework
1. LF propagation
2. Light field transformer (models the (diffractive) optical element)
3. Negative radiance
4. Interference
Tech report: S. B. Oh et al., http://web.media.mit.edu/~raskar/RayWavefront/
48. International Conference on Computational Photography (ICCP)
March 29-30, 2010, MIT, Cambridge, MA
http://cameraculture.media.mit.edu/iccp10/
Important dates: Submission: November 2, 2009; Notification: February 2, 2010
Topics: Computational Cameras; Multiple Images and Camera Arrays; Computational Illumination; Advanced Image and Video Processing; Scientific Photography and Videography; Organizing and Exploiting Photo & Video Collections
Program Chairs: Kyros Kutulakos, U. Toronto; Rafael Piestun, U. Colorado; Ramesh Raskar, MIT
50. BiDi Screen: Converting an LCD Screen into a Large Camera for 3D Interactive HCI and Video Conferencing
Matthew Hirsch, Henry Holtzman, Doug Lanman, Ramesh Raskar
SIGGRAPH Asia 2009
Inference and perception are important; the intent and goal of the photo are important. The same way the camera put photorealistic art out of business, maybe this new art form will put the traditional camera out of business, because we won't really care about a photo as merely a recording of light, but as a form that captures a meaningful subset of the visual experience. Multiperspective photos; Photosynth is an example.
Pioneered by Nayar and Levoy. Synthesis; minimal change of hardware; the goals are often the opposite (human perception); use of non-visual data, and of the network.
infinity-corrected ‘long-distance microscope’
Augmented plenoptic function: the motivation is to augment the LF so that diffraction can be modeled within the light-field formulation.
Multiplication in space; convolution in angle.
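This space/angle split can be sketched numerically: in the augmented light field, a thin element multiplies the field along the position axis (its transmittance) and convolves it along the angle axis (its angular scattering). The discretization and the example profiles t and k below are my illustrative assumptions, not values from the talk.

```python
import numpy as np

def thin_element_transform(L, t, k):
    """Apply a thin element to a light field L[ix, itheta].

    t: per-position transmittance   -> multiplication in space
    k: angular scattering kernel    -> convolution in angle (per position)
    """
    out = L * t[:, None]                                   # multiplication in space
    out = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, out)  # convolution in angle
    return out

# Tiny example: uniform incoming light field, a 50% absorber covering half
# the aperture, and a 3-tap angular blur (e.g. a weak diffuser).
L = np.ones((4, 5))
t = np.array([1.0, 1.0, 0.5, 0.5])
k = np.array([0.25, 0.5, 0.25])
Lp = thin_element_transform(L, t, k)
```

For angle-shift-invariant elements like this one the same kernel k applies at every position, which is why that case needs only 4D rather than the full 8D transformer.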
More specifically, with the same LF propagation: can we stay purely in ray space and still support propagation, diffraction, and interference? I was highly inspired by Markus Testorf's talk in Charlotte, organized by Prof. Fiddy; he also took the effort to explain this to us and pointed us to his two books on phase-space optics. In addition, I am looking forward to the talks by Prof. Alonso and my MIT colleague Anthony Accorsi. Zhang and Levoy have also clearly described a very useful subset of wave phenomena that can be explained with the traditional light field. Our goal in augmenting the LF is, however, different. Personally, this has been my own path of discovery for how to express complex wave phenomena with rays.
Since we are adapting LCD technology, we can fit a BiDi screen into laptops and mobile devices.
So here is a preview of our quantitative results. I'll explain this in more detail later on, but you can see we're able to accurately distinguish the depths of a set of resolution targets. We show above a portion of the views from our virtual cameras, a synthetically refocused image, and the depth map derived from it.
With the right synergy between capture and synthesis techniques, we go beyond traditional imaging and change the rules of the game.