Scattering optical tomography
with discretized path integral
Toru	Tamaki	(Hiroshima	University)
Joint	work	with
Bingzhi Yuan,	Yasuhiro	Mukaigawa,
and	many	others
Computed Tomography
Multiplexing	Radiography	for	Ultra-Fast	Computed	Tomography
J.	Zhang,	G.	Yang,	Y.	Lee,	S.	Chang,	J.	Lu,	O.	Zhou University	of	North	Carolina	at	Chapel	
Hill https://www.aapm.org/meetings/07AM/VirtualPressRoom/LayLanguage/IIMultiplexing.asp
http://www.sciencekids.co.nz/pictures/humanbody/braintomography.html
Different Modalities
CT	:	http://www.hiroshima-u.ac.jp/news/show/id/13475/dir_id/19
PET	:	http://www.hiroshima-u.ac.jp/hosp/hitokuchi/p_vdo8u6.html	
MRI	:	http://www.hiroshima-u.ac.jp/news/show/id/14427/dir_id/19
CT
(X Ray)
MRI
(Magnetic	field)
PET
(Positron)
(infrared)
Diffuse Optical Tomography (DOT)
http://www.mext.go.jp/b_menu/shingi/gijyutu/gijyutu3/toushin/07091111/008.htm
Input
Light	pulse
Detector
Detector
Time
Time
Lee,	Kijoon.	“Optical	Mammography:	Diffuse	Optical	Imaging	of	Breast	
Cancer.”	World	Journal	of	Clinical	Oncology	2.1	(2011):	64–72.	PMC.	Web.	8	
June	2015.	http://dx.doi.org/10.5306/wjco.v2.i1.64
Our research target
            X-ray CT       Our target    DOT
Radiation   Yes            No            No
Sharpness   Sharp                        Blurred
Scattering  No             Yes           So much
Method      Radon trans.   Ours          PDE by FEM
http://www.sciencekids.co.nz/pictures/humanbody/braintomography.html
Lee,	Kijoon.	“Optical	Mammography:	Diffuse	Optical	Imaging	of	Breast	Cancer.”	World	Journal	of	Clinical	
Oncology	2.1	(2011):	64–72.	PMC.	Web.	8	June	2015.	http://dx.doi.org/10.5306/wjco.v2.i1.64
http://photosozai-database.com/modules/photo/photo.php?lid=664
Scattering
Volumetric	scattering	in	CG
1987
1993
1994
Tomoyuki Nishita,	Yasuhiro	Miyawaki,	and	Eihachiro Nakamae.	
1987.	A	shading	model	for	atmospheric	scattering	considering	
luminous	intensity	distribution	of	light	sources.	SIGGRAPH	Comput.	
Graph. 21,	4	(August	1987),	303-310.	DOI=10.1145/37402.37437	
http://doi.acm.org/10.1145/37402.37437	
Tomoyuki Nishita,	Takao	Sirai,	Katsumi	Tadamura,	and	Eihachiro Nakamae.	1993.	Display	of	the	earth	
taking	into	account	atmospheric	scattering.	In	Proceedings	of	the	20th	annual	conference	on	Computer	
graphics	and	interactive	techniques (SIGGRAPH	'93).	ACM,	New	York,	NY,	USA,	175-182.	
DOI=10.1145/166117.166140	http://doi.acm.org/10.1145/166117.166140	
Tomoyuki Nishita and	Eihachiro Nakamae.	1994.	Method	of	displaying	optical	effects	within	water	using	
accumulation	buffer.	In	Proceedings	of	the	21st	annual	conference	on	Computer	graphics	and	interactive	
techniques (SIGGRAPH	'94).	ACM,	New	York,	NY,	USA,	373-379.	DOI=10.1145/192161.192261	
http://doi.acm.org/10.1145/192161.192261	
Copyright	©	2011	by	the	Association	for	Computing	Machinery,	Inc.	(ACM).
Tomoyuki Nishita,	Yoshinori	Dobashi,	and	Eihachiro Nakamae.	1996.	Display	of	clouds	taking	into	account	
multiple	anisotropic	scattering	and	sky	light.	In	Proceedings	of	the	23rd	annual	conference	on	Computer	
graphics	and	interactive	techniques (SIGGRAPH	'96).	ACM,	New	York,	NY,	USA,	379-386.	
DOI=10.1145/237170.237277	http://doi.acm.org/10.1145/237170.237277	
Yonghao Yue,	Kei	Iwasaki,	Bing-Yu	Chen,	Yoshinori	Dobashi,	and	Tomoyuki Nishita.	2010.	Unbiased,	
adaptive	stochastic	sampling	for	rendering	inhomogeneous	participating	media.	ACM	Trans.	Graph. 29,	6,	
Article	177	(December	2010),	8	pages.	In	ACM	SIGGRAPH	Asia	2010	papers (SIGGRAPH	ASIA	'10).	
DOI=10.1145/1882261.1866199	http://doi.acm.org/10.1145/1882261.1866199	
Volumetric	scattering	in	CG
1996
2010
Copyright	©	2011	by	the	Association	for	Computing	Machinery,	Inc.	(ACM).
Computer vision approach
to the inverse problem
Observed
image
Forward	model
Generated	by	Computer	Graphics
Image
Computer Graphics: Generation
Computer Vision: Analysis
Model parameters
Minimizing ‖observation − model prediction‖²
Vision / CG approaches
University of British Columbia
Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids
Gregson et al. (with Matthias B. Hullin and Wolfgang Heidrich), University of British Columbia
[Figure: photo of an acquisition rig for fluid phenomena, consisting of 5–16 strobe-synchronized consumer cameras, and reconstructions of an unsteady two-phase flow at different points in time. The technique captures the fine-scale features of this challenging dataset; SAD regularizer, 6 min/frame.]
1 Introduction
The capture of dynamic 3D phenomena has been the subject of considerable research in computer graphics, extending to both the scanning of deformable shapes, such as human bodies (e.g. [de Aguiar et al. 2007; de Aguiar et al. 2008]), faces (e.g. [Bickel et al. 2007; Alexander et al. 2009]), and garments (e.g. [White et al. 2007; Bradley et al. 2008]), as well as natural phenomena including liquid surfaces [Ihrke and Magnor 2004; Wang et al. 2009], gases [Atcheson et al. 2008], and flames [Ihrke and Magnor 2004]. Access to this kind of data is not only useful for direct re-rendering, but also for deepening our understanding of a specific phenomenon.
Figure 7. Views of three different reconstructions of the "torch" dataset (multiplication, flame sheet, blobs), shown at 0°, 30°, 60°, and 90°. The same two input images were used for all methods. Since all methods reconstruct the input views, only novel views of the reconstructed flame are shown.
[Table: RMS error per reconstruction method, over input views and all views.]
Gregson+	2012TOG
Hasinoff and	Kutulakos 2007PAMI
No	scattering
Figure 2: Eight images captured during a single sweep of the laser plane.

...off the mirror, the laser is spread into a vertical sheet of light by the cylindrical lens. This sheet of light creates an illuminated plane of smoke, which the camera views from a perpendicular direction.

During scanning, the galvanometer sweeps the laser sheet from the front to the back of the smoke volume, while the camera captures images of the scattered light. The galvanometer then rapidly jumps back to the front of the smoke volume and the process is repeated. In this work we capture 200 images during each volume scan. At 5000 frames per second, this allows the volume to be scanned at 25 Hz. We placed both the camera and galvanometer 3 meters from the working volume, and chose the camera lens, cylindrical lens, and galvanometer scan angle so that the camera field of view and laser frustum were both 6°. This provided a working volume of approximately 30×30×30 cm.

The galvanometer control software also triggers the camera, allowing the frame capture to be synchronized with the position of the laser slice. This assures that corresponding slices in successive captured volumes are sampled at the same position.

The high frame rate and the small fraction of the total light scattered to the camera require a relatively powerful laser. We use an argon ion laser that produces 3 watts of light at a set of wavelengths closely clustered around 500 nm.

3.1 Calibration

...the laser scanner. Repeating this for another dot gives another such line, and intersecting the two lines determines Lc. Similarly, any two identified points on the laser line for a given slice s, together with Lc, determines the plane Ps of illumination.

We observed that the intensity of illumination varied as a function of slice s and of elevation angle φ within each laser slice. To correct for this, we placed a uniform diffuse plane at an angle where it was visible to both the laser and the camera and captured a sequence of frames. As the laser scans this reflectance standard, the observed intensities are directly proportional to the intensity of laser light for each laser plane and pixel position. Having calibrated the laser planes Ps and the camera position, we can find the 3D point v corresponding to each laser plane and pixel position, and apply the appropriate intensity compensation when computing densities along the ray from the laser position Lc through the point v. Although this corrects for the relative intensity differences across the laser frustum, we should note that we do not measure the absolute intensity of the laser. The resulting density fields are therefore only recovered up to an unknown scale factor.

We have assumed that image intensity values can be directly interpreted as density values. This assumes that the effects of extinction and multiple scattering within the volume are negligible. Although we found that this assumption held for the smoke volumes we captured, this assumption will fail when the densities become very high, such as for thick smoke or dyes in water.
Smoke	only
Hawkins+ 2005 TOG
An Inverse Problem Approach for Automatically Adjusting the Parameters for Rendering Clouds Using Photographs
Yoshinori Dobashi (1,2), Wataru Iwasaki (1), Ayumi Ono (1), Tsuyoshi Yamamoto (1), Yonghao Yue (3)
1: Hokkaido University  2: JST, CREST  3: The University of Tokyo

Figure 1: Examples of our method. The parameters for rendering clouds are estimated from the real photograph shown in the top left corner of each image. The synthetic cumulonimbus clouds are rendered using the estimated parameters.

Abstract: Clouds play an important role in creating realistic images of outdoor scenes. Many methods have therefore been proposed for displaying realistic clouds. However, the realism of the resulting images depends on many parameters used to render them, and it is often difficult to adjust those parameters manually.
Dobashi+ 2012 TOG
Narasimhan+ 2006 SIGGRAPH
Gkioulekas+ 2013 SIGGRAPH Asia
Appears in the SIGGRAPH Asia 2013 proceedings.
Inverse Volume Rendering with Material Dictionaries
Ioannis Gkioulekas (Harvard University), Shuang Zhao (Cornell University), Kavita Bala (Cornell University)
Figure 1: Acquiring scattering parameters. Left: samples of two materials.
Uniform	parameter	estimation
INTRODUCTION TO
SCATTERING
Scattering model
• Volumetric	scattering
Participating	medium
(fog,	cloud,	smoke,	etc)
Types of Phase function
Forward	scattering
Backward	scattering
Phase	function
Isotropic	scattering
input
Output	distribution
Scattering model
Phase	function
Rendering for CG
Numerical integration along the line of sight
Light in participating medium
Attenuation
Absorption
Out-scattering
In-scattering
Multiple	scattering
Emission
Usually	ignored
Absorption
(Microscopic)	Beer-Lambert	Law
Absorption	coefficient Infinitesimal
path	length
(Macroscopic)	Beer-Lambert	Law
Integration
from		0	to	d
or
Exponential	attenuation
When	σa is
not	constant
optical	depth
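The slide carries these formulas as images; a standard reconstruction consistent with the labels above (microscopic law, integration from 0 to d, and optical depth τ) is:

```latex
% microscopic Beer--Lambert law: infinitesimal path length ds
dI = -\sigma_a(x)\, I\, ds
% macroscopic: integrate from 0 to d
I(d) = I_0 \exp\!\left( -\int_0^d \sigma_a(x(s))\, ds \right) = I_0\, e^{-\tau}
% constant sigma_a: exponential attenuation (tau is the optical depth)
I(d) = I_0\, e^{-\sigma_a d}
```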
http://www.sciencekids.co.nz/pictures/humanbody/braintomography.html
CT:	estimating
X-Ray	absorption
Out-scattering
(Microscopic)	Beer-Lambert	Law
Scattering	coefficient Infinitesimal
path	length
(Macroscopic)	Beer-Lambert	Law
Integration
from		0	to	d
or
Exponential	attenuation
Attenuation = absorption + out-scattering
Absorption
Out-scattering
Attenuation
Extinction	coefficient
Exponential	attenuation
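In symbols (reconstructed, since the slide shows the equations as images), attenuation combines both loss mechanisms:

```latex
\sigma_t(x) = \sigma_a(x) + \sigma_s(x), \qquad
I(d) = I_0\, e^{-\sigma_t d} \quad (\sigma_t\ \text{constant})
```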
Emission
Emission
Absorption	coefficient
(Emission	of	absorbed	energy)
Emitted	light
In-scattering
Phase	function
(from	ω’ to	ω at	x)
(usually	common	for	all	x)
Incident	light
(from	ω’ at	x )
Integrating	all	incoming	lights	over	the	sphere
or
Rendering equation
Light	transport	equation
Volume	rendering	equation	(differential form)
Attenuation term
In-scattering term
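A standard form of the differential volume rendering equation consistent with these labels (emission ignored, as noted earlier; reconstructed here because the slide shows it as an image):

```latex
(\omega \cdot \nabla)\, L(x,\omega) =
\underbrace{-\,\sigma_t(x)\, L(x,\omega)}_{\text{attenuation term}}
\;+\;
\underbrace{\sigma_s(x) \int_{S^2} f_p(\omega',\omega)\, L(x,\omega')\, d\omega'}_{\text{in-scattering term}}
```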
DOT with Diffuse Approximation
Attenuation term
In-scattering term
Time-resolved
observation
PDE	solved	by	FEM
No	forward
scattering
Blurred	results
Problem
Diffuse	Approximation
(first order	spherical	
harmonics	approximation)
http://commons.wikimedia.org/wiki/File:Harmoniki.png
Main assumption
Rendering equation
Light	transport	equation
Volume	rendering	equation	(integral form)
In-scattering	term
Attenuation	
term
Attenuation	of	
scattered	light
Light Paths:
from light sources to cameras
Light	source
Camera	/	eye
Light Paths:
from light sources to cameras
Light	source
Camera	/	eye
Path (Ray) tracing:
from cameras to light source
Light	source
Camera	/	eye
Paths
Path integral approach
introduced by Eric Veach (1997 Ph.D. thesis)
http://graphics.stanford.edu/papers/veach_thesis/
Pixel	value	of	pixel	j
Paths of length k
All possible paths of length k
Path	measure
Product	of	measures	at	each	vertex
All	possible	paths
of	any	length (path	space)
Monte	Carlo
Integration
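The Monte Carlo integration step can be illustrated with a tiny, generic estimator. This is a sketch of the general idea only; the function names and sampling interface are ours, not from Veach's thesis:

```python
import numpy as np

def mc_estimate(contribution, sample_path, n=100000, seed=0):
    """Monte Carlo estimate of an integral over path space:
    draw paths x ~ p(x) and average f(x) / p(x)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n):
        x, pdf = sample_path(rng)   # a sampled path and its density
        total += contribution(x) / pdf
    return total / n
```

For example, sampling x uniformly on [0, 1] (pdf = 1) with f(x) = x² estimates ∫₀¹ x² dx = 1/3; in rendering, x is a whole light path and f its measurement contribution.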
Path integral for participating media
introduced by Mark Pauly (1999 Master's thesis)
http://graphics.stanford.edu/~mapauly/publications.html
Pixel	value	of	pixel	j
measure
Area	measure	if	surface
Voxel	measure	if	volume
Metropolis	Light	Transport	for	Participating	Media
Mark	Pauly,	Thomas	Kollig,	Alexander	Keller
Eurographics Workshop	on	Rendering,	2000
Details of paths in participating media
Anders Wang Kristensen, Efficient Unbiased Rendering using Enlightened Local Path Sampling, Ph.D. thesis, 2010 URL
Attenuation
Geometric	term
BRDF	/	scattering	coeff,	phase	function
surface
inside
Cos	term
All	paths	of	length	k
measure
Camera	response
function
To	be	estimated!
PROPOSED METHOD
FOR THE INVERSE PROBLEM
Discretization. How?
Extinction	coefficient
at	voxel	b
Length	that
the	path	xi->xi+1 passes	voxel	b
\int_0^1 \sigma_t\big((1-s)x_i + s\,x_{i+1}\big)\,ds
\;\approx\; \sum_b \sigma_t[b]\; d_{x_i,x_{i+1}}[b]
\;=\; \sigma_t \cdot d_{x_i,x_{i+1}}
vectorize
Sparse	vector	of	coefficients
Sparse	vector	of	path	length
Inner	product
of	sparse	vectors
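The inner product of the two sparse vectors can be sketched numerically. The helper below approximates d_{x_i,x_{i+1}}[b] (the length of the segment inside each unit voxel b) by dense sampling along the segment; the names and the sampling scheme are ours, for illustration only, not the authors' implementation:

```python
import numpy as np

def voxel_lengths(x0, x1, grid_shape, n_samples=20000):
    """Approximate d_{x0,x1}[b]: the length of the segment x0 -> x1
    that falls inside each unit voxel b (sampling-based sketch)."""
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    ds = np.linalg.norm(x1 - x0) / n_samples   # arc length per sample
    d = {}
    for s in (np.arange(n_samples) + 0.5) / n_samples:
        p = (1.0 - s) * x0 + s * x1            # point along the segment
        b = tuple(int(v) for v in
                  np.clip(p, 0, np.array(grid_shape) - 1e-9).astype(int))
        d[b] = d.get(b, 0.0) + ds
    return d

def attenuation(sigma_t, d):
    """exp(-sigma_t . d): the discretized Beer-Lambert factor as an
    inner product of the sparse coefficient and path-length vectors."""
    tau = sum(sigma_t[b] * length for b, length in d.items())
    return float(np.exp(-tau))
```

An exact implementation would use a voxel traversal (DDA) instead of sampling, but the inner-product structure, which is what the inversion exploits, is the same.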
Discretization. How?
Extinction	coefficient
at	voxel	b
Length	that
the	path	xi->xi+1 passes	voxel	b
\int_0^1 \sigma_t\big((1-s)x_i + s\,x_{i+1}\big)\,ds
\;\approx\; \sum_b \sigma_t[b]\; d_{x_i,x_{i+1}}[b]
\;=\; \sigma_t \cdot d_{x_i,x_{i+1}}
Before (attenuation as a line integral):

I = \sum_{k=1}^{\infty} \sum_{\Omega_k} L_e(x_0,x_1)\,
\frac{e^{-\int_0^1 \sigma_t((1-s)x_0+s\,x_1)\,ds}}{\|x_0-x_1\|^2}
\left[ \prod_{i=1}^{k-1} \sigma_s(x_i)\, f_p(x_{i-1},x_i,x_{i+1})\,
\frac{e^{-\int_0^1 \sigma_t((1-s)x_i+s\,x_{i+1})\,ds}}{\|x_i-x_{i+1}\|^2} \right]
W_e(x_{k-1},x_k)\, d\mu_k

After (attenuation discretized as an inner product):

I = \sum_{k=1}^{\infty} \sum_{\Omega_k} L_e(x_0,x_1)\,
\frac{e^{-\sigma_t \cdot d_{x_0,x_1}}}{\|x_0-x_1\|^2}
\left[ \prod_{i=1}^{k-1} \sigma_s(x_i)\, f_p(x_{i-1},x_i,x_{i+1})\,
\frac{e^{-\sigma_t \cdot d_{x_i,x_{i+1}}}}{\|x_i-x_{i+1}\|^2} \right]
W_e(x_{k-1},x_k)\, d\mu_k

using \int_0^1 \sigma_t((1-s)x_i + s\,x_{i+1})\,ds \approx \sum_b \sigma_t[b]\, d_{x_i,x_{i+1}}[b] = \sigma_t \cdot d_{x_i,x_{i+1}}.
Addition of Vectorized paths
I = \sum_{k=1}^{\infty} L_e(x_{-1},x_0) \sum_{\Omega_k}
\left[ \prod_{i=1}^{k-1} \sigma_s(x_i)\, f_p(x_{i-1},x_i,x_{i+1}) \right]
\left[ \frac{e^{-\sigma_t \cdot \left( \sum_{i=0}^{k-1} d_{x_i,x_{i+1}} \right)}}
{\prod_{i=0}^{k-1} \|x_i-x_{i+1}\|^2 / dV_i} \right]
dA_{-1}\, dV_k\, dA_{k+1}

= \sum_{k=1}^{\infty} L_e(x_{-1},x_0) \sum_{\Omega_k} H_k\, e^{-\sigma_t \cdot D_k}

= L_e(x_{-1},x_0) \sum_{\Omega} H_k\, e^{-\sigma_t \cdot D_k}
I = \sum_{k=1}^{\infty} \sum_{\Omega_k} L_e(x_0,x_1)\,
\frac{e^{-\sigma_t \cdot d_{x_0,x_1}}}{\|x_0-x_1\|^2}
\left[ \prod_{i=1}^{k-1} \sigma_s(x_i)\, f_p(x_{i-1},x_i,x_{i+1})\,
\frac{e^{-\sigma_t \cdot d_{x_i,x_{i+1}}}}{\|x_i-x_{i+1}\|^2} \right]
W_e(x_{k-1},x_k)\, d\mu_k

σt: to be estimated. Everything else: assumed to be known.
The forward model
x_k-1
x_k
x1
x0
Source	location	s
I_{s,t} = L_e \sum_{\Omega} H_k(s,t)\, e^{-\sigma_t \cdot D_k(s,t)}
Observation
location	t
Observation	at	s,t
x_k-1
x_k
x1
x0
Source	location	s
Observation
location	t
x_k-1
x_k
x1
x0
Source	location	s
Observation
location	t
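In code, the discretized forward model is just a weighted sum of exponentials over the enumerated paths. A minimal sketch (the array shapes and names are our assumptions, not the authors' code):

```python
import numpy as np

def forward_model(Le, H, D, sigma_t):
    """I_{s,t} = Le * sum_k H_k * exp(-sigma_t . D_k)
    Le:      source radiance (scalar, assumed known)
    H:       (K,) per-path weights (scattering/phase/geometry products)
    D:       (K, B) per-path voxel traversal lengths
    sigma_t: (B,) extinction coefficient per voxel (to be estimated)"""
    return Le * float(np.sum(H * np.exp(-D @ sigma_t)))
```

With a single path of unit length through voxel 0, this reduces to Le · H₀ · e^{−σt[0]}, the plain Beer-Lambert factor.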
The inverse problem
minimize ‖observation − model‖²  subject to  σt ≥ 0
(the coefficients must be positive)
Constrained problem: Interior point

We introduce a constrained optimization with inequality constraints of the form

\min_{\sigma_t} f_0(\sigma_t) \quad \text{subject to} \quad f_i(\sigma_t) \le 0, \; i = 1, \dots, m, \qquad (37)

where \sigma_t \in \mathbb{R}^{N \times M} is a real vector and f_0, \dots, f_m : \mathbb{R}^{N \times M} \to \mathbb{R} are twice continuously differentiable. The idea is to approximate it as an unconstrained problem. Using Lagrange multipliers, we can first rewrite problem (37) as

\min_{\sigma_t} f_0(\sigma_t) + \sum_{i=1}^{m} I(f_i(\sigma_t)), \qquad (38)
Solved	by	the	log-barrier	interior	point	method
Cost	function	with	constraints
The idea is to approximate it as an unconstrained problem. Using Lagrange multipliers, we first rewrite problem (37) as

\min_{\sigma_t} f_0(\sigma_t) + \sum_{i=1}^{m} I(f_i(\sigma_t)), \qquad (38)

where I : \mathbb{R} \to \mathbb{R} is an indicator function which keeps the solution inside the feasible region:

I(f) = \begin{cases} 0, & f \le 0 \\ \infty, & f > 0. \end{cases}

Problem (38) now has no inequality constraints, but it is not differentiable. The barrier method [26] is an interior point method which introduces a logarithmic barrier to approximate the indicator function I as follows:

\hat{I}(f) = -(1/t) \log(-f),

where t > 0 is a parameter that adjusts the accuracy of the approximation. The barrier goes to infinity rapidly as f approaches 0, while it is close to 0 when f is far below 0. Since it is differentiable, we have

\min_{\sigma_t} f_0(\sigma_t) - \sum_{i=1}^{m} (1/t) \log(-f_i(\sigma_t)).
Initial value of σt and barrier parameter t
repeat:
  Solve min_{σt} f0(σt) − Σi (1/t) log(−fi(σt))  (log-barrier functions)  by Quasi-Newton
  Update t (larger)
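A minimal sketch of the log-barrier loop for the nonnegativity constraint σt ≥ 0. The slides use a Quasi-Newton inner solver; plain gradient descent is substituted here to keep the example short, and all names are ours:

```python
import numpy as np

def log_barrier_minimize(f0, grad_f0, x0, t0=1.0, mu=10.0,
                         n_outer=8, inner_steps=500, lr=1e-3):
    """Minimize f0(x) subject to x >= 0 with the log-barrier method:
    repeatedly minimize  f0(x) - (1/t) * sum(log(x))  (gradient descent
    stands in for the Quasi-Newton inner solver), then increase t."""
    x = x0.astype(float).copy()
    t = t0
    for _ in range(n_outer):
        for _ in range(inner_steps):
            g = grad_f0(x) - (1.0 / t) / x   # d/dx of -(1/t)*log(x) is -(1/t)/x
            x_new = x - lr * g
            if np.all(x_new > 0):            # keep the iterate strictly feasible
                x = x_new
            else:
                lr *= 0.5                    # back off when a step leaves the domain
        t *= mu                              # tighten the barrier
    return x
```

As t grows, the barrier term matters only very close to the boundary, so the iterates approach the constrained minimizer from inside the feasible set.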
Target model
• 3D: ideal, but costly in space and time
• 2D: second option; less expensive, but still a large cost
• 2D layered: smallest cost; a rough approximation
Proposed 2D layered model
• Light	scatters:
– Layer	by	layer
– At	voxel	centers
– Forward	only
• Subject	to	a	simplified	
phase	function
– With	uniform/known	
scattering	coefficients
Enumerating paths
Source	location	s
Observation
location	t
Source	location	s
Observation
location	t
Exhaustive	(with	thresholding) (Bi-directional)	Monte	Carlo-like
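Exhaustive enumeration in the layered model picks one voxel per layer, moving forward only. A sketch (function name and argument conventions are ours):

```python
from itertools import product

def enumerate_layer_paths(n_layers, width, source, detector):
    """All forward-only, layer-by-layer paths through a layered grid:
    one voxel index per layer, entering at `source` on the first layer
    and leaving at `detector` on the last layer."""
    inner = product(range(width), repeat=n_layers - 2)
    return [(source,) + mid + (detector,) for mid in inner]
```

For 4 layers of width 3 this enumerates 3² = 9 paths per (source, detector) pair; in practice the slides prune such sets with a threshold on the path weight ("exhaustive with thresholding").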
Different directions
IV. NUMERICAL SIMULATIONS

A. Simple material

We describe numerical simulation results of the proposed method for evaluation. A two-dimensional material is divided into 10 × 10 voxels as shown in Figure 4(a). The material has almost homogeneous extinction coefficients of value 0.05 except for a few voxels with much higher coefficients of 0.2, which means those voxels absorb light much more than other voxels. A light source is assumed to emit light at each of m positions on the top layer, and a detector observes the outgoing light at each of m positions on the bottom layer. The variance σ² of the Gaussian function (1) for scattering from one layer to the next is set to 2. The upper bound u in Eq. (8) is set to 1.

Estimated extinction coefficients are shown in Figure 4(b). The homogeneous part of the material with extinction coefficients of 0.05 (voxels shaded in light gray) is reasonably estimated because the estimated values fall in the range between 0.04 and 0.06. In contrast, the dense part of the material with extinction coefficients of 0.2 (voxels shaded in dark gray) is not estimated well but looks blurred vertically. This blur effect is due to the experimental setting that the light paths go vertically from top to bottom inside the material.

This effect can be verified by simply rotating the setting by 90 degrees (now the configuration is a horizontal one) and applying the proposed method. Figure 4(c) shows the estimation result when the light source is on the left side of the material and the detector is on the right side. The estimated extinction coefficients are now blurred horizontally because light paths go from left to right. A simple trick to integrate these two results is to take the average of them, which is shown in Figure 4(d).

Figure 4: Simulation results of a material of size 10 × 10. (a) Ground truth of the distribution of extinction coefficients, shown as values in each voxel as well as by voxel colors. (b) Estimated extinction coefficients; a light source and detector are above and below the media. (c) Estimated extinction coefficients; a light source and detector are on the left and right sides of the media. (d) Average of (b) and (c).

Cost function values over iterations are shown in Figure
Averaging
Vertical
estimates
Horizontal estimates

Toru Tamaki, Bingzhi Yuan, Bisser Raytchev, Kazufumi Kaneda, Yasuhiro Mukaigawa, "Multiple-scattering Optical Tomography with Layered Material," The 9th International Conference on Signal-Image Technology and Internet-Based Systems (SITIS 2013), December 2-5, 2013, Kyoto, Japan.
Joint optimization
Bingzhi Yuan,	Toru	Tamaki,	Takahiro	Kushida,	Bisser	Raytchev,	Kazufumi	Kaneda,	Yasuhiro	Mukaigawa and	
Hiroyuki	Kubo,	"Layered	optical	tomography	of	multiple	scattering	media	with	combined	constraint	
optimization",	in	Proc.	of	FCV	2015	:	21st	Korea-Japan	joint	Workshop	on	Frontiers	of	Computer	Vision,	January	
28--30,	2015,	Shinan Beach	Hotel,	Mokpo,	KOREA.
Results of numerical simulation
Ground truth
(a) (b) (c) (d) (e)
• 20×20 grid of 20[mm]×20[mm]
• Extinction coefficients: between 1.05 and 1.55 [mm⁻¹]
• Scattering coefficients (known): 1 [mm⁻¹]
• Phase function (known): a simplified approximation
• Fixed 742 paths for each (s, t) pair (exhaustive with thresholding)
• Joint optimization with interior point method
Estimated	results
Shepp-Logan phantom
20	x	20
Comparison with DOT
GT
Ours
DOT (GN)
DOT (PD)
(a) (b) (c) (d) (e)
Fig 11. Numerical simulation results for a grid of size 24 × 24 with different inverse solvers. Darker shades of gray represent larger values.
Ground
Truth
Ours
DOT
(Gauss-Newton)
DOT
(Primal-Dual	interior	point)
• 24x24	grid	of	24[mm]x24[mm]
• 24x24x2	=	1152	triangle	meshes	for	FEM
• 16	light	sources	and	16	detectors	around	the	square
• With	EIDORS	(Electrical	Impedance	
Tomography	and	Diffuse	Optical	Tomography	
Reconstruction	Software)	
eidors3d.sourceforge.net
Accuracy and cost
(a) (b) (c) (d) (e)
                      (a)        (b)        (c)        (d)        (e)
RMSE
  Ours (σ² = 0.4)     0.007662   0.01244    0.026602   0.021442   0.051152
                      (0.730%)   (1.18%)    (2.53%)    (2.04%)    (4.87%)
  DOT (GN)            0.053037   0.060597   0.7605     0.059534   0.0855
                      (5.05%)    (5.77%)    (7.53%)    (5.67%)    (8.14%)
  DOT (PD)            0.052466   0.0626     0.081081   0.066042   0.080798
                      (5.25%)    (5.97%)    (8.11%)    (6.60%)    (8.08%)
Computation time [s]
  Ours (σ² = 0.4)     257        217        382        306        504
  DOT (GN)            0.397      0.390      0.407      0.404      0.453
  DOT (PD)            1.11       1.09       1.14       1.08       1.15

Table 2. RMSEs and computation time of the numerical simulations for five different media (a)–(e) with grid size of 24 × 24, with our proposed method and DOT with two solvers. Numbers in brackets are relative errors of the RMSE to the background extinction coefficient value (i.e., 1.05).
GT
σ² = 0.2
σ² = 0.4
(a) (b) (c) (d) (e)

Fig 7. Numerical simulation results for a grid of size 20 × 20. Darker shades of gray represent larger values (more light is absorbed at the voxel). The bars on the side show extinction coefficient values in grayscale. The first row shows the ground truth of the five different media (a)–(e) used for the simulation. The second and third rows show estimated results for σ² = 0.2 and σ² = 0.4, respectively, of the Gaussian phase function.

...much higher coefficients of 1.2 (voxels shaded in dark gray), which means that those voxels absorb much more light than other voxels. The coefficients are estimated reasonably well, as shown in the middle and bottom rows, and the root-mean-squared error (RMSE) shown in Table 1 is small enough, with a relative error of 0.0075/1.05 = 0.7% to the background coefficient value. The other media, shown in columns (b)–(e), have more complex distributions of the extinction coefficients. We use the RMSEs to show the quality of the estimated results; the ratio in brackets shows the relative magnitude of the RMSE compared with the majority extinction value, or the background, in the medium.
Our	implementation:	MATLAB
Computation cost NOW!
Devices for Scattering Tomography
Y.Ishii,	T.Arai,	Y.Mukaigawa,	J.Tagawa,	Y.Yagi,	"Scattering	Tomography	by	Monte	Carlo	Voting",	Proc.	IAPR	International	Conference	on	Machine	Vision	Applications	
(MVA2013),	May.	2013.
[Figure: it is difficult to detect foreign substances (e.g., a needle in bread) because of the influence of light scattering.]
Devices for Scattering Tomography
Ryuichi Akashi,	Hajime	Nagahara,	Yasuhiro	Mukaigawa	and	Rin-ichiro	Taniguchi,	Scattering	Tomography	Using	Ellipsoidal	Mirror,	FCV2015.
Fig. 8. Experimental results for an object with two obstacles. (Left) actual configuration; (Middle) image from wrap-around viewing; (Right) image from single-direction viewing.

Camera / Projector / Ellipsoidal mirror
Fig. 10. Configuration of the optical system for wrap-around viewing.
Fig. 11. Projector-camera system. Light scattered by and passing through the target object is reflected back and directed toward the camera lens at the other focal point. In consequence of this setup, we can observe scattering.

Fig. 12. (a) Slit board and (b) projected grid pattern.

Camera / Projector / Screen / Slit / Beam splitter
Fig. 13. Optical layout during alignment. During alignment, we replaced the ellipsoidal mirror with a screen and placed the slit board in front of the screen.

Fig. 14. Alignment result: (a) before alignment; (b) after alignment.

Fig. 15. Cylindrical scattering object made of silicon resin with a single metal obstacle. The object is cylindrical in shape with a diameter of 27 mm, with metal wires as obstacles, visible as the darkened filament inside.

Fig. 16. Image obtained using wrap-around viewing: an estimated distribution for this real object.
Real	object
Estimation
Summary
• Optical	tomography	for	scattering	media
• Observation	model:	path	integral
• Least-square	approach	with	constraints
• Solver:	interior	point	method
• Experiments:	numerical	simulation
• Future work: a lot
More Related Content

What's hot

Basics of ct lecture 2
Basics of ct  lecture 2Basics of ct  lecture 2
Basics of ct lecture 2Gamal Mahdaly
 
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...Jan Michálek
 
Ct image quality artifacts and it remedy
Ct image quality artifacts and it remedyCt image quality artifacts and it remedy
Ct image quality artifacts and it remedyYashawant Yadav
 
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...Fabien Léonard
 
Basics of ct lecture 1
Basics of ct  lecture 1Basics of ct  lecture 1
Basics of ct lecture 1Gamal Mahdaly
 
tomography tomography - Presentation Transcript 1. PRINCIPLE OFTOMOGRAPHY...
tomography tomography - Presentation Transcript     1. PRINCIPLE OFTOMOGRAPHY...tomography tomography - Presentation Transcript     1. PRINCIPLE OFTOMOGRAPHY...
tomography tomography - Presentation Transcript 1. PRINCIPLE OFTOMOGRAPHY...Prem Murti
 
5lab components of ct scanner
5lab components of ct scanner5lab components of ct scanner
5lab components of ct scannerKhaleeque Memon
 
CT ITS BASIC PHYSICS
CT ITS BASIC PHYSICSCT ITS BASIC PHYSICS
CT ITS BASIC PHYSICSDEEPAK
 
On Dose Reduction and View Number
On Dose Reduction and View NumberOn Dose Reduction and View Number
On Dose Reduction and View NumberKaijie Lu
 
CT Scan Image reconstruction
CT Scan Image reconstructionCT Scan Image reconstruction
CT Scan Image reconstructionGunjan Patel
 
CT Image Reconstruction- Avinesh Shrestha
CT Image Reconstruction- Avinesh ShresthaCT Image Reconstruction- Avinesh Shrestha
CT Image Reconstruction- Avinesh ShresthaAvinesh Shrestha
 
Computed tomography by jay&jay
Computed tomography by jay&jayComputed tomography by jay&jay
Computed tomography by jay&jayPatel Jay
 
Reduction of Azimuth Uncertainties in SAR Images Using Selective Restoration
Reduction of Azimuth Uncertainties in SAR Images Using Selective RestorationReduction of Azimuth Uncertainties in SAR Images Using Selective Restoration
Reduction of Azimuth Uncertainties in SAR Images Using Selective RestorationIJTET Journal
 
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...ijcsit
 
Computed Tomography and Spiral Computed Tomography
Computed Tomography and Spiral Computed Tomography Computed Tomography and Spiral Computed Tomography
Computed Tomography and Spiral Computed Tomography JAMES JACKY
 
Chapter 6 image quality in ct
Chapter 6 image quality in ct Chapter 6 image quality in ct
Chapter 6 image quality in ct Muntaser S.Ahmad
 

What's hot (20)

Basics of ct lecture 2
Basics of ct  lecture 2Basics of ct  lecture 2
Basics of ct lecture 2
 
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...
Total Variation-Based Reduction of Streak Artifacts, Ring Artifacts and Noise...
 
Ct image quality artifacts and it remedy
Ct image quality artifacts and it remedyCt image quality artifacts and it remedy
Ct image quality artifacts and it remedy
 
20320130406015
2032013040601520320130406015
20320130406015
 
X ray tomography
X ray tomographyX ray tomography
X ray tomography
 
Chapter 4 image display
Chapter 4 image displayChapter 4 image display
Chapter 4 image display
 
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...
An Innovative Use of X-ray Computed Tomography in Composite Impact Damage Cha...
 
Basics of ct lecture 1
Basics of ct  lecture 1Basics of ct  lecture 1
Basics of ct lecture 1
 
Minor_project
Minor_projectMinor_project
Minor_project
 
tomography tomography - Presentation Transcript 1. PRINCIPLE OFTOMOGRAPHY...
tomography tomography - Presentation Transcript     1. PRINCIPLE OFTOMOGRAPHY...tomography tomography - Presentation Transcript     1. PRINCIPLE OFTOMOGRAPHY...
tomography tomography - Presentation Transcript 1. PRINCIPLE OFTOMOGRAPHY...
 
5lab components of ct scanner
5lab components of ct scanner5lab components of ct scanner
5lab components of ct scanner
 
CT ITS BASIC PHYSICS
CT ITS BASIC PHYSICSCT ITS BASIC PHYSICS
CT ITS BASIC PHYSICS
 
On Dose Reduction and View Number
On Dose Reduction and View NumberOn Dose Reduction and View Number
On Dose Reduction and View Number
 
CT Scan Image reconstruction
CT Scan Image reconstructionCT Scan Image reconstruction
CT Scan Image reconstruction
 
CT Image Reconstruction- Avinesh Shrestha
CT Image Reconstruction- Avinesh ShresthaCT Image Reconstruction- Avinesh Shrestha
CT Image Reconstruction- Avinesh Shrestha
 
Computed tomography by jay&jay
Computed tomography by jay&jayComputed tomography by jay&jay
Computed tomography by jay&jay
 
Reduction of Azimuth Uncertainties in SAR Images Using Selective Restoration
Reduction of Azimuth Uncertainties in SAR Images Using Selective RestorationReduction of Azimuth Uncertainties in SAR Images Using Selective Restoration
Reduction of Azimuth Uncertainties in SAR Images Using Selective Restoration
 
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...
GPS Tracking System Coupled With Image Processing In Traffic Signals to Enhan...
 
Computed Tomography and Spiral Computed Tomography
Computed Tomography and Spiral Computed Tomography Computed Tomography and Spiral Computed Tomography
Computed Tomography and Spiral Computed Tomography
 
Chapter 6 image quality in ct
Chapter 6 image quality in ct Chapter 6 image quality in ct
Chapter 6 image quality in ct
 

Similar to Scattering optical tomography with discretized path integral

Physics of Multidetector CT Scan
Physics of Multidetector CT ScanPhysics of Multidetector CT Scan
Physics of Multidetector CT ScanDr Varun Bansal
 
Purkinje imaging for crystalline lens density measurement
Purkinje imaging for crystalline lens density measurementPurkinje imaging for crystalline lens density measurement
Purkinje imaging for crystalline lens density measurementPetteriTeikariPhD
 
computedtomography-170122041110.pdf
computedtomography-170122041110.pdfcomputedtomography-170122041110.pdf
computedtomography-170122041110.pdfVanshikaGarg76
 
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...Heechul Kim
 
A new hybrid method for the segmentation of the brain mris
A new hybrid method for the segmentation of the brain mrisA new hybrid method for the segmentation of the brain mris
A new hybrid method for the segmentation of the brain mrissipij
 
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
LCU RDG 402  PRINCIPLES OF COMPUTED TOMOGRAPHY.pptxLCU RDG 402  PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptxEmmanuelOluseyi1
 
MC0086 Internal Assignment (SMU)
MC0086 Internal Assignment (SMU)MC0086 Internal Assignment (SMU)
MC0086 Internal Assignment (SMU)Krishan Pareek
 
CT based Image Guided Radiotherapy - Physics & QA
CT based Image Guided Radiotherapy - Physics & QACT based Image Guided Radiotherapy - Physics & QA
CT based Image Guided Radiotherapy - Physics & QASambasivaselli R
 
Coins Poster_Ivan Felix_Final_1
Coins Poster_Ivan Felix_Final_1Coins Poster_Ivan Felix_Final_1
Coins Poster_Ivan Felix_Final_1Ivan Felix
 
Super resolution imaging using frequency wavelets and three dimensional
Super resolution imaging using frequency wavelets and three dimensionalSuper resolution imaging using frequency wavelets and three dimensional
Super resolution imaging using frequency wavelets and three dimensionalIAEME Publication
 
5 spect pet & hybrid imaging
5 spect pet & hybrid imaging5 spect pet & hybrid imaging
5 spect pet & hybrid imagingazmal sarker
 
Implementation of Radon Transformation for Electrical Impedance Tomography (E...
Implementation of Radon Transformation for Electrical Impedance Tomography (E...Implementation of Radon Transformation for Electrical Impedance Tomography (E...
Implementation of Radon Transformation for Electrical Impedance Tomography (E...ijistjournal
 
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORM
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORMANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORM
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORMijiert bestjournal
 
Flat panel ---nihms864608
Flat panel ---nihms864608Flat panel ---nihms864608
Flat panel ---nihms864608Maria Vergakh
 
Laser Physics Department -ALNeelain University 5 th
Laser Physics Department -ALNeelain University 5 thLaser Physics Department -ALNeelain University 5 th
Laser Physics Department -ALNeelain University 5 thGazy Khatmi
 

Similar to Scattering optical tomography with discretized path integral (20)

Physics of Multidetector CT Scan
Physics of Multidetector CT ScanPhysics of Multidetector CT Scan
Physics of Multidetector CT Scan
 
Ct Basics
Ct BasicsCt Basics
Ct Basics
 
Purkinje imaging for crystalline lens density measurement
Purkinje imaging for crystalline lens density measurementPurkinje imaging for crystalline lens density measurement
Purkinje imaging for crystalline lens density measurement
 
computedtomography-170122041110.pdf
computedtomography-170122041110.pdfcomputedtomography-170122041110.pdf
computedtomography-170122041110.pdf
 
Computed tomography
Computed tomographyComputed tomography
Computed tomography
 
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...
(ACCAS2016)Laser beam steering system for epiduroscopic laser treatment A fea...
 
A new hybrid method for the segmentation of the brain mris
A new hybrid method for the segmentation of the brain mrisA new hybrid method for the segmentation of the brain mris
A new hybrid method for the segmentation of the brain mris
 
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
LCU RDG 402  PRINCIPLES OF COMPUTED TOMOGRAPHY.pptxLCU RDG 402  PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
 
MC0086 Internal Assignment (SMU)
MC0086 Internal Assignment (SMU)MC0086 Internal Assignment (SMU)
MC0086 Internal Assignment (SMU)
 
CT based Image Guided Radiotherapy - Physics & QA
CT based Image Guided Radiotherapy - Physics & QACT based Image Guided Radiotherapy - Physics & QA
CT based Image Guided Radiotherapy - Physics & QA
 
Coins Poster_Ivan Felix_Final_1
Coins Poster_Ivan Felix_Final_1Coins Poster_Ivan Felix_Final_1
Coins Poster_Ivan Felix_Final_1
 
1.pdf
1.pdf1.pdf
1.pdf
 
Super resolution imaging using frequency wavelets and three dimensional
Super resolution imaging using frequency wavelets and three dimensionalSuper resolution imaging using frequency wavelets and three dimensional
Super resolution imaging using frequency wavelets and three dimensional
 
5 spect pet & hybrid imaging
5 spect pet & hybrid imaging5 spect pet & hybrid imaging
5 spect pet & hybrid imaging
 
Implementation of Radon Transformation for Electrical Impedance Tomography (E...
Implementation of Radon Transformation for Electrical Impedance Tomography (E...Implementation of Radon Transformation for Electrical Impedance Tomography (E...
Implementation of Radon Transformation for Electrical Impedance Tomography (E...
 
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORM
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORMANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORM
ANALYSIS OF BIOMEDICAL IMAGE USING WAVELET TRANSFORM
 
NIR imaging
NIR imaging NIR imaging
NIR imaging
 
Flat panel ---nihms864608
Flat panel ---nihms864608Flat panel ---nihms864608
Flat panel ---nihms864608
 
Laser Physics Department -ALNeelain University 5 th
Laser Physics Department -ALNeelain University 5 thLaser Physics Department -ALNeelain University 5 th
Laser Physics Department -ALNeelain University 5 th
 
D04432528
D04432528D04432528
D04432528
 

More from Toru Tamaki

論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...
論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...
論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...Toru Tamaki
 
論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding
論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding
論文紹介:Selective Structured State-Spaces for Long-Form Video UnderstandingToru Tamaki
 
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...Toru Tamaki
 
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...Toru Tamaki
 
論文紹介:Automated Classification of Model Errors on ImageNet
論文紹介:Automated Classification of Model Errors on ImageNet論文紹介:Automated Classification of Model Errors on ImageNet
論文紹介:Automated Classification of Model Errors on ImageNetToru Tamaki
 
論文紹介:Semantic segmentation using Vision Transformers: A survey
論文紹介:Semantic segmentation using Vision Transformers: A survey論文紹介:Semantic segmentation using Vision Transformers: A survey
論文紹介:Semantic segmentation using Vision Transformers: A surveyToru Tamaki
 
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex ScenesToru Tamaki
 
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...Toru Tamaki
 
論文紹介:Tracking Anything with Decoupled Video Segmentation
論文紹介:Tracking Anything with Decoupled Video Segmentation論文紹介:Tracking Anything with Decoupled Video Segmentation
論文紹介:Tracking Anything with Decoupled Video SegmentationToru Tamaki
 
論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope
論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope
論文紹介:Real-Time Evaluation in Online Continual Learning: A New HopeToru Tamaki
 
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...Toru Tamaki
 
論文紹介:Multitask Vision-Language Prompt Tuning
論文紹介:Multitask Vision-Language Prompt Tuning論文紹介:Multitask Vision-Language Prompt Tuning
論文紹介:Multitask Vision-Language Prompt TuningToru Tamaki
 
論文紹介:MovieCLIP: Visual Scene Recognition in Movies
論文紹介:MovieCLIP: Visual Scene Recognition in Movies論文紹介:MovieCLIP: Visual Scene Recognition in Movies
論文紹介:MovieCLIP: Visual Scene Recognition in MoviesToru Tamaki
 
論文紹介:Discovering Universal Geometry in Embeddings with ICA
論文紹介:Discovering Universal Geometry in Embeddings with ICA論文紹介:Discovering Universal Geometry in Embeddings with ICA
論文紹介:Discovering Universal Geometry in Embeddings with ICAToru Tamaki
 
論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement
論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement
論文紹介:Efficient Video Action Detection with Token Dropout and Context RefinementToru Tamaki
 
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...Toru Tamaki
 
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...Toru Tamaki
 
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusionToru Tamaki
 
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous DrivingToru Tamaki
 
論文紹介:Spatio-Temporal Action Detection Under Large Motion
論文紹介:Spatio-Temporal Action Detection Under Large Motion論文紹介:Spatio-Temporal Action Detection Under Large Motion
論文紹介:Spatio-Temporal Action Detection Under Large MotionToru Tamaki
 

More from Toru Tamaki (20)

論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...
論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...
論文紹介:Video-GroundingDINO: Towards Open-Vocabulary Spatio-Temporal Video Groun...
 
論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding
論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding
論文紹介:Selective Structured State-Spaces for Long-Form Video Understanding
 
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...
論文紹介:Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Gene...
 
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...
論文紹介:Content-Aware Token Sharing for Efficient Semantic Segmentation With Vis...
 
論文紹介:Automated Classification of Model Errors on ImageNet
論文紹介:Automated Classification of Model Errors on ImageNet論文紹介:Automated Classification of Model Errors on ImageNet
論文紹介:Automated Classification of Model Errors on ImageNet
 
論文紹介:Semantic segmentation using Vision Transformers: A survey
論文紹介:Semantic segmentation using Vision Transformers: A survey論文紹介:Semantic segmentation using Vision Transformers: A survey
論文紹介:Semantic segmentation using Vision Transformers: A survey
 
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
論文紹介:MOSE: A New Dataset for Video Object Segmentation in Complex Scenes
 
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...
論文紹介:MoLo: Motion-Augmented Long-Short Contrastive Learning for Few-Shot Acti...
 
論文紹介:Tracking Anything with Decoupled Video Segmentation
論文紹介:Tracking Anything with Decoupled Video Segmentation論文紹介:Tracking Anything with Decoupled Video Segmentation
論文紹介:Tracking Anything with Decoupled Video Segmentation
 
論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope
論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope
論文紹介:Real-Time Evaluation in Online Continual Learning: A New Hope
 
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...
論文紹介:PointNet: Deep Learning on Point Sets for 3D Classification and Segmenta...
 
論文紹介:Multitask Vision-Language Prompt Tuning
論文紹介:Multitask Vision-Language Prompt Tuning論文紹介:Multitask Vision-Language Prompt Tuning
論文紹介:Multitask Vision-Language Prompt Tuning
 
論文紹介:MovieCLIP: Visual Scene Recognition in Movies
論文紹介:MovieCLIP: Visual Scene Recognition in Movies論文紹介:MovieCLIP: Visual Scene Recognition in Movies
論文紹介:MovieCLIP: Visual Scene Recognition in Movies
 
論文紹介:Discovering Universal Geometry in Embeddings with ICA
論文紹介:Discovering Universal Geometry in Embeddings with ICA論文紹介:Discovering Universal Geometry in Embeddings with ICA
論文紹介:Discovering Universal Geometry in Embeddings with ICA
 
論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement
論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement
論文紹介:Efficient Video Action Detection with Token Dropout and Context Refinement
 
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...
論文紹介:Learning from Noisy Pseudo Labels for Semi-Supervised Temporal Action Lo...
 
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...
論文紹介:MeMViT: Memory-Augmented Multiscale Vision Transformer for Efficient Lon...
 
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion
論文紹介:Revealing the unseen: Benchmarking video action recognition under occlusion
 
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
論文紹介:Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
 
論文紹介:Spatio-Temporal Action Detection Under Large Motion
論文紹介:Spatio-Temporal Action Detection Under Large Motion論文紹介:Spatio-Temporal Action Detection Under Large Motion
論文紹介:Spatio-Temporal Action Detection Under Large Motion
 

Recently uploaded

How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfOrbitshub
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdfSandro Moreira
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusZilliz
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...
Apidays New York 2024 - The Good, the Bad and the Governed by David O'Neill, ...apidays
 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
 
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...
Apidays New York 2024 - APIs in 2030: The Risk of Technological Sleepwalk by ...apidays
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FMESafe Software
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Orbitshub
 
DEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamDEV meet-up UiPath Document Understanding May 7 2024 Amsterdam
DEV meet-up UiPath Document Understanding May 7 2024 AmsterdamUiPathCommunity
 
Boost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfBoost Fertility New Invention Ups Success Rates.pdf
Boost Fertility New Invention Ups Success Rates.pdfsudhanshuwaghmare1
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodJuan lago vázquez
 
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, AdobeApidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobe
Apidays New York 2024 - Scaling API-first by Ian Reasor and Radu Cotescu, Adobeapidays
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
Architecting Cloud Native Applications
Architecting Cloud Native ApplicationsArchitecting Cloud Native Applications
Architecting Cloud Native ApplicationsWSO2
 

Recently uploaded (20)

How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Exploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with MilvusExploring Multimodal Embeddings with Milvus
Exploring Multimodal Embeddings with Milvus
 
MINDCTI Revenue Release Quarter One 2024

Scattering optical tomography with discretized path integral

  • 1. Scattering optical tomography with discretized path integral Toru Tamaki (Hiroshima University) Joint work with Bingzhi Yuan, Yasuhiro Mukaigawa, and many others
  • 2. Computed Tomography Multiplexing Radiography for Ultra-Fast Computed Tomography J. Zhang, G. Yang, Y. Lee, S. Chang, J. Lu, O. Zhou University of North Carolina at Chapel Hill https://www.aapm.org/meetings/07AM/VirtualPressRoom/LayLanguage/IIMultiplexing.asp http://www.sciencekids.co.nz/pictures/humanbody/braintomography.html
  • 4. Diffuse Optical Tomography (DOT) http://www.mext.go.jp/b_menu/shingi/gijyutu/gijyutu3/toushin/07091111/008.htm Input: a light pulse; detectors measure the time-resolved outgoing light. Lee, Kijoon. “Optical Mammography: Diffuse Optical Imaging of Breast Cancer.” World Journal of Clinical Oncology 2.1 (2011): 64–72. PMC. Web. 8 June 2015. http://dx.doi.org/10.5306/wjco.v2.i1.64
  • 5. Our research target. Comparison of X-ray CT / our target / DOT: scattering (no / yes / so much); method (Radon transform / ours / PDE by FEM); only X-ray CT uses ionizing radiation. http://www.sciencekids.co.nz/pictures/humanbody/braintomography.html Lee, Kijoon. “Optical Mammography: Diffuse Optical Imaging of Breast Cancer.” World Journal of Clinical Oncology 2.1 (2011): 64–72. PMC. Web. 8 June 2015. http://dx.doi.org/10.5306/wjco.v2.i1.64 http://photosozai-database.com/modules/photo/photo.php?lid=664
  • 7. Volumetric scattering in CG 1987 1993 1994 Tomoyuki Nishita, Yasuhiro Miyawaki, and Eihachiro Nakamae. 1987. A shading model for atmospheric scattering considering luminous intensity distribution of light sources. SIGGRAPH Comput. Graph. 21, 4 (August 1987), 303-310. DOI=10.1145/37402.37437 http://doi.acm.org/10.1145/37402.37437 Tomoyuki Nishita, Takao Sirai, Katsumi Tadamura, and Eihachiro Nakamae. 1993. Display of the earth taking into account atmospheric scattering. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques (SIGGRAPH '93). ACM, New York, NY, USA, 175-182. DOI=10.1145/166117.166140 http://doi.acm.org/10.1145/166117.166140 Tomoyuki Nishita and Eihachiro Nakamae. 1994. Method of displaying optical effects within water using accumulation buffer. In Proceedings of the 21st annual conference on Computer graphics and interactive techniques (SIGGRAPH '94). ACM, New York, NY, USA, 373-379. DOI=10.1145/192161.192261 http://doi.acm.org/10.1145/192161.192261 Copyright © 2011 by the Association for Computing Machinery, Inc. (ACM).
  • 8. Tomoyuki Nishita, Yoshinori Dobashi, and Eihachiro Nakamae. 1996. Display of clouds taking into account multiple anisotropic scattering and sky light. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH '96). ACM, New York, NY, USA, 379-386. DOI=10.1145/237170.237277 http://doi.acm.org/10.1145/237170.237277 Yonghao Yue, Kei Iwasaki, Bing-Yu Chen, Yoshinori Dobashi, and Tomoyuki Nishita. 2010. Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media. ACM Trans. Graph. 29, 6, Article 177 (December 2010), 8 pages. In ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA '10). DOI=10.1145/1882261.1866199 http://doi.acm.org/10.1145/1882261.1866199 Volumetric scattering in CG 1996 2010 Copyright © 2011 by the Association for Computing Machinery, Inc. (ACM).
  • 9. Computer vision approach to the inverse problem. Generation (Computer Graphics): model parameters → image generated by the forward model. Analysis (Computer Vision): observed image → model parameters, obtained by minimizing the squared (ℓ2) difference between the observations and the model prediction.
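The analysis-by-synthesis loop on this slide can be sketched with a toy forward model. This is a minimal illustration, not the paper's model: the forward model here is plain exponential attenuation, and all coefficient and path-length values are hypothetical.

```python
import numpy as np

def forward(sigma, d):
    """Toy forward model: predicted intensities I = exp(-sigma * d)."""
    return np.exp(-sigma * d)

rng = np.random.default_rng(0)
d = np.linspace(0.5, 3.0, 20)            # hypothetical path lengths through the medium
sigma_true = 0.4                         # hypothetical extinction coefficient
obs = forward(sigma_true, d) + rng.normal(0, 1e-3, d.size)  # noisy "observations"

# Analysis: recover the parameter by minimizing the squared residual
# between observation and model prediction (grid search for clarity).
candidates = np.linspace(0.0, 1.0, 1001)
errors = [np.sum((obs - forward(s, d)) ** 2) for s in candidates]
sigma_hat = candidates[int(np.argmin(errors))]
print(sigma_hat)   # close to 0.4
```

In the paper the parameter vector is the whole voxel grid of extinction coefficients rather than a single scalar, so a gradient-based constrained solver replaces the grid search.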
  • 10. Vision / CG approaches. Gregson+ 2012 TOG (University of British Columbia): stochastic tomography for 3D imaging of mixing fluids, captured with 5–16 strobe-synchronized consumer cameras; reconstructs an unsteady two-phase flow over time. Hasinoff and Kutulakos 2007 PAMI: flame reconstruction ("torch" dataset) from two input views; no scattering. Hawkins+ 2005 TOG: laser-sheet scanning of smoke volumes (200 slices per scan at 5000 fps, 25 Hz volumes, ~30×30×30 cm working volume); smoke only, and assumes extinction and multiple scattering within the volume are negligible, so densities are recovered only up to an unknown scale factor. Dobashi+ 2012 TOG: an inverse-problem approach for automatically adjusting the parameters for rendering clouds using photographs. Narasimhan+ 2006 SIGGRAPH. Gkioulekas+ 2013 SIGGRAPH Asia: inverse volume rendering; uniform parameter estimation.
  • 13. Types of phase function: forward scattering, backward scattering, isotropic scattering (input direction → output distribution).
  • 16. Light in a participating medium: attenuation (absorption + out-scattering), in-scattering, multiple scattering, and emission (usually ignored).
  • 19. Attenuation = absorption + out-scattering. The extinction coefficient governs exponential attenuation along the path.
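The exponential attenuation above can be sketched numerically. A minimal example, with illustrative (hypothetical) coefficient values:

```python
import math

# Extinction = absorption + out-scattering (Beer-Lambert attenuation).
# The coefficient values below are illustrative, not from the paper.
sigma_a = 0.03   # absorption coefficient [1/mm]
sigma_s = 0.07   # (out-)scattering coefficient [1/mm]
sigma_t = sigma_a + sigma_s   # extinction coefficient [1/mm]

def transmittance(distance_mm):
    """Fraction of light surviving after travelling distance_mm."""
    return math.exp(-sigma_t * distance_mm)

# Attenuation is multiplicative along the path: T(d1 + d2) = T(d1) * T(d2)
assert abs(transmittance(5.0) * transmittance(3.0) - transmittance(8.0)) < 1e-12
print(transmittance(10.0))  # exp(-1) ~ 0.368 for sigma_t = 0.1
```

The multiplicative property is what later lets the per-segment attenuations along a path be collected into a single exponent.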
  • 23. DOT with the Diffuse Approximation: in-scattering term, attenuation term, time-resolved observation; the PDE is solved by FEM. Main assumption: the Diffuse Approximation (first-order spherical harmonics approximation), http://commons.wikimedia.org/wiki/File:Harmoniki.png. Problem: no forward scattering → blurred results.
  • 25. Light Paths: from light sources to cameras Light source Camera / eye
  • 26. Light Paths: from light sources to cameras Light source Camera / eye
  • 27. Path (Ray) tracing: from cameras to light source Light source Camera / eye Paths
  • 28. Path integral approach, introduced by Eric Veach (1997 Ph.D. thesis) http://graphics.stanford.edu/papers/veach_thesis/ The value of pixel j is an integral over all possible paths of length k, taken over the path space of paths of any length, with a path measure given by the product of the measures at each vertex. Evaluated by Monte Carlo integration.
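The Monte Carlo integration mentioned here can be illustrated on a simple stand-in integral (not Veach's path-space integral itself): average the integrand over uniform random samples and the mean converges to the integral.

```python
import math
import random

def mc_estimate(f, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1]:
    the sample mean of f at n uniform random points."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# Stand-in integrand: estimate  int_0^1 e^{-x} dx = 1 - e^{-1} ~ 0.6321
estimate = mc_estimate(lambda x: math.exp(-x), 100_000)
exact = 1.0 - math.exp(-1.0)
print(estimate, exact)
```

In rendering, the samples are full light paths instead of scalars, and each sample is weighted by the path's throughput divided by its sampling probability, but the averaging principle is the same.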
  • 29. Path integral for participating media, introduced by Mark Pauly (1999 Master's thesis) http://graphics.stanford.edu/~mapauly/publications.html For the value of pixel j, the measure at each vertex is an area measure on surfaces and a voxel (volume) measure inside the medium. Metropolis Light Transport for Participating Media, Mark Pauly, Thomas Kollig, Alexander Keller, Eurographics Workshop on Rendering, 2000.
  • 30. Details of paths in participating media. Anders Wang Kristensen, Efficient Unbiased Rendering using Enlightened Local Path Sampling, Ph.D. thesis, 2010. Per-vertex factors over all paths of length k (with the corresponding measure): attenuation; geometric term (with a cosine term on surfaces); BRDF on surfaces / scattering coefficient and phase function inside the medium; camera response function. The attenuation (extinction coefficients) is what is to be estimated!
  • 31. PROPOSED METHOD TO THE INVERSE PROBLEM
  • 32. Discretization. How? Let $\sigma_t[b]$ be the extinction coefficient at voxel $b$ and $d_{x_i,x_{i+1}}[b]$ the length that the path segment $x_i \to x_{i+1}$ passes through voxel $b$:
$$\int_0^1 \sigma_t((1-s)x_i + s x_{i+1})\,ds \;\approx\; \sum_b \sigma_t[b]\, d_{x_i,x_{i+1}}[b] \;=\; \boldsymbol{\sigma}_t \cdot \mathbf{d}_{x_i,x_{i+1}}$$
Vectorize: a vector of coefficients and a sparse vector of per-voxel path lengths; the line integral becomes an inner product of sparse vectors.
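A minimal sketch of this voxelized line integral, assuming 1D unit voxels and illustrative coefficient values (the paper works on a 2D/3D voxel grid, where the per-voxel lengths come from a ray-grid traversal):

```python
import math

def traversal_lengths(a, b, num_voxels):
    """Length of overlap of segment [a, b] with each unit voxel [i, i+1).
    This is the vector d: d[i] is the distance the segment spends in voxel i."""
    return [max(0.0, min(b, i + 1) - max(a, i)) for i in range(num_voxels)]

# Illustrative per-voxel extinction coefficients (one dense voxel in the middle)
sigma_t = [0.05, 0.05, 0.20, 0.05, 0.05]

# Segment from x = 0.5 to x = 4.5 through the 5-voxel grid
d = traversal_lengths(0.5, 4.5, 5)

# The line integral of sigma_t along the segment is the inner product sigma_t . d
line_integral = sum(s, l) if False else sum(s * l for s, l in zip(sigma_t, d))
attenuation = math.exp(-line_integral)
print(d, line_integral, attenuation)
```

Here `d` comes out as `[0.5, 1.0, 1.0, 1.0, 0.5]` and the inner product equals 0.35; in practice `d` is stored sparsely because each segment touches only a few voxels of the whole grid.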
  • 33. Discretization. How? Before (continuous line integrals):
$$I = \sum_{k=1}^{\infty} \sum_{\Omega_k} L_e(x_0, x_1)\, \frac{e^{-\int_0^1 \sigma_t((1-s)x_0 + s x_1)\,ds}}{\|x_0 - x_1\|^2} \left[ \prod_{i=1}^{k-1} \sigma_s(x_i) f_p(x_{i-1}, x_i, x_{i+1})\, \frac{e^{-\int_0^1 \sigma_t((1-s)x_i + s x_{i+1})\,ds}}{\|x_i - x_{i+1}\|^2} \right] W_e(x_{k-1}, x_k)\, d\mu_k$$
After, replacing each line integral by the inner product $\int_0^1 \sigma_t((1-s)x_i + s x_{i+1})\,ds \approx \sum_b \sigma_t[b]\, d_{x_i,x_{i+1}}[b] = \boldsymbol{\sigma}_t \cdot \mathbf{d}_{x_i,x_{i+1}}$:
$$I = \sum_{k=1}^{\infty} \sum_{\Omega_k} L_e(x_0, x_1)\, \frac{e^{-\boldsymbol{\sigma}_t \cdot \mathbf{d}_{x_0,x_1}}}{\|x_0 - x_1\|^2} \left[ \prod_{i=1}^{k-1} \sigma_s(x_i) f_p(x_{i-1}, x_i, x_{i+1})\, \frac{e^{-\boldsymbol{\sigma}_t \cdot \mathbf{d}_{x_i,x_{i+1}}}}{\|x_i - x_{i+1}\|^2} \right] W_e(x_{k-1}, x_k)\, d\mu_k$$
  • 34. Addition of vectorized paths. Collecting the per-segment exponents of one path into a single sum:
$$I = \sum_{k=1}^{\infty} L_e(x_{-1}, x_0) \sum_{\Omega_k} \left[ \prod_{i=1}^{k-1} \sigma_s(x_i) f_p(x_{i-1}, x_i, x_{i+1}) \right] \left[ \frac{e^{-\boldsymbol{\sigma}_t \cdot \left( \sum_{i=0}^{k-1} \mathbf{d}_{x_i,x_{i+1}} \right)}}{\prod_{i=0}^{k-1} \|x_i - x_{i+1}\|^2 / dV_i} \right] dA_{-1}\, dV_k\, dA_{k+1}$$
With everything except the extinction coefficients assumed to be known ($\sigma_s$, $f_p$, geometry) and collected into $H_k$, and the summed length vector written $\mathbf{D}_k$:
$$I = \sum_{k=1}^{\infty} L_e(x_{-1}, x_0) \sum_{\Omega_k} H_k\, e^{-\boldsymbol{\sigma}_t \cdot \mathbf{D}_k} \;=\; L_e(x_{-1}, x_0) \sum_{\Omega} H_k\, e^{-\boldsymbol{\sigma}_t \cdot \mathbf{D}_k}$$
  • 35. The forward model. A path $x_0, x_1, \dots, x_{k-1}, x_k$ runs from source location $s$ to observation location $t$. The observation at $(s, t)$ is
$$I_{s,t} = L_e \sum_{\Omega} H_k(s,t)\, e^{-\boldsymbol{\sigma}_t \cdot \mathbf{D}_k(s,t)}$$
  • 36. The inverse problem. Least squares between observation and model, subject to $\sigma_t \ge 0$ (coefficients must be positive); this cost function with constraints is solved by the log-barrier interior point method. The constrained problem is
$$\min_{\sigma_t} f_0(\sigma_t) \quad \text{subject to} \quad f_i(\sigma_t) \le 0, \; i = 1, \dots, m, \tag{37}$$
where $\sigma_t \in \Re^{N \times M}$ is a real vector and $f_0, \dots, f_m : \Re^{N \times M} \to \Re$ are twice continuously differentiable. The idea is to approximate it as an unconstrained problem. We can first rewrite problem (37) as
$$\min_{\sigma_t} f_0(\sigma_t) + \sum_{i=1}^{m} I(f_i(\sigma_t)), \tag{38}$$
where $I : \Re \to \Re$ is an indicator function which keeps the solution inside the feasible region: $I(f) = 0$ for $f \le 0$ and $I(f) = \infty$ for $f > 0$. Problem (38) now has no inequality constraints, but it is not differentiable. The barrier method is an interior point method which introduces a log-barrier to approximate the indicator function $I$:
$$\hat{I}(f) = -(1/t)\log(-f),$$
where $t > 0$ is a parameter to adjust the accuracy of the approximation: $\hat{I}(f)$ goes to infinity rapidly as $f$ approaches 0, while it is close to 0 when $f$ is far below 0. Since it is differentiable, we solve
$$\min_{\sigma_t} f_0(\sigma_t) - \sum_{i=1}^{m} (1/t)\log(-f_i(\sigma_t)).$$
Algorithm: from an initial value $\sigma_t^0$, repeat: solve the log-barrier problem by quasi-Newton, then update $t$ (larger).
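A minimal 1D sketch of the log-barrier idea, not the authors' solver: the toy objective, the Newton iteration (instead of quasi-Newton), and the barrier schedule are all assumptions for illustration. We minimize $(x+1)^2$ subject to $x \ge 0$, whose constrained optimum is $x = 0$; the barrier term $-(1/t)\log(x)$ keeps iterates strictly feasible, and increasing $t$ tightens the approximation.

```python
import math

def barrier_minimize(t, x=1.0, iters=100):
    """Damped Newton on f0(x) - (1/t) log(x) with f0(x) = (x + 1)^2,
    keeping x strictly inside the feasible region x > 0."""
    for _ in range(iters):
        g = 2.0 * (x + 1.0) - 1.0 / (t * x)   # gradient of barrier objective
        h = 2.0 + 1.0 / (t * x * x)           # Hessian (always positive here)
        step = g / h
        while x - step <= 0.0:                # damp the step to stay interior
            step *= 0.5
        x -= step
    return x

# Outer loop of the interior point method: solve, then enlarge t,
# warm-starting each solve from the previous solution.
x = 1.0
for t in (1.0, 10.0, 100.0, 1000.0):
    x = barrier_minimize(t, x)
print(x)   # approaches the constrained optimum 0 as t grows
```

The paper's problem has the same structure with the voxel grid of extinction coefficients as the variable and positivity constraints per voxel, with the inner solve done by quasi-Newton.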
  • 38. Proposed 2D layered model • Light scatters: – Layer by layer – At voxel centers – Forward only • Subject to a simplified phase function – With uniform/known scattering coefficients
  • 40. Different directions. Numerical simulation of a simple material: a two-dimensional material divided into 10×10 voxels (Figure 4(a)), with almost homogeneous extinction coefficients of 0.05 except a few voxels with much higher coefficients of 0.2, meaning those voxels absorb much more light than the others. A light source lights each of m positions on the top layer, and a detector observes the outgoing light at each of m positions on the bottom layer; the variance σ² of the Gaussian function (1) for scattering from one layer to the next is set to 2, and the upper bound u in Eq. (8) is set to 1. The homogeneous part (0.05, light gray) is estimated reasonably well, with estimated values between 0.04 and 0.06; in contrast, the dense part (0.2, dark gray) is not estimated well and looks blurred vertically, because the light paths go vertically from top to bottom inside the material. Rotating the setting by 90 degrees (light source on the left, detector on the right) and applying the proposed method blurs the estimate horizontally instead (Figure 4(c)). A simple trick to integrate the two results is to take their average (Figure 4(d)). Figure 4: simulation results for a material of size 10×10; (a) ground truth of the distribution of extinction coefficients, (b) estimate with source and detector above and below the medium, (c) estimate with source and detector on the left and right sides, (d) average of (b) and (c). Averaging vertical and horizontal estimates. Toru Tamaki, Bingzhi Yuan, Bisser Raytchev, Kazufumi Kaneda, Yasuhiro Mukaigawa, "Multiple-scattering Optical Tomography with Layered Material," The 9th International Conference on Signal-Image Technology and Internet-Based Systems (SITIS 2013), December 2-5, 2013, Kyoto, Japan.
  • 41. Joint optimization. Bingzhi Yuan, Toru Tamaki, Takahiro Kushida, Bisser Raytchev, Kazufumi Kaneda, Yasuhiro Mukaigawa and Hiroyuki Kubo, "Layered optical tomography of multiple scattering media with combined constraint optimization", in Proc. of FCV 2015: 21st Korea-Japan joint Workshop on Frontiers of Computer Vision, January 28-30, 2015, Shinan Beach Hotel, Mokpo, KOREA.
  • 42. Results of numerical simulation. Ground truth and estimated results for media (a)–(e). • 20×20 grid of 20 mm × 20 mm • Extinction coefficients: between 1.05 and 1.55 [mm⁻¹] • Scattering coefficients (known): 1 [mm⁻¹] • Phase function (known): a simplified approximation • Fixed 742 paths for each (s, t) (exhaustive with th.) • Joint optimization with the interior point method
  • 44. Comparison with DOT. Ground truth, ours, DOT (Gauss-Newton), and DOT (Primal-Dual interior point) for media (a)–(e). • 24×24 grid of 24 mm × 24 mm • 24×24×2 = 1152 triangle meshes for FEM • 16 light sources and 16 detectors around the square • DOT computed with EIDORS (Electrical Impedance Tomography and Diffuse Optical Tomography Reconstruction Software), eidors3d.sourceforge.net. Fig 11: numerical simulation results for a grid of size 24×24 with different inverse solvers; darker shades of gray represent larger values.
  • 45. Accuracy and cost.

Table 2: RMSEs and computation time of the numerical simulations for five different media (a)–(e) with grid size of 24×24, for our proposed method and DOT with two solvers. Numbers in brackets are relative errors of RMSE to the background extinction coefficient value (1.05).

RMSE:
                   (a)         (b)        (c)        (d)        (e)
Ours (σ² = 0.4)    0.007662    0.01244    0.026602   0.021442   0.051152
                   (0.730%)    (1.18%)    (2.53%)    (2.04%)    (4.87%)
DOT (GN)           0.053037    0.060597   0.7605     0.059534   0.0855
                   (5.05%)     (5.77%)    (7.53%)    (5.67%)    (8.14%)
DOT (PD)           0.052466    0.0626     0.081081   0.066042   0.080798
                   (5.25%)     (5.97%)    (8.11%)    (6.60%)    (8.08%)

Computation time [s]:
Ours (σ² = 0.4)    257         217        382        306        504
DOT (GN)           0.397       0.390      0.407      0.404      0.453
DOT (PD)           1.11        1.09       1.14       1.08       1.15

Fig 7: Numerical simulation results for a grid of size 20×20. Darker shades of gray represent larger values (more light is absorbed at the voxel); the bars on the side show extinction coefficient values in grayscale. The first row shows the ground truth of the five different media (a)–(e) used for the simulation; the second and third rows show the estimated results for σ² = 0.2 and σ² = 0.4, respectively, of the Gaussian phase function. The media have an almost homogeneous background of 1.05 with a few voxels of much higher coefficient 1.2 (dark gray), which absorb much more light than the other voxels. The coefficients are estimated reasonably well, and the root-mean-squared error (RMSE) is small, with a relative error of 0.0075/1.05 = 0.7% to the background coefficient value. Our implementation: MATLAB.
  • 47. Devices for Scattering Tomography. Y. Ishii, T. Arai, Y. Mukaigawa, J. Tagawa, Y. Yagi, "Scattering Tomography by Monte Carlo Voting", Proc. IAPR International Conference on Machine Vision Applications (MVA2013), May 2013. Motivating example: a needle inside bread is difficult to detect because of the influence of light scattering.
  • 48. Devices for Scattering Tomography. Ryuichi Akashi, Hajime Nagahara, Yasuhiro Mukaigawa and Rin-ichiro Taniguchi, "Scattering Tomography Using Ellipsoidal Mirror", FCV2015. A projector-camera system with an ellipsoidal mirror: light scattered by and passing through the target object is reflected back and directed toward the camera lens at the other focal point, enabling wrap-around viewing rather than single-direction viewing. Calibration uses a slit board and a projected grid pattern, with a screen and beam splitter during alignment. Real object: a cylindrical scattering object made of silicone resin, 27 mm in diameter, with metal wires as obstacles; the estimated distribution shows the obstacle as a darkened filament inside.
  • 49. Summary • Optical tomography for scattering media • Observation model: path integral • Least-square approach with constraints • Solver: interior point method • Experiments: numerical simulation • Future work: a lot