1. Theory of Bouguet's MatLab Camera Calibration Toolbox
Yuji Oyamada
1 HVRL, University
2 Chair for Computer Aided Medical Procedure (CAMP)
Technische Universität München
June 26, 2012
2. Introduction Functions Theory
MatLab Camera Calibration Toolbox
• Calibration using a planar calibration object.
• Point correspondences require manually clicking the object's
corners.
• Core calibration part is fully automatic.
• C implementation is included in OpenCV.
• Web
3. Functions
• Extract grid corners: Point correspondence
• Calibration: Core calibration
• Show Extrinsic: Visualization of estimated extrinsic
parameters
• Reproject on images: Visualization of reprojection error
• Analyse error:
• Recomp. corners: Recompute the corners using estimated
parameters
4. Extract grid corners
Input:
• n_sq_{x,y}: Number of squares along the {x,y} axis
• d{X,Y}: Size of a square along X/Y, in meters.
• I: Input image
Output:
• {x_i}, i = 1, …, N: Sets of detected 2D points.
• {X_i}, i = 1, …, N: Sets of known 3D points.

where x_i and X_i each have M_i columns:

x_i = [ x_i(1) ⋯ x_i(j) ⋯ x_i(M_i)
        y_i(1) ⋯ y_i(j) ⋯ y_i(M_i) ]

X_i = [ X_i(1) ⋯ X_i(j) ⋯ X_i(M_i)
        Y_i(1) ⋯ Y_i(j) ⋯ Y_i(M_i)
        Z_i(1) ⋯ Z_i(j) ⋯ Z_i(M_i) ]
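The known 3D points {X_i} are simply the grid corners of the planar object, with Z = 0 everywhere. A minimal sketch of how such a point set could be generated from the inputs above (Python/NumPy; `make_grid_points` is an illustrative helper, not a toolbox function):

```python
import numpy as np

def make_grid_points(n_sq_x, n_sq_y, dX, dY):
    """Known 3D corner coordinates of a planar grid with
    n_sq_x x n_sq_y squares of size dX x dY (meters); Z = 0."""
    xs = np.arange(n_sq_x + 1) * dX
    ys = np.arange(n_sq_y + 1) * dY
    X, Y = np.meshgrid(xs, ys)
    # Shape (3, M) with M = (n_sq_x + 1) * (n_sq_y + 1), matching X_i above.
    return np.stack([X.ravel(), Y.ravel(), np.zeros(X.size)])
```

For an 8 × 6 board of 30 mm squares this produces a 3 × 63 array, matching the X_i layout above.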
5. Calibration
Input:
• {x_i}: Sets of detected 2D points.
• {X_i}: Sets of known 3D points.
Output:
• Intrinsic parameters
• Extrinsic parameters
Method:
1. Initialize intrinsic and extrinsic params.
2. Non-linear optimization to refine all parameters.
6. Camera parameters
Intrinsic parameters:
• fc ∈ R2 : Camera focal length
• cc ∈ R2 : Principal point coordinates
• alpha_c ∈ R : Skew coefficient
• kc ∈ R5 : Distortion coefficients
• KK ∈ R3×3 : The camera matrix (containing fc, cc, alpha_c)
Extrinsic parameters for the i-th image:
• omc_i ∈ R3 : Rotation vector (angle-axis representation)
• Tc_i ∈ R3 : Translation vector
• Rc_i ∈ R3×3 : Rotation matrix, computed as Rc_i = rodrigues(omc_i)
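The rodrigues conversion can be sketched directly from the Rodrigues formula R = I + sin(θ)K + (1 − cos(θ))K², where K is the skew-symmetric matrix of the unit rotation axis. This is a NumPy sketch of the vector-to-matrix direction only (the toolbox's rodrigues.m also returns Jacobians, which are omitted here):

```python
import numpy as np

def rodrigues(om):
    """Rotation vector (axis * angle) -> 3x3 rotation matrix
    via R = I + sin(t) K + (1 - cos(t)) K^2."""
    om = np.asarray(om, dtype=float)
    theta = np.linalg.norm(om)
    if theta < 1e-12:
        return np.eye(3)            # zero rotation -> identity
    k = om / theta                  # unit axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
```

For example, a rotation of π/2 about the z-axis maps the x-axis onto the y-axis.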
7. Initialization 1/2: Intrinsic parameters
Input
• {x_i}: Sets of detected 2D points.
• {X_i}: Sets of known 3D points.
Output
• Initialized intrinsic parameters
• Initialized extrinsic parameters
8. Initialization 1/2: Intrinsic parameters
1. Roughly compute intrinsic params.
2. Refine intrinsic params.
3. Compute extrinsic params.
Step 1:
• fc: Computed from all the vanishing points.
• cc: Center of image.
• alpha c = 0.
• kc = 0.
10. Initialization 2/2: Extrinsic parameters
Step 3: Compute each view's extrinsic parameters from the estimated A and
the pre-computed homography H_k as

r_k1 = λ_k A⁻¹ h_k1
r_k2 = λ_k A⁻¹ h_k2
r_k3 = r_k1 × r_k2
t_k = λ_k A⁻¹ h_k3

where

λ_k = 1 / ||A⁻¹ h_k1|| = 1 / ||A⁻¹ h_k2||
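Assuming a noise-free camera matrix A and homography H_k, the recipe above is a few lines of linear algebra. A NumPy sketch (the function name is mine, not the toolbox's):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover R = [r1 r2 r3] and t for one view from the camera
    matrix A and the plane homography H, as on the slide above."""
    Ainv = np.linalg.inv(A)
    lam = 1.0 / np.linalg.norm(Ainv @ H[:, 0])   # lambda_k
    r1 = lam * Ainv @ H[:, 0]
    r2 = lam * Ainv @ H[:, 1]
    r3 = np.cross(r1, r2)                        # third column from the first two
    t = lam * Ainv @ H[:, 2]
    return np.column_stack([r1, r2, r3]), t
```

With an exact H = A [r1 r2 t] this recovers R and t exactly; with noisy data, the resulting matrix is only approximately a rotation and is re-orthogonalized in practice.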
11. Non-linear optimization
Solves the following minimization problem:

p̂ = arg min_p ||x − f(p)||₂²

where p = {fc, cc, alpha_c, kc, {omc_i}, {Tc_i}} and the function
f projects the known 3D points X onto the 2D image plane with the
argument p.
A sparse Levenberg-Marquardt algorithm iteratively updates p̂,
starting from an initial estimate p̂₀, as

while (||x − f(p̂_{i−1})||₂² > ε)
    p̂_i = p̂_{i−1} + Δp_i;
    i = i + 1;
12. Image projection
Given
• xp = (x_p, y_p, 1) : Pixel coordinates of a point on the image plane.
• xd = (x_d, y_d, 1) : Normalized coordinates distorted by the lens.
• xn = (x_n, y_n, 1) : Normalized pinhole image projection.
• k ∈ R5 : Lens distortion parameters.
• KK ∈ R3×3 : Camera calibration matrix,

KK = [ fc(1)   alpha_c·fc(1)   cc(1)
       0       fc(2)           cc(2)
       0       0               1     ]

xp = KK xd = KK (cdist·xn + delta_x)
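As a concrete numeric check of this mapping, here is a sketch with made-up parameter values (fc, cc, alpha_c below are arbitrary, chosen only for illustration):

```python
import numpy as np

# Hypothetical intrinsic values, for illustration only.
fc = np.array([800.0, 820.0])      # focal length (pixels)
cc = np.array([320.0, 240.0])      # principal point (pixels)
alpha_c = 0.001                    # skew coefficient

# KK as defined on the slide above (skew entry alpha_c * fc(1)).
KK = np.array([[fc[0], alpha_c * fc[0], cc[0]],
               [0.0,   fc[1],           cc[1]],
               [0.0,   0.0,             1.0]])

# A distorted normalized coordinate, mapped to pixel coordinates.
xd = np.array([0.1, -0.05, 1.0])
xp = KK @ xd                       # homogeneous pixel coordinates
```

Here xp = (800·0.1 + 0.8·(−0.05) + 320, 820·(−0.05) + 240, 1), i.e. the skew couples y into the x pixel coordinate.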
13. Lens distortion
Given
• xn = (x, y) : Normalized pinhole image projection.
• k ∈ R5 : Lens distortion parameters.
Compute
• xd = (x_d, y_d) : Normalized coordinates distorted by the lens.
• cdist : Radial distortion factor.
• delta_x : Tangential distortion.

xd = cdist·x + delta_x
   = [ cdist·x + delta_x(1)
       cdist·y + delta_x(2) ]
14. Radial distortion
xd1 = [ x·cdist
        y·cdist ]

cdist = 1 + k(1)·r² + k(2)·r⁴ + k(5)·r⁶

where

r² = x² + y²,
r⁴ = (r²)²,
r⁶ = (r²)³
15. Tangential distortion
delta_x = [ k(3)·a1 + k(4)·a2
            k(3)·a3 + k(4)·a1 ]

where

a1 = 2·x·y
a2 = r² + 2·x²
a3 = r² + 2·y²
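The two distortion slides combine into one short function. A NumPy sketch, assuming Bouguet's indexing k(1), …, k(5) shifted to Python's k[0], …, k[4] (`distort` is an illustrative name):

```python
import numpy as np

def distort(xn, k):
    """Apply radial + tangential distortion to a normalized
    pinhole projection xn = (x, y); k holds the 5 coefficients
    (k[0], k[1], k[4] radial; k[2], k[3] tangential)."""
    x, y = xn
    r2 = x * x + y * y
    cdist = 1.0 + k[0] * r2 + k[1] * r2**2 + k[4] * r2**3
    a1, a2, a3 = 2.0 * x * y, r2 + 2.0 * x * x, r2 + 2.0 * y * y
    delta_x = np.array([k[2] * a1 + k[3] * a2,
                        k[2] * a3 + k[3] * a1])
    return cdist * np.array([x, y]) + delta_x
```

With all coefficients zero the function is the identity, which is a quick sanity check on the model.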
16. Closed form solution for initialization
s [u, v, 1]ᵀ = A [r1 r2 r3 t] [X, Y, 0, 1]ᵀ = A [r1 r2 t] [X, Y, 1]ᵀ
             = H [X, Y, 1]ᵀ

Since r1 and r2 are orthonormal, we have the following two constraints:

h1ᵀ A⁻ᵀ A⁻¹ h2 = 0
h1ᵀ A⁻ᵀ A⁻¹ h1 = h2ᵀ A⁻ᵀ A⁻¹ h2,

where A⁻ᵀ A⁻¹ describes the image of the absolute conic.
17. Closed form solution for initialization
Let

B = A⁻ᵀ A⁻¹ ≡ [ B11 B12 B13
                B12 B22 B23
                B13 B23 B33 ]

  = [ 1/α²               −γ/(α²β)                      (v₀γ−u₀β)/(α²β)                ]
    [ −γ/(α²β)           γ²/(α²β²) + 1/β²              −γ(v₀γ−u₀β)/(α²β²) − v₀/β²     ]
    [ (v₀γ−u₀β)/(α²β)    −γ(v₀γ−u₀β)/(α²β²) − v₀/β²    (v₀γ−u₀β)²/(α²β²) + v₀²/β² + 1 ]

Note that B is symmetric, so it is defined by the 6D vector

b ≡ (B11, B12, B22, B13, B23, B33)ᵀ
18. Closed form solution for initialization
Let the i-th column vector of H be h_i = (h_i1, h_i2, h_i3)ᵀ.
Then we have the following linear equation:

h_iᵀ B h_j = v_ijᵀ b

where

v_ij = [h_i1 h_j1, h_i1 h_j2 + h_i2 h_j1, h_i2 h_j2,
        h_i3 h_j1 + h_i1 h_j3, h_i3 h_j2 + h_i2 h_j3, h_i3 h_j3]ᵀ
19. Closed form solution for initialization
The above two constraints can be rewritten as two homogeneous
equations in the unknown b:

[ v_12ᵀ
  (v_11 − v_22)ᵀ ] b = 0

Given n images, stacking these rows gives Vb = 0, where V ∈ R2n×6.
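This step can be sketched with synthetic data: build the two rows per homography, stack V, and take the right singular vector associated with the smallest singular value of V. The helper names below are mine (NumPy):

```python
import numpy as np

def v_row(H, i, j):
    """Row v_ij built from columns i, j of homography H (0-based)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0] * hj[0],
                     hi[0] * hj[1] + hi[1] * hj[0],
                     hi[1] * hj[1],
                     hi[2] * hj[0] + hi[0] * hj[2],
                     hi[2] * hj[1] + hi[1] * hj[2],
                     hi[2] * hj[2]])

def solve_b(Hs):
    """Stack two constraint rows per homography and return the
    right singular vector of V with the smallest singular value."""
    V = np.vstack([np.vstack([v_row(H, 0, 1),
                              v_row(H, 0, 0) - v_row(H, 1, 1)]) for H in Hs])
    _, _, Vt = np.linalg.svd(V)
    return Vt[-1]   # b = (B11, B12, B22, B13, B23, B33), up to scale
```

With noise-free homographies from at least three views, the recovered b matches the entries of A⁻ᵀA⁻¹ up to scale and sign.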
20. Least Squares
• Let f(·) be a function projecting 3D points X onto 2D points x
with projection parameters p, i.e. x = f(p); for a purely linear
model this reduces to x = PX with a projection matrix P.
• Our task is to find an optimal p that minimizes ||x − f(p)||₂².

When f(·) is non-linear, we iteratively update the estimate p̂,
starting from an initial estimate p̂₀, as

while (||x − f(p̂_{i−1})||₂² > ε)
    p̂_i = p̂_{i−1} + Δp_i;
    i = i + 1;
21. Least Squares: Newton's method
Given a previous estimate p̂_i, estimate an update Δ_i s.t.

Δ_i = arg min_Δ ||f(p̂_i + Δ) − x||₂²
    ≈ arg min_Δ ||f(p̂_i) + JΔ − x||₂²
    = arg min_Δ ||ε_i + JΔ||₂²

where J = ∂f/∂p and ε_i = f(p̂_i) − x.
Then, solve the normal equations

JᵀJ Δ = −Jᵀ ε_i
22. Levenberg-Marquardt (LM) iteration
The normal equations are replaced by the augmented normal
equations:

JᵀJ Δ = −Jᵀ ε_i   →   (JᵀJ + λI) Δ = −Jᵀ ε_i

where I denotes the identity matrix.
An initial value of λ is 10⁻³ times the average of the diagonal
elements of N = JᵀJ.
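A minimal LM loop, including the 10⁻³-times-average-diagonal initialization of λ and the usual accept/reject update of λ. The exponential model below is a toy stand-in, not the calibration cost (NumPy):

```python
import numpy as np

def lm_fit(f, jac, p0, x, y, n_iter=100, tol=1e-12):
    """Levenberg-Marquardt: solve (J'J + lam*I) d = -J'e each step;
    accept the step when the residual drops, else increase lam."""
    p = np.asarray(p0, dtype=float)
    J0 = jac(p, x)
    lam = 1e-3 * np.mean(np.diag(J0.T @ J0))   # initial damping
    for _ in range(n_iter):
        e = f(p, x) - y
        if e @ e < tol:
            break
        J = jac(p, x)
        d = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ e)
        e_new = f(p + d, x) - y
        if e_new @ e_new < e @ e:
            p, lam = p + d, lam / 10.0         # accept, relax damping
        else:
            lam *= 10.0                        # reject, increase damping
    return p

# Toy model y = a * exp(b * x), fit from a rough starting point.
f = lambda p, x: p[0] * np.exp(p[1] * x)
jac = lambda p, x: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])
xs = np.linspace(0.0, 1.0, 20)
ys = f(np.array([2.0, -1.5]), xs)
p_hat = lm_fit(f, jac, [1.0, 0.0], xs, ys)
```

With noise-free data the loop converges to the generating parameters; the damping only controls how cautiously each step is taken.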
23. Sparse LM
• LM is suitable for minimization w.r.t. a small number of
parameters.
• The central step of LM, solving the normal equations,
• has complexity N³ in the number of parameters, and
• is repeated many times.
• The normal equation matrix has a certain sparse block
structure.
24. Sparse LM
• Let p ∈ RM be the parameter vector, partitioned into two
parameter vectors as p = (aᵀ, bᵀ)ᵀ.
• Let x ∈ RN be the measurement vector, and let Σ_x be its
covariance matrix.
• A general function f : RM → RN takes p to the estimated
measurement vector x̂ = f(p).
• ε denotes the difference x − x̂ between the measured and the
estimated vectors.
25. Sparse LM
The set of equations Jδ = ε solved as the central step in LM
has the form

Jδ = [A|B] [ δ_a ] = ε.
           [ δ_b ]

Then, the normal equations Jᵀ Σ_x⁻¹ J δ = Jᵀ Σ_x⁻¹ ε to be solved
at each step of LM are of the form

[ Aᵀ Σ_x⁻¹ A   Aᵀ Σ_x⁻¹ B ] [ δ_a ]   [ Aᵀ Σ_x⁻¹ ε ]
[ Bᵀ Σ_x⁻¹ A   Bᵀ Σ_x⁻¹ B ] [ δ_b ] = [ Bᵀ Σ_x⁻¹ ε ]
26. Sparse LM
Let
• U = Aᵀ Σ_x⁻¹ A
• W = Aᵀ Σ_x⁻¹ B
• V = Bᵀ Σ_x⁻¹ B
• ε_A = Aᵀ Σ_x⁻¹ ε and ε_B = Bᵀ Σ_x⁻¹ ε
and let ·* denote the matrix with its diagonal augmented by λ.
The normal equations are rewritten as

[ U*   W  ] [ δ_a ]   [ ε_A ]
[ Wᵀ   V* ] [ δ_b ] = [ ε_B ]

→ [ U* − W V*⁻¹ Wᵀ   0  ] [ δ_a ]   [ ε_A − W V*⁻¹ ε_B ]
  [ Wᵀ               V* ] [ δ_b ] = [ ε_B ]

This results in the elimination of the top right-hand block.
27. Sparse LM
The top half of this set of equations is

(U* − W V*⁻¹ Wᵀ) δ_a = ε_A − W V*⁻¹ ε_B

from which δ_a is found. Subsequently, the value of δ_b may be
found by back-substitution, using

V* δ_b = ε_B − Wᵀ δ_a
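The two equations above amount to a Schur-complement solve of the block system. A NumPy sketch in the unaugmented case (U, V in place of U*, V*), using a dense V⁻¹ for simplicity; in the real sparse setting V is block-diagonal, which is what makes this cheap:

```python
import numpy as np

def solve_schur(U, W, V, eA, eB):
    """Solve [[U, W], [W^T, V]] [da; db] = [eA; eB] by eliminating db:
    (U - W V^-1 W^T) da = eA - W V^-1 eB, then back-substitute for db."""
    Vinv = np.linalg.inv(V)
    S = U - W @ Vinv @ W.T               # Schur complement of V
    da = np.linalg.solve(S, eA - W @ Vinv @ eB)
    db = np.linalg.solve(V, eB - W.T @ da)
    return da, db
```

Against a dense solve of the full system, the two-stage solution is identical up to round-off.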
28. Sparse LM
p = (aᵀ, bᵀ)ᵀ, where a = (fcᵀ, ccᵀ, alpha_c, kcᵀ)ᵀ and
b = (omc_1ᵀ, Tc_1ᵀ, …, omc_Nᵀ, Tc_Nᵀ)ᵀ.
The Jacobian matrix is

J = ∂x̂/∂p = [ ∂x̂/∂a, ∂x̂/∂b ] = [A, B]

where

∂x̂/∂a = [ ∂x̂/∂fc, ∂x̂/∂cc, ∂x̂/∂alpha_c, ∂x̂/∂kc ]
∂x̂/∂b = [ ∂x̂/∂omc_1, ∂x̂/∂Tc_1, …, ∂x̂/∂omc_i, ∂x̂/∂Tc_i, …, ∂x̂/∂omc_N, ∂x̂/∂Tc_N ]
29. Sparse LM
The normal equations are rewritten as

( [ AᵀA   AᵀB ]        )        [ Aᵀε ]
( [ BᵀA   BᵀB ] + λI ) Δp = − [ Bᵀε ]

where the intrinsic blocks sum over all N images,

AᵀA = Σ_{i=1..N} Aᵢᵀ Aᵢ,   Aᵀε = Σ_{i=1..N} Aᵢᵀ ε_i,

while each image i contributes its own blocks Aᵢᵀ Bᵢ, Bᵢᵀ Aᵢ,
Bᵢᵀ Bᵢ, and Bᵢᵀ ε_i.
30. Sparse LM
JᵀJ = [ Σ_{i=1..N} Aᵢᵀ Aᵢ   A₁ᵀB₁   ⋯   AᵢᵀBᵢ   ⋯   A_NᵀB_N
        B₁ᵀA₁               B₁ᵀB₁
        ⋮                           ⋱
        BᵢᵀAᵢ                           BᵢᵀBᵢ
        ⋮                                       ⋱
        B_NᵀA_N                                     B_NᵀB_N ]
31. Sparse LM
Jᵀε = [ Σ_{i=1..N} Aᵢᵀ ε_i
        B₁ᵀ ε₁
        ⋮
        Bᵢᵀ ε_i
        ⋮
        B_Nᵀ ε_N ]
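The block pattern of the last three slides can be verified numerically: assemble JᵀJ and Jᵀε from per-image blocks A_i, B_i and compare with the dense J = [A | blockdiag(B_1, …, B_N)]. All sizes below are made up (NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
d_int, d_ex, n_img, m = 10, 6, 3, 8          # illustrative sizes only

As = [rng.standard_normal((2 * m, d_int)) for _ in range(n_img)]
Bs = [rng.standard_normal((2 * m, d_ex)) for _ in range(n_img)]
es = [rng.standard_normal(2 * m) for _ in range(n_img)]

# Dense reference: J = [A | blockdiag(B_1, ..., B_N)].
Bblk = np.zeros((2 * m * n_img, d_ex * n_img))
for i, B in enumerate(Bs):
    Bblk[2 * m * i:2 * m * (i + 1), d_ex * i:d_ex * (i + 1)] = B
J = np.hstack([np.vstack(As), Bblk])
e = np.concatenate(es)

# Block assembly as on the slides: the intrinsic block sums over
# images; image i contributes A_i'B_i, B_i'A_i, B_i'B_i, B_i'e_i.
D = d_int + d_ex * n_img
JtJ, Jte = np.zeros((D, D)), np.zeros(D)
JtJ[:d_int, :d_int] = sum(A.T @ A for A in As)
Jte[:d_int] = sum(A.T @ ei for A, ei in zip(As, es))
for i, (A, B, ei) in enumerate(zip(As, Bs, es)):
    s = d_int + d_ex * i
    JtJ[:d_int, s:s + d_ex] = A.T @ B
    JtJ[s:s + d_ex, :d_int] = B.T @ A
    JtJ[s:s + d_ex, s:s + d_ex] = B.T @ B
    Jte[s:s + d_ex] = B.T @ ei
```

The extrinsic-extrinsic off-diagonal blocks BᵢᵀBⱼ (i ≠ j) vanish because each image's points depend only on that image's extrinsics, which is exactly the arrow-shaped sparsity exploited above.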
32. Sparse LM
When each image has a different number of corresponding points
(M_i ≠ M_j for i ≠ j), each A_i and B_i has a different size.
33. Sparse LM
However, the difference does not matter, because

AᵀA ∈ R^(d_int×d_int)
AᵀB ∈ R^(d_int×d_ex)
BᵀA ∈ R^(d_ex×d_int)
BᵀB ∈ R^(d_ex×d_ex)

where d_int denotes the dimension of the intrinsic parameters and
d_ex the dimension of the extrinsic parameters.