Computer Vision: Epipolar Geometry
Frédéric Devernay
With slides from Marc Pollefeys
Epipolar geometry

[Figure: two cameras with centers C1, C2 observe a 3D point M; its images m1, m2 lie on corresponding epipolar lines l1, l2 through the epipoles e1, e2; the epipolar plane π contains both camera centers.]

Underlying structure in a set of matches for rigid scenes:

    m2^T F m1 = 0

F is the fundamental matrix (3x3, rank-2 matrix).

Canonical representation of the camera pair:

    P = [I | 0],   P' = [[e']_x F + e' v^T | e']

The fundamental matrix:
1. is computable from corresponding points
2. simplifies matching
3. allows detection of wrong matches
4. is related to calibration
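As an illustration of the canonical representation above, here is a minimal NumPy sketch (function names are ours) that recovers the epipole e' as the left null vector of F and assembles the camera pair, in the special case v = 0 and unit scale:

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def cameras_from_F(F):
    """Canonical camera pair P = [I|0], P' = [[e']_x F | e'] (v = 0, scale 1)."""
    # e' is the left null vector of F: e'^T F = 0, i.e. the left singular
    # vector associated with the zero singular value of the rank-2 matrix F.
    U, S, Vt = np.linalg.svd(F)
    e2 = U[:, -1]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2.reshape(3, 1)])
    return P1, P2
```

Any choice of v and scale gives a camera pair with the same fundamental matrix; they differ by a projective transformation of space.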
Epipolar geometry

[Figure: point M, its images x1 and x2, epipolar lines l1 and l2, epipoles e1 and e2, epipolar plane π.]

The epipolar line of x1 in its own image passes through the epipole:

    l1 = e1 x x1

The line l1 back-projects to the epipolar plane, which also back-projects from l2 in the second image:

    π = P1^T l1 = P2^T l2

so the epipolar line in the second image is

    l2 = P2^{+T} P1^T l1

Since x2 lies on l2 (l2^T x2 = 0), substituting l1 = e1 x x1 gives

    x2^T F x1 = 0,   with   F = P2^{+T} P1^T [e1]_x

F is the fundamental matrix (3x3, rank-2 matrix).
The projective reconstruction theorem

If a set of point correspondences in two views determines the fundamental matrix uniquely, then the scene and cameras may be reconstructed from these correspondences alone, and any two such reconstructions are projectively equivalent.

→ allows reconstruction from a pair of uncalibrated images!
Properties of the fundamental matrix
Computation of F
• Linear (8-point)
• Minimal (7-point)
• Robust (RANSAC)
• Non-linear refinement (MLE, …)
• Practical approach
Epipolar geometry: basic equation

    x'^T F x = 0

Written out for x = (x, y, 1) and x' = (x', y', 1):

    x'x f11 + x'y f12 + x' f13 + y'x f21 + y'y f22 + y' f23 + x f31 + y f32 + f33 = 0

Separate known from unknown:

    [x'x, x'y, x', y'x, y'y, y', x, y, 1] [f11, f12, f13, f21, f22, f23, f31, f32, f33]^T = 0
    (data)                                (unknowns)                                    (linear)

Stacking one row per correspondence:

    [ x1'x1  x1'y1  x1'  y1'x1  y1'y1  y1'  x1  y1  1 ]
    [   .      .     .     .      .     .    .   .  . ]  f = 0
    [ xn'xn  xn'yn  xn'  yn'xn  yn'yn  yn'  xn  yn  1 ]

i.e.  A f = 0
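Building A and solving A f = 0 in the least-squares sense (f = right singular vector of A with smallest singular value) can be sketched as follows. This is a plain NumPy sketch with a function name of our choosing; no normalization or rank-2 enforcement yet, both of which are addressed on the next slides:

```python
import numpy as np

def eight_point(x1, x2):
    """Linear estimate of F from n >= 8 correspondences.
    x1, x2: (n, 2) arrays of matching points in the first and second image.
    Solves A f = 0 by SVD and returns the 3x3 matrix F (rank 3 in general)."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        x, y = x1[i]
        xp, yp = x2[i]
        # row [x'x, x'y, x', y'x, y'y, y', x, y, 1]
        A[i] = [xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1]
    # f = right singular vector of A with smallest singular value
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```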
the NOT normalized 8-point algorithm

In pixel coordinates, the columns of the data matrix differ by orders of magnitude:

    [ x1'x1  x1'y1  x1'  x1 y1'  y1 y1'  y1'  x1  y1  1 ]   [ f11 ]
    [   .      .     .     .       .      .    .   .  . ] . [  .  ] = 0
    [ xn'xn  xn'yn  xn'  xn yn'  yn yn'  yn'  xn  yn  1 ]   [ f33 ]

     ~10^4  ~10^4  ~10^2  ~10^4  ~10^4  ~10^2 ~10^2 ~10^2 ~1

Orders of magnitude difference between columns of the data matrix
→ least squares yields poor results.
the normalized 8-point algorithm

Transform each image to roughly [-1,1] x [-1,1]; e.g. for a 700 x 500 image, (0,0) → (-1,-1) and (700,500) → (1,1):

    T = [ 2/700    0    -1 ]
        [   0    2/500  -1 ]
        [   0      0     1 ]

Normalized least squares yields good results (Hartley, PAMI'97).
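The normalizing transform of the slide can be written directly (a sketch; the function name is ours). After estimating F_hat on normalized points x_norm = T1 x, x'_norm = T2 x', the result must be denormalized as F = T2^T F_hat T1:

```python
import numpy as np

def normalize_transform(w, h):
    """Map pixel coordinates [0,w] x [0,h] to [-1,1] x [-1,1], as on the slide.
    Hartley's isotropic normalization (centroid at the origin, RMS distance
    sqrt(2)) is a common alternative and works at least as well in practice."""
    return np.array([[2.0 / w, 0.0, -1.0],
                     [0.0, 2.0 / h, -1.0],
                     [0.0, 0.0, 1.0]])
```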
the singularity constraint

    e'^T F = 0     F e = 0     det F = 0     rank F = 2

The linearly computed F generally has rank 3. Using its SVD,

    F = U diag(σ1, σ2, σ3) V^T = σ1 u1 v1^T + σ2 u2 v2^T + σ3 u3 v3^T

compute the closest rank-2 approximation, minimizing ||F - F'||:

    F' = U diag(σ1, σ2, 0) V^T = σ1 u1 v1^T + σ2 u2 v2^T

[Figure: epipolar lines of F vs. F' — without the rank-2 constraint the epipolar lines do not all meet in a single epipole.]
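Enforcing the singularity constraint is a two-line NumPy operation (sketch; function name ours): zero the smallest singular value and recompose.

```python
import numpy as np

def enforce_rank2(F):
    """Closest rank-2 matrix to F in Frobenius norm:
    zero the smallest singular value in the SVD and recompose."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```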
the minimum case – 7 point correspondences

    [ x1'x1  x1'y1  x1'  y1'x1  y1'y1  y1'  x1  y1  1 ]
    [   .      .     .     .      .     .    .   .  . ]  f = 0
    [ x7'x7  x7'y7  x7'  y7'x7  y7'y7  y7'  x7  y7  1 ]

A is 7x9, so its SVD

    A = U_7x7 diag(σ1, …, σ7, 0, 0) V_9x9^T

has a two-dimensional null space spanned by the last two columns of V:

    A [v8 v9] = 0_7x2     (e.g. v8 = [0 0 0 0 0 0 0 1 0]^T)

Reshaping v8 and v9 into 3x3 matrices F1 and F2, every F = F1 + λF2 satisfies the 7 constraints:

    x'_i^T (F1 + λF2) x_i = 0,   i = 1…7

→ one-parameter family of solutions, but F1 + λF2 is not automatically rank 2.
the minimum case – impose rank 2 (obtain 1 or 3 solutions)

[Figure: the one-parameter family F1 + λF2 intersects the rank-2 surface σ3 = 0 in 1 or 3 points.]

    det(F1 + λF2) = a3 λ^3 + a2 λ^2 + a1 λ + a0 = 0     (cubic equation)

    det(F1 + λF2) = det F2 . det(F2^-1 F1 + λI) = 0     (using det(AB) = det A . det B)

Compute the possible λ as the eigenvalues of -F2^-1 F1 (only real eigenvalues are potential solutions).
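A sketch of the 7-point algorithm in NumPy (function name ours; the cubic coefficients are obtained by exact interpolation of det(F1 + λF2) at four values of λ, and the real roots are kept, which is equivalent to the eigenvalue formulation above):

```python
import numpy as np

def seven_point(x1, x2):
    """7-point algorithm: returns the 1 or 3 real rank-2 solutions for F.
    x1, x2: (7, 2) arrays of corresponding points."""
    A = np.zeros((7, 9))
    for i in range(7):
        x, y = x1[i]
        xp, yp = x2[i]
        A[i] = [xp * x, xp * y, xp, yp * x, yp * y, yp, x, y, 1]
    _, _, Vt = np.linalg.svd(A)
    F1 = Vt[-1].reshape(3, 3)   # the two-dimensional null space of A
    F2 = Vt[-2].reshape(3, 3)
    # det(F1 + lam*F2) is a cubic in lam: recover its coefficients by
    # sampling the determinant at 4 points (exact degree-3 interpolation)
    lams = np.array([0.0, 1.0, -1.0, 2.0])
    dets = [np.linalg.det(F1 + l * F2) for l in lams]
    coeffs = np.polyfit(lams, dets, 3)   # highest power first, as np.roots expects
    sols = []
    for r in np.roots(coeffs):
        if abs(r.imag) < 1e-8:           # keep only real roots
            sols.append(F1 + r.real * F2)
    return sols
```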
Minimal solution for calibrated cameras: 5-point
Robust estimation
• What if the set of matches contains gross outliers?
(To keep things simple, let's consider line fitting first.)
RANSAC (RANdom SAmple Consensus)

Objective: robust fit of a model to a data set S which contains outliers.

Algorithm:
(i) Randomly select a sample of s data points from S and instantiate the model from this subset.
(ii) Determine the set of data points Si which are within a distance threshold t of the model. The set Si is the consensus set of the sample and defines the inliers of S.
(iii) If the size of Si is greater than some threshold T, re-estimate the model using all the points in Si and terminate.
(iv) If the size of Si is less than T, select a new subset and repeat the above.
(v) After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.
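The algorithm above, applied to the simple line-fitting case, can be sketched as follows (NumPy; function name, defaults, and the total-least-squares re-estimation step are our choices):

```python
import numpy as np

def ransac_line(pts, n_iters=100, t=0.1, rng=None):
    """RANSAC fit of a line a*x + b*y + c = 0 (with a^2 + b^2 = 1) to 2D points.
    Sample s = 2 points, count inliers within distance t, keep the largest
    consensus set, then re-estimate on all inliers (steps (i)-(v) of the slide)."""
    if rng is None:
        rng = np.random.default_rng()
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.array([-d[1], d[0]])          # normal of the line through the sample
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                          # degenerate sample (coincident points)
        n = n / norm
        c = -n @ pts[i]
        dist = np.abs(pts @ n + c)            # point-to-line distances
        inliers = dist < t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # re-estimate on the consensus set: total least squares via SVD
    P = pts[best_inliers]
    mean = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - mean)
    n = Vt[-1]                                # direction of least variance = normal
    return n[0], n[1], -n @ mean, best_inliers
```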
Distance threshold

Choose t so that the probability that an inlier is accepted is α (e.g. 0.95):
• often chosen empirically
• for zero-mean Gaussian noise with standard deviation σ, d⊥² follows a χ²_m distribution with m = codimension of the model (dimension + codimension = dimension of the space)

    Codimension | Model   | t²
    1           | line, F | 3.84 σ²
    2           | H, P    | 5.99 σ²
    3           | T       | 7.81 σ²
How many samples?

Choose N so that, with probability p, at least one random sample is free from outliers (e.g. p = 0.99):

    (1 - (1 - e)^s)^N = 1 - p

    N = log(1 - p) / log(1 - (1 - e)^s)

with e the proportion of outliers and s the sample size:

    s \ e   5%  10%  20%  25%  30%  40%   50%
    2        2    3    5    6    7   11    17
    3        3    4    7    9   11   19    35
    4        3    5    9   13   17   34    72
    5        4    6   12   17   26   57   146
    6        4    7   16   24   37   97   293
    7        4    8   20   33   54  163   588
    8        5    9   26   44   78  272  1177

Note: assumes that the inliers allow identifying the other inliers.
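The table above follows directly from the formula; a one-line sketch (function name ours) that reproduces its entries:

```python
import numpy as np

def ransac_iterations(p, e, s):
    """Number of samples N such that, with probability p, at least one
    sample of size s is outlier-free, given outlier proportion e."""
    return int(np.ceil(np.log(1 - p) / np.log(1 - (1 - e) ** s)))
```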
Acceptable consensus set?
• Typically, terminate when the inlier ratio reaches the expected ratio of inliers:

    T = (1 - e) n
Adaptively determining the number of samples

e is often unknown a priori, so start from the worst case (no inliers, hence N = ∞) and adapt as more inliers are found; e.g. 80% inliers would yield e = 0.2.

• N = ∞, sample_count = 0
• While N > sample_count, repeat:
  – choose a sample and count the number of inliers
  – set e = 1 - (number of inliers) / (total number of points)
  – recompute N from e:  N = log(1 - p) / log(1 - (1 - e)^s)
  – increment sample_count by 1
• Terminate
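The adaptive loop above can be sketched generically (NumPy; the function name, the callback interface, and the early-exit guards for the degenerate cases e = 1 and e = 0 are our choices):

```python
import numpy as np

def adaptive_ransac(data, fit, count_inliers, s, p=0.99, rng=None):
    """Adaptive RANSAC: start from the worst case (no inliers, N = infinity)
    and shrink N whenever a larger consensus set is found.
    `fit` maps a sample (array of s rows of data) to a model;
    `count_inliers` maps a model to its number of inliers."""
    if rng is None:
        rng = np.random.default_rng()
    N = np.inf
    sample_count = 0
    best_model, best_inliers = None, -1
    n = len(data)
    while sample_count < N:
        sample = data[rng.choice(n, size=s, replace=False)]
        model = fit(sample)
        k = count_inliers(model)
        if k > best_inliers:
            best_model, best_inliers = model, k
            if k == n:
                break                        # all points are inliers: stop
            if k > 0:                        # k = 0 would leave N = infinity
                e = 1 - k / n
                N = np.log(1 - p) / np.log(1 - (1 - e) ** s)
        sample_count += 1
    return best_model, best_inliers
```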
Other robust algorithms
• RANSAC maximizes the number of inliers
• LMedS minimizes the median error

[Figure: sorted residuals (pixels) vs. inlier percentile; LMedS minimizes the residual at the 50th percentile.]

• Not recommended: case deletion, iterative least-squares, etc.
Non-linear refinement
Geometric distance
• Gold standard
• Symmetric epipolar distance
Gold standard

Maximum Likelihood Estimation (= least squares for Gaussian noise):

    min Σ_i d(x_i, x̂_i)² + d(x'_i, x̂'_i)²    subject to  x̂'_i^T F x̂_i = 0

Initialize: normalized 8-point, (P, P') from F, reconstruct the X_i.

Parameterize (overparametrized):

    P = [I | 0],   P' = [M | t],   X_i

    x̂_i = P X_i,   x̂'_i = P' X_i

Minimize the cost using Levenberg-Marquardt (preferably sparse LM, e.g. see H&Z).
Gold standard (continued)

Alternative: minimal parametrization of F (with a = 1).
(Note: (x, y, 1) and (x', y', 1) are the epipoles.)

Problems:
• a = 0 → pick the largest of a, b, c, d to fix to 1
• epipole at infinity → pick the largest of x, y, w and of x', y', w'

→ 4 x 3 x 3 = 36 parametrizations! Reparametrize at every iteration to be safe.
Symmetric epipolar error

    Σ_i d(x'_i, F x_i)² + d(x_i, F^T x'_i)²

      = Σ_i ( 1 / ((F x_i)_1² + (F x_i)_2²) + 1 / ((F^T x'_i)_1² + (F^T x'_i)_2²) ) (x'_i^T F x_i)²
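The symmetric epipolar error above is straightforward to vectorize (a NumPy sketch; function name ours). Each term is the squared distance of a point to the epipolar line transferred from its match, summed both ways:

```python
import numpy as np

def symmetric_epipolar_error(F, x1, x2):
    """Sum over i of d(x2_i, F x1_i)^2 + d(x1_i, F^T x2_i)^2.
    x1, x2: (n, 2) arrays of corresponding points."""
    n = x1.shape[0]
    X1 = np.hstack([x1, np.ones((n, 1))])   # homogeneous coordinates
    X2 = np.hstack([x2, np.ones((n, 1))])
    Fx1 = X1 @ F.T                           # rows: epipolar lines F x1_i in image 2
    Ftx2 = X2 @ F                            # rows: epipolar lines F^T x2_i in image 1
    num = np.sum(X2 * Fx1, axis=1) ** 2      # (x2_i^T F x1_i)^2
    d1 = num / (Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2)
    d2 = num / (Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2)
    return np.sum(d1 + d2)
```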
Some experiments:

[Figures: estimated epipolar geometry overlaid on several image pairs.]
Residual error:

    Σ_i d(x'_i, F x_i)² + d(x_i, F^T x'_i)²     (computed for all points, not only those used to estimate F)
Recommendations:
1. Do not use unnormalized algorithms
2. Quick and easy to implement: normalized 8-point
3. Better: enforce the rank-2 constraint during the minimization
4. Best: Maximum Likelihood Estimation (minimal parametrization, sparse implementation)
Automatic computation of F

Step 1. Extract features
Step 2. Compute a set of potential matches
Step 3. do {
    Step 3.1 select a minimal sample (i.e. 7 matches)    (generate hypothesis)
    Step 3.2 compute solution(s) for F
    Step 3.3 determine inliers                           (verify hypothesis)
} until Γ(#inliers, #samples)