Accurate vision based road tracker

Roland Chapuis, Jean Laneurit, Romuald Aufrère, Frédéric Chausse, Thierry Chateau
LASMEA, UMR 6602 du CNRS, 24 avenue des Landais, 63177 Aubière Cedex
email: [email protected]

Abstract— Vision based road trackers are about to be integrated in current vehicles, mainly for security purposes, for example in response to driver drowsiness. Nevertheless, such systems must be acceptable to drivers and must be highly reliable, both in terms of roadside recognition and of vehicle location estimation. Such a system must therefore be able to run in spite of difficult situations (occlusions, traffic, bad weather conditions, etc.). Furthermore, the accuracy of the 3D vehicle state estimation must be sufficient to feed subsequent warning systems. The system we have designed recognizes the lane sides in the current image reliably and uses a 3D/4D modelling which provides both good recognition and very accurate 3D parameter estimation (vehicle location, steer angle, road curvatures, etc.). The paper focuses mainly on this original 3D estimation stage and presents our recent developments (distance between the vehicle and each road side for lane tracking applications, increased analysis distances, vertical road curvature estimation). The behaviour of the algorithm is then demonstrated in both simulated and real situations in order to prove the reliability of the approach.

I. Introduction

Since the 80's, the scientific community has been coping with driving assistance problems. Several systems are now implemented in standard vehicles (such as ABS and GPS navigation systems). Nevertheless, each time an application needs to be solved by visual perception, the problems turn out to be much harder. For example, obstacle detection seems easy to achieve with a RADAR based system, but it is not clear how such a system can distinguish dangerous obstacles from bridges or road signs. Furthermore, a perception system must be insensitive to weather and traffic as well. For these reasons, we claim there is still room for robust vision based on-board systems designed to achieve robust detection and tracking of the road boundaries. We have developed such a system, which has shown good recognition capabilities in difficult situations [19], [14]. This road tracker is able to recognize highway lane sides and to compute several 3D parameters such as lateral vehicle location, steering and pitching angles, lane width and local horizontal road curvature. The algorithm uses an original way to recognize the road sides in the image and to compute the 3D parameters. This paper focuses on the improvements we have made to the 3D estimation stage. In particular, we have improved the vehicle location estimation and added the reconstruction of the vertical road curvature. The algorithm is now able to work at greater distances. We have tested the capabilities of the algorithm on a known simulated road in order to evaluate numerous cases including horizontal and vertical curvatures, tilt and roll variations, etc. We have also evaluated the vehicle location and lane width estimations in real images, using manual measurements and several focal lengths of the camera. The paper first presents the classical approaches of the literature and briefly describes our algorithm. The next part deals with our new 3D parameter estimation once the roadsides have been detected in the image. Simulation and real results are finally presented in order to validate the concept.

II. Classical approaches

In the literature, two main approaches are used to solve the vision-based road recognition problem. They mainly differ in the roadside model used (3D model or 2D image model).

In the first case, a 3D roadside model, which can take into account both road parameters and vehicle location parameters, is used to define the roadsides in the image. Roadside detection is most often achieved in Windows Of Interest (WOI). These observations are then used to update the 3D model. Several kinds of 3D models can be used. First, if the roadsides can be assumed to be straight, a simple model can be used [11], [9], [17]. Nevertheless, this technique generates errors on the vehicle localisation parameters if the road is not flat or if the sides are not really straight. This model is mainly used in non-marked road contexts, since the analysis distances do not need to be high there. In order to take the road curvature into account, higher order models have been developed. For example, in [17] the authors present a three parameter model assuming a flat road. More complex models have been designed, for example by Dickmanns et al. [4], [5]; the model used in this case takes into account both the road geometry and the vehicle egostate. The main drawbacks of these 3D modelling approaches are a weak precision if the model is simple, or a strong sensitivity to noise and a difficult updating if the model is more complex. Most often, a tracking stage uses the updated parameters to help the roadside detection in the next image. This stage is generally mandatory since the back-projection of the 3D model is usually nonlinear; the prediction thus reduces linearisation errors. Since most of these methods run on a prediction/updating principle using state evolution, they need an initial parameter estimation. In most cases, this stage is achieved artificially, and a loss of tracking can be difficult to manage.

The second class of approaches is based on roadside recognition using a 2D image model. In this case too, several models can be found: rectilinear models [11], [9], [15], parabolas [13], [8], polynomial curves [7], [12], splines [20], hyperbola models [12]. Furthermore, several methods use the inverse perspective mapping [2]. Image approaches are usually easier to implement than 3D ones, since they do not need accurate calibration or knowledge of the vehicle parameters. However, the estimation of the vehicle location can be difficult, since this stage needs a 3D estimation. Furthermore, the knowledge of one roadside location is not really used to help the recognition of the other one; a recognition stage using this consistency is then difficult to design. Nevertheless, starting from the image roadside locations, several reconstruction techniques are able to compute the 3D parameters of the road. These methods can be very noise sensitive, since they need differential minimisation [3], [10] or approximations [18], [14]. Furthermore, since these methods are usually decorrelated from the recognition scheme, errors due to this stage can be difficult to manage even if the reconstruction scheme is powerful (see for example [1]).

III. Roadsides recognition

Our approach has been described in [16], [14]. This section recalls the basic principle of the roadside recognition. The recognition approach is based on an image model. It provides reliable roadside recognition in spite of difficult conditions thanks to: (1) the original image roadside modelling we developed, (2) a recognition stage in which border detection is achieved in very small windows of interest, and (3) the fact that the recognition of one roadside helps the recognition of the other one, so that the consistency of the detections is preserved in an optimal way.

A. Image roadsides model

Our approach is based on a statistical image roadside model. Each roadside is represented by n − 1 (n = 10 here) connected segments. The abscissas of the end points of these segments are highly correlated (their ordinates are assumed constant here, but will evolve for each image, see §IV-E), even between two segments not belonging to the same side. These abscissas are gathered in a vector X_d = (u_{1l}, ..., u_{nl}, u_{1r}, ..., u_{nr})^t.

B. Road recognition

B.1 Principle

Our goal is to estimate the optimal value \hat{X}_d of X_d which represents the roadside locations in the current image. The recognition uses an initial value X_d(0) of X_d and its covariance matrix C_{X_d}(0). Vector X_d(0) represents the mean value of the roadside locations in an image and C_{X_d}(0) represents their dispersion. A training stage is used to compute X_d(0) and C_{X_d}(0); it is based on a 3D model and is described more precisely in §IV-B.

The recognition method is based on a search tree. The combinatorics are reduced by pruning the branches of the search tree as soon as possible [6]. This technique does not require detecting features in the whole image, as is the case for accumulation methods. Indeed, the detections are usually done in a much more effective way because they are achieved in small windows of interest (WOI), and are therefore less noise sensitive. After the training phase, i.e. for an analysis depth p = 0 with model (X_d(0), C_{X_d}(0)), a WOI is chosen in the image according to its "cost". In this WOI a segment is detected, and the model is updated to give (X_d(1), C_{X_d}(1)). This procedure is iterative: a new optimal WOI is chosen and the algorithm then works at depth p + 1. The roadsides are assumed to be found in the image when a certain criterion is reached. A refining stage then improves the search result.

B.2 Detection

This phase consists in detecting the road edges in the previously defined WOI. For each row inside this WOI, the point of maximum gradient is localised. A line segment is fitted to the resulting set of points by a median least squares method which takes into account a slope constraint deduced from C_{X_d}. This detector in fact tries to find a segment according to the needs of the current model. If no segment can be obtained in spite of 2n − p attempts (the number of remaining interest zones for a given depth p), the algorithm leaves this branch and goes back up to the previous depth p − 1.

B.3 Updating

For an analysis depth p, the detection phase gives a measurement \hat{x}(p) = (\hat{u}_i(p), \hat{u}_{i+1}(p))^t allowing the model (X_d(p − 1), C_{X_d}(p − 1)) to be refined for the current depth p. This phase consists in computing a new vector X_d(p) and a new covariance matrix C_{X_d}(p) deduced from the observation \hat{x}(p) and the preceding state X_d(p − 1), C_{X_d}(p − 1) in the following way:

X_d(p) = X_d(p − 1) + K_d [\hat{x}(p) − x(p − 1)]
C_{X_d}(p) = C_{X_d}(p − 1) − K_d H_d C_{X_d}(p − 1)
K_d = C_{X_d}(p − 1) H_d^t [H_d C_{X_d}(p − 1) H_d^t + R]^{−1}

where:
• x = (u_i, u_{i+1})^t = H_d X_d,
• R: covariance matrix of the detection error w_d (such that \hat{x} = (\hat{u}_i, \hat{u}_{i+1})^t = x + w_d), fixed a priori at 5 pixels (here R = 5^2 I where I is the 2×2 identity matrix).

Finally, once the road recognition for image k is achieved, the procedure provides the vector X_d(k|k) and its covariance matrix C_{X_d}(k|k).
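To make the detection/updating step concrete, the sketch below (not the authors' code) detects a segment in a WOI by picking the strongest gradient on each row, then folds the two measured abscissas into the 2n-dimensional roadside vector through the Kalman-style gain above. The plain least squares fit stands in for the constrained median least squares of the paper, and all names and array shapes are illustrative assumptions.

```python
import numpy as np

def detect_segment(woi, rows):
    """Pick the max-gradient column on each row of the WOI and fit a line
    u = a*v + b (the paper uses a slope-constrained median least squares)."""
    cols = np.abs(np.diff(woi.astype(float), axis=1)).argmax(axis=1)
    a, b = np.polyfit(rows, cols, 1)
    # Return the fitted abscissas (u_i, u_{i+1}) at the WOI's extremal rows.
    return np.array([a * rows[0] + b, a * rows[-1] + b])

def update_model(Xd, CXd, x_meas, i, sigma_det=5.0):
    """One depth of the recursive update: x_meas = (u_i, u_{i+1}) observes
    components i and i+1 of the 2n-D vector Xd."""
    Hd = np.zeros((2, Xd.size))
    Hd[0, i], Hd[1, i + 1] = 1.0, 1.0
    R = sigma_det**2 * np.eye(2)                 # detection noise, 5 pixels
    Kd = CXd @ Hd.T @ np.linalg.inv(Hd @ CXd @ Hd.T + R)
    Xd_new = Xd + Kd @ (x_meas - Hd @ Xd)
    CXd_new = CXd - Kd @ Hd @ CXd
    return Xd_new, CXd_new
```

Because the gain K_d is built from the full covariance C_{X_d}, one detection tightens the uncertainty of every correlated abscissa, including those of the opposite roadside; this is what lets one side help the recognition of the other.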

IV. 3D parameters estimation and tracking

Once the roadsides are located in the image, the 3D parameters have to be computed. Classical approaches are usually decorrelated from the recognition, and a 3D reconstruction is performed starting from the image data. The main drawbacks of these approaches are that (1) due to occlusions, some image data cannot be used even though they could easily be deduced from other cues, (2) due to nonlinear equations, the stability of the reconstruction can be difficult to guarantee in all conditions, and (3) the covariance of the image estimations can be difficult to take into account, though it could be very useful to improve accuracy and stability. The reconstruction process we use is based on a linear estimation. Indeed, although we have chosen a nonlinear 3D model taking into account tilt and steer angles, road width and horizontal and vertical road curvatures, the reconstruction update is achieved by a simple Kalman filter using the estimated vector X_d(k|k).

A. 3D roadsides model

The model we use is expressed as a function of the image coordinates (u_l, v) for the left lane side and (u_r, v) for the right lane side:

u_l = f_l(v, X_l)  and  u_r = f_r(v, X_l)

with X_l = (X_0, L, ψ, α, C_h, C_v)^t, where:
• X_0: lateral position of the vehicle on the roadway,
• L: road width,
• ψ: vehicle steer angle,
• α: camera inclination angle,
• C_h: horizontal road curvature,
• C_v: vertical road curvature.

It can be shown indeed that:

u ≈ e_u [ (X_0 − λL)/Y − C_h Y/2 − ψ ]          (1)

with u_l = u for λ = −1 and u_r = u for λ = 0, where e_u and e_v are the intrinsic camera parameters and

Y = (Z_0/C_v) [ sqrt( ((v − e_v α)/(e_v Z_0))^2 + 2 C_v/Z_0 ) − (v − e_v α)/(e_v Z_0) ]

with Z_0 the height of the camera above the road. As C_v → 0, this expression reduces to the flat road relation Y = e_v Z_0/(v − e_v α).
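The following is a minimal sketch of this projection model under the assumptions stated above (constant curvatures, small angles, angles in radians). The intrinsic parameters e_u, e_v and the camera height Z_0 used as defaults are illustrative assumptions, not values from the paper.

```python
import numpy as np

def road_distance(v, alpha, Cv, Z0, e_v):
    """Distance Y along the road axis imaged at ordinate v,
    accounting for the vertical curvature Cv."""
    a = (v - e_v * alpha) / (e_v * Z0)
    if abs(Cv) < 1e-9:
        return 1.0 / a                 # flat-road limit: Y = e_v*Z0/(v - e_v*alpha)
    return (np.sqrt(a**2 + 2.0 * Cv / Z0) - a) * Z0 / Cv

def side_abscissa(v, Xl, lam, e_u=800.0, e_v=800.0, Z0=1.4):
    """u = f(v, Xl) of eq. (1) for one roadside: lam = -1 (left), 0 (right)."""
    X0, L, psi, alpha, Ch, Cv = Xl
    Y = road_distance(v, alpha, Cv, Z0, e_v)
    return e_u * ((X0 - lam * L) / Y - Ch * Y / 2.0 - psi)
```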

B. Training

The training stage provides the mean value and the covariance matrix of X = (X_d, X_l)^t:

X = (X_d, X_l)^t  with  X_d = (u_{1l}, ..., u_{nl}, u_{1r}, ..., u_{nr})^t  and  X_l = (X_0, L, ψ, α, C_h, C_v)^t.

Starting from eq. (1), we are able to compute the abscissas u_{il} and u_{ir} for the ordinates v_i knowing the parameters X_l:

u_i = f(v_i, X_l)  with  f = f_l if i ∈ [1, n]  and  f = f_r if i ∈ [n + 1, 2n].

We are also able to compute the covariance matrix C_X of X knowing C_{X_l} and X_l: C_X = J_f C_{X_l} J_f^t, where J_f is the Jacobian matrix of the function f. C_{X_l} is chosen diagonal and each term σ_i^2 on its diagonal is the variance of a 3D parameter, fixed once and for all. Here we used σ_{X_0} = 1.75 m, σ_L = 0.25 m, σ_ψ = 6°, σ_α = 1°, σ_{C_h} = 0.005 m^{−1} and σ_{C_v} = 0.001 m^{−1}. So, knowing the mean parameters X_l(0), we can obtain the mean parameters X(0) of X and hence X_d(0). The same holds for the covariance matrix C(0), which provides the matrix C_d(0) used by the recognition process.
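A hedged sketch of this training stage follows, reusing side_abscissa from the sketch above. The paper uses the analytic Jacobian J_f; here a finite-difference Jacobian stands in for it, and X_l(0) is expected as a numpy array of the six parameters (angles in radians).

```python
import numpy as np

def project(Xl, V):
    """f: stack the left then right abscissas for the n training ordinates V."""
    left = [side_abscissa(v, Xl, lam=-1) for v in V]
    right = [side_abscissa(v, Xl, lam=0) for v in V]
    return np.array(left + right)

def train(Xl0, V,
          sigmas=(1.75, 0.25, np.deg2rad(6), np.deg2rad(1), 0.005, 0.001)):
    """Return (Xd(0), CXd(0)): mean image model and propagated covariance."""
    CXl = np.diag(np.asarray(sigmas) ** 2)       # sigma values of §IV-B
    Xd0 = project(Xl0, V)
    Jf = np.zeros((Xd0.size, Xl0.size))          # finite-difference Jacobian of f
    for j, eps in enumerate(1e-6 * np.maximum(np.abs(Xl0), 1.0)):
        dX = np.zeros_like(Xl0)
        dX[j] = eps
        Jf[:, j] = (project(Xl0 + dX, V) - Xd0) / eps
    return Xd0, Jf @ CXl @ Jf.T                  # CXd(0) = Jf CXl Jf^t
```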

C. 3D parameters estimation

Once an estimation of the roadside locations for a given image k is achieved, the vector X_d(k|k) is available with its covariance matrix C_{X_d}(k|k). In order to compute the 3D parameters (vector X_l(k|k)) for this image, we update the model vector X(k|k) in the following way:

X(k|k) = X(0) + K [X_d(k|k) − H X(0)]
C_X(k|k) = C_X(0) − K H C_X(0)
K = C_X(0) H^t [H C_X(0) H^t + C_{X_d}(k|k)]^{−1}

where:
• H is such that \hat{X}_d = H X + w,
• X_d(k|k): result vector of the road detection process,
• C_{X_d}(k|k) = E[w w^t]: covariance matrix of X_d(k|k).

Knowing X = (X_d, X_l)^t, we can easily obtain the 3D parameters and especially locate the vehicle on its lane.
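A compact sketch of this one-shot linear update is given below. Since X stacks X_d first and X_l last, H simply selects the 2n image components; the function names and argument layout are assumptions for illustration.

```python
import numpy as np

def estimate_3d(X_mean, CX, Xd_kk, CXd_kk, n2):
    """Update X = (Xd, Xl) from the recognized image vector Xd(k|k).
    n2 = 2n is the number of image abscissas; the tail of X holds Xl."""
    H = np.hstack([np.eye(n2), np.zeros((n2, X_mean.size - n2))])
    K = CX @ H.T @ np.linalg.inv(H @ CX @ H.T + CXd_kk)
    X_kk = X_mean + K @ (Xd_kk - H @ X_mean)
    CX_kk = CX - K @ H @ CX
    return X_kk, CX_kk           # Xl(k|k) = X_kk[n2:], with covariance CX_kk[n2:, n2:]
```

Because H is linear, the correlations learned during training carry the image evidence directly into the 3D block of the covariance, which is what keeps this reconstruction stable without any differential minimisation.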



D. Improvement of X_0 estimation for lane tracking

Even if these 3D parameters are estimated accurately (see the results below), the accuracy of X_0 remains a problem when the method is applied in a lane tracking system. Indeed, the important parameter in this case is the distance between the vehicle and the nearest lane side. However, the X_0 parameter is defined with respect to the right side of the lane. So, without the lane width, the distance between the vehicle and the left side cannot be estimated. This is the reason why we compute, for each image,

the lane width. Nevertheless, if the right side cannot be detected with a sufficient number of points, neither X_0 nor L can be estimated accurately. In order to solve this problem, we added to our 3D vector X_l the parameter X_{0l} (distance between the vehicle and the left side); X_0 then becomes X_{0r}. We now have:

X_l = (X_{0l}, X_{0r}, L, ψ, α, C_h, C_v)^t

Since L = X_{0l} + X_{0r}, the covariance matrix is block diagonal, C_{X_l} = diag(B, σ²_ψ, σ²_α, σ²_{C_h}, σ²_{C_v}), with the 3×3 block

B = [ σ²_{X_{0l}}       σ_{X_{0l}X_{0r}}   σ_{X_{0l}L} ]
    [ σ_{X_{0l}X_{0r}}  σ²_{X_{0r}}        0           ]
    [ σ_{X_{0l}L}       0                  σ²_L        ]

where:

σ²_{X_{0l}} = σ²_{X_{0r}} + σ²_L
σ_{X_{0l}X_{0r}} = −σ²_{X_{0r}}
σ_{X_{0l}L} = σ²_L

The 3D estimation stage then gives both X_{0l} and X_{0r} together with the associated covariances.
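The sketch below builds this augmented covariance from the relations above, which follow from X_{0l} = L − X_{0r} with L and X_{0r} independent. The default sigma values reuse those of §IV-B (with σ_{X_0} taken for X_{0r}), which is an assumption for illustration.

```python
import numpy as np

def augmented_cov(s_X0r=1.75, s_L=0.25, s_psi=np.deg2rad(6),
                  s_alpha=np.deg2rad(1), s_Ch=0.005, s_Cv=0.001):
    """Covariance of Xl = (X0l, X0r, L, psi, alpha, Ch, Cv)."""
    C = np.diag([s_X0r**2 + s_L**2,   # var(X0l) = var(X0r) + var(L)
                 s_X0r**2, s_L**2,
                 s_psi**2, s_alpha**2, s_Ch**2, s_Cv**2])
    C[0, 1] = C[1, 0] = -s_X0r**2     # cov(X0l, X0r), since X0l = L - X0r
    C[0, 2] = C[2, 0] = s_L**2        # cov(X0l, L)
    return C
```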

E. Updating the vertical location of the WOI's

We have seen previously that the v_i ordinates are assumed to be known in order to achieve the training stage (see §IV-B). In previous versions of the algorithm, these ordinates were fixed once and for all. Nevertheless, since the first ordinate v_1 must be lower than the vanishing point whatever the conditions, it was not possible to estimate the roadside locations at large distances. In this new version, we update the v_i ordinates according to the current vertical curvature in order to increase the analysis distances when possible. For this purpose, model (1) is used to compute the ordinate v_1 corresponding to a fixed distance Δu between the left and right roadsides in the image. With Y_1 = e_u L/Δu the corresponding 3D distance, v_1 is deduced by:

v_1 = e_v α + e_v [ Z_0 Δu/(e_u L) − C_v e_u L/(2Δu) ]

The other ordinates v_2 to v_n are then distributed in such a way that the 3D distance ΔY between consecutive v_i is constant. These ordinates compose the vector V = (v_1, v_2, ..., v_n)^t.
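A sketch of this ordinate updating follows, consistent with the distance model of §IV-A: v_1 comes from the farthest analysed distance Y_1 = e_u L/Δu, and the remaining ordinates are spaced at a constant 3D step. The values of Δu, n, Y_near and the camera constants are illustrative assumptions.

```python
import numpy as np

def ordinate_of(Y, alpha, Cv, Z0=1.4, e_v=800.0):
    """Inverse of the distance model: image ordinate of a point at distance Y."""
    return e_v * alpha + e_v * Z0 / Y - e_v * Cv * Y / 2.0

def update_ordinates(alpha, Cv, L, n=10, delta_u=20.0,
                     Z0=1.4, e_u=800.0, e_v=800.0, Y_near=10.0):
    """V = (v1 ... vn): v1 at the farthest distance, then constant 3D step dY."""
    Y1 = e_u * L / delta_u                   # distance where the lane spans delta_u pixels
    Ys = np.linspace(Y1, Y_near, n)          # constant dY between consecutive ordinates
    return np.array([ordinate_of(Y, alpha, Cv, Z0, e_v) for Y in Ys])
```

On a sag curve (C_v such that the road rises ahead), Y_1 and hence the analysis distance increase, which is exactly the effect exploited to push the recognition further away when the geometry allows it.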

F. Algorithm evolution and 3D tracking

We have seen in §IV-C the principle of the 3D parameter estimation for a given image k. The 3D parameters for image k + 1 can easily be predicted by Kalman filtering using the vehicle motion. Once this prediction is achieved, it is interesting to adapt the search space around these new parameters in order to (1) improve the computation time and (2) improve the road recognition quality, since the WOI's will be smaller and the signal to noise ratio therefore better. It is important to notice that this stage is not mandatory in our approach: our algorithm is able to recognize the road in a single image. This possibility is very interesting in case of tracking loss (due to difficult situations such as dazzle or significant occlusion of the lines).
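The paper states that the prediction uses the vehicle motion but does not detail the evolution model, so the sketch below only illustrates the idea with an assumed simple model: the parameters are kept constant except for the lateral positions, which drift with the steer angle, and process noise widens the search space. Speed, time step, noise levels and sign conventions are all assumptions.

```python
import numpy as np

def predict_3d(Xl, CXl, speed=30.0, dt=0.04):
    """One assumed evolution step for Xl = (X0l, X0r, L, psi, alpha, Ch, Cv)."""
    X0l, X0r, L, psi, alpha, Ch, Cv = Xl
    dx = speed * dt * np.sin(psi)            # lateral drift during dt (sign depends
                                             # on the steer-angle convention)
    Xl_pred = np.array([X0l + dx, X0r - dx, L, psi, alpha, Ch, Cv])
    Q = np.diag([0.05, 0.05, 0.01, 0.01, 0.005, 1e-4, 1e-4]) ** 2  # process noise
    return Xl_pred, CXl + Q                  # widened search space for image k+1
```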

So, taking this 3D parameter prediction for image k + 1 into account, it is necessary to update the ordinates vector V = (v_1, v_2, ..., v_n)^t (see §IV-E). This vector is used to achieve a new training stage computing X_d(k + 1|k) and C_{X_d}(k + 1|k), which are necessary for the recognition stage of image k + 1. The new algorithm evolution is detailed in figure 1.

Fig. 1. Algorithm evolution. [The loop is: initial model (X_l(0), C_{X_l}(0)) → computation of the ordinates V(0) → training (X_d(0), C_{X_d}(0)) → roadsides recognition (X_d(k|k), C_{X_d}(k|k)) → 3D estimation (X_l(k|k), C_{X_l}(k|k)) → 3D evolution (X_l(k + 1|k), C_{X_l}(k + 1|k)) → updating of the ordinates V(k + 1) → training (X_d(k + 1|k), C_{X_d}(k + 1|k)) → k = k + 1.]

Fig. 3. Typical images of the sequence (left: original image, right: road recognition result).

Fig. 4. Horizontal and vertical profiles of the road axis (sequence 1). [Horizontal profile: lateral excursion within ±150 m over 0-450 m; vertical profile: elevation within ±10 m over 0-450 m.]

V. Parameters estimation results

A. Algorithm evolution over a whole sequence

We have applied our algorithm to a sequence of 1850 images. The horizontal camera aperture was 10.8°. Only the horizontal and vertical curvature estimations are presented, in figure 2. Figure 3 shows typical images of the sequence. The recognition distance depends on the v_1 values (updated for each image) and ranges from 60 m (image #700) to 150 m (image #1800).

Fig. 2. Horizontal and vertical curvatures estimation. [Horizontal curvature within ±0.001 m⁻¹ and vertical curvature within ±0.0004 m⁻¹, plotted over images 0-1800.]

B. Performances on simulated data

In order to validate the approach, we have used a road simulation on which a vehicle is driven. The main advantage of such a simulator is that it is possible to impose the 3D parameters (vehicle location, camera tilt, road curvatures, etc.) and observe the behaviour of the algorithm. The recognition stage is assumed to be error free. The first experiment is carried out on a road whose axis profiles (both horizontal and vertical) are shown in figure 4. An image of the sequence is shown in figure 5. We have used clothoidal shapes for both the horizontal and vertical profiles (though our algorithm assumes constant curvatures for a given image). We have imposed tight curvatures (radii of 250 m and 500 m for the horizontal and vertical profiles respectively).

Figure 6 presents the estimations obtained. We have applied a 2° tilt perturbation (the mean value of α is 5°) at distances between 10 m and 40 m. First, we can see that the vertical curvature estimation is not very sensitive to the variation of the angle α (though at short distances these two parameters can be highly correlated). We can notice that at distances beyond 250 m the algorithm anticipates the curvature value by about 10 m. The horizontal curvature is not sensitive to the α variation, and here too the algorithm anticipates the real curvature. The third graph shows the errors on both X_0 (vehicle location) and L (road width). These parameters are constant (X_0 = 1.75 m and L = 3.5 m) and the estimation errors are shown. In all cases, the L errors are less than 5 cm. The L estimation is slightly sensitive to α but not very sensitive to the curvature variations. The X_0 estimation seems to be very sensitive to the horizontal curvature but not to the vertical one. In fact, this effect is due to the roll angle θ, which is shown in the last graph and chosen as a linear function of C_h. In spite of combined high θ and C_h, the X_0 errors remain lower than 10 cm. Finally, we present the α (tilt angle) and ψ (steer angle) estimations. α is slightly underestimated during its variation (due to the curvature correlation) and its error is about 0.5°. The ψ estimation is slightly sensitive to the horizontal curvature variation, but its error remains very low whatever the conditions (0.5°).

The second experiment is carried out on a road whose axis profiles (both horizontal and vertical) are shown in figure 7 (see the image in figure 5). It is a much more difficult case since the curvature and α variations are combined together.

Fig. 5. Road projections examples. Left: image of seq. 1 at distance 200 m; right: image of seq. 2 at distance 250 m.

Fig. 6. Simulation results on sequence 1. [Graphs over distance (50-450 m): vertical curvature (estimated vs. real, ±0.002 m⁻¹), horizontal curvature (estimated vs. real, ±0.01 m⁻¹), X_0 and L errors (±0.15 m), α (tilt) angle estimation vs. real α (4-8°), and ψ estimation with the θ (roll) angle (−0.5 to 3°).]

Fig. 8. Simulation results on sequence 2. [Same graphs as figure 6, for the second sequence.]

This sequence shows the behaviour of the 3D parameter estimation process in complex situations. We can notice that in spite of high curvature values and correlation between parameters, the estimations remain acceptable over the whole sequence (in particular the X_0 and L estimations, which are the important parameters for which accuracy is most often needed).

C. Performances on real data

We have evaluated the accuracy of our algorithm on real images. It is rather difficult (and dangerous) to measure the real vehicle location or road width on highways. We therefore used a section of highway closed to traffic (with Cofiroute and PSA agreement). However, it was not possible to measure the other parameters (such as steer angle, road curvatures, camera tilt, etc.). The images come from a digital FireWire camera providing 640×480 pixel images, with remote control (especially of the focal length, for our purposes). We stopped our experimental vehicle VELAC (see [14], [15]) 29 times. During each pause and for 10 different focal lengths, the algorithm estimated the vehicle location and the lane width. Results for only 3 focal lengths are presented here. Figure 9 shows, for example, the 3 images of the 9th pause with the corresponding apertures φ.

Fig. 7. Horizontal and vertical profiles of the road axis (seq. 2). [Same axes as figure 4.]

Fig. 9. Typical images from the 9th position with different apertures: φ = 23.6°, φ = 20.7° and φ = 15.5°.

Figure 10 shows the results obtained for a camera aperture φ = 23.6°. On the first graph, we can see the errors obtained on both X_{0l} and X_{0r} (see §IV-D). Large errors occur in particular cases. Image #5 shows a case where the left side is not easily visible; X_{0l} is thus not very accurate here, due to the lack of detections near the vehicle. A similar case is shown at image #18 for the other side. However, in standard cases, the errors remain lower than 5 cm. We also computed the best estimate between X_{0l} and X_{0r} by taking the one with the lower variance obtained from the 3D estimation process. This estimate is also given in the first graph (fig. 10); we can see that whatever the side used, the error is very acceptable. The next graph shows the errors obtained on the road width estimation. Here too, image #18 involves high errors on this parameter, but in all the other cases the error is lower than 5 cm.

Figure 11 shows the results obtained for a φ = 20.7° camera aperture. The same vehicle positions are used, and difficult images (position #22 is very difficult, see fig. 12) involve high errors on both the X_{0l} and X_{0r} parameters. Nevertheless, the L errors are acceptable over the whole sequence.

Fig. 10. Results obtained for zoom 5 (φ = 23.6°). [Top: X_{0l} and X_{0r} errors and best choices (±0.3 m) vs. image number, with the corresponding images #5 and #18; bottom: L errors (±0.1 m).]

Fig. 11. Results obtained for zoom 6 (φ = 20.7°). [Top: X_{0l} and X_{0r} errors and best choices (±0.5 m) vs. image number, with the corresponding image #9; bottom: L errors (±0.08 m), with image #16.]

Figure 12 shows the results obtained for a camera aperture φ = 15.5°. Image #22 of course involves large errors.

Fig. 12. Results obtained for zoom 8 (φ = 15.5°). [Top: X_{0l} and X_{0r} errors and best choices (±0.2 m) vs. image number, with the corresponding image #22; bottom: L errors (±0.2 m), with image #24.]

VI. Conclusion and future works

We have described in this paper an algorithm able to compute the 3D parameters of a road scene using images provided by an on-board monocular monochrome camera. Our algorithm has been designed to reliably recognize the lane sides of a marked highway and to reliably compute the road curvatures and the vehicle location as well. We have evaluated the ability of the algorithm to estimate the vehicle location with respect to each lane side, the lane width, the tilt and steer angles, and the horizontal and vertical curvatures. The algorithm uses a nonlinear model taking each of these parameters into account. The updating of this model is original since it is achieved in a linear way. Results are presented for both simulated and real cases and show the reliability of the approach. In particular, we show that the estimation errors on the road width and the vehicle location are lower than 5 cm in standard situations.

Future work will focus on the possibility for the algorithm to take other road scene actors (road signs, vehicles, etc.) into account in a unified system. Indeed, several classical approaches are able to recognize these objects separately, but our goal here is to combine their recognition. For this purpose, the algorithm has been implemented in our experimental vehicle VELAC and we are about to use several sensors simultaneously (RADAR, gyrometer, etc.) in the whole recognition process, in order to detect obstacles and thus improve the road recognition.

References

[1] Guiducci A. Parametric model of the perspective projection of a road with applications to lane keeping and 3d road reconstruction. Computer Vision and Image Understanding, 73(3):414–427, 1999.
[2] M. Bertozzi and A. Broggi. GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection. IEEE Transactions on Image Processing, pages 62–81, January 1998.
[3] DeMenthon D. and Davis L.S. Reconstruction of a road by local image matches and global 3d optimization. In IEEE International Conference on Robotics and Automation, pages 1337–1342, Cincinnati, May 1990.
[4] E.D. Dickmanns, R. Behringer, C. Brudigam, D. Dickmanns, F. Thomanek, and V. von Holt. An all-transputer visual autobahn-autopilot/copilot. In ICCV93, pages 608–615, 1993.
[5] E.D. Dickmanns and B.D. Mysliwetz. Recursive 3-d road and relative ego-state recognition. PAMI, 14(2):199–213, February 1992.
[6] W.E.L. Grimson and T. Lozano-Perez. Model-based recognition and localization from sparse range or tactile data. In RCV87, pages 382–414, 1987.
[7] F. Guichard and J.P. Tarel. Curve finder combining perceptual grouping and a kalman like fitting. In ICCV99, pages 1003–1008, 1999.
[8] Schneiderman H. and Nashman M. A discriminating feature based tracker for vision-based autonomous driving. IEEE Trans. on Robotics and Automation, 10(6), December 1994.
[9] Tarel J.P., Guichard F., and Aubert D. Tracking occluded lane-markings for lateral vehicle guidance. In IMACS/IEEE CSCC99, 1999.
[10] K. Kanatani and K. Watanabe. Reconstruction of 3d road geometry from images for autonomous land vehicles. IEEE Trans. on Robotics and Automation, 6:127–132, Feb. 1990.
[11] Chen K.H. and Tsai W.H. Vision-based autonomous land vehicle guidance in outdoor road environments using combined line and road following techniques. Journal of Robotics Systems, 14(10):711–728, 1997.
[12] C. Kreucher and S. Lakshmanan. A frequency domain approach to lane detection in roadway images. In International Conference on Image Processing, pages 31–35, Kobe, October 24–28 1999.
[13] Herman M., Nashman M., Tsai-Hong H., Shneiderman H., Coombs D., Gin-Shu Y., Raviv D., and Wavering A. Visual Navigation: From Biological Systems to Unmanned Ground Vehicles, chapter Minimalist Vision for Navigation. Lawrence Erlbaum Associates, Mahwah, 1997.
[14] Aufrère R. Reconnaissance et suivi de route par vision artificielle, application à l'aide à la conduite. PhD thesis, Université Blaise Pascal, Clermont-Ferrand (France), June 2001.
[15] Aufrère R., Chapuis R., and Chausse F. A dynamic vision algorithm to locate a vehicle on a non-structured road. International Journal of Robotics Research, 19(5):411–423, May 2000.
[16] Aufrère R., Chapuis R., and Chausse F. A fast and robust vision based road following algorithm. In IV'2000 (IEEE Int. Conf. on Intelligent Vehicles), pages 192–197, Dearborn (MI, USA), October 4–6 2000.
[17] Chapuis R., Potelle A., Brame J.L., and Chausse F. Real time vehicle trajectory supervision on the highway. International Journal of Robotics Research, 14(6):531–542, Dec. 1995.
[18] Chapuis R., Aufrère R., and Chausse F. Recovering a 3d shape of a road by vision. In 7th International Conference on Image Processing and its Applications (IPA99), volume II, pages 686–690, Manchester (UK), July 12–15 1999.
[19] Chapuis R., Aufrère R., Chausse F., and Alizon J. A fast and robust vision based road following algorithm. In IV'2001 (IEEE Int. Conf. on Intelligent Vehicles), pages 13–18, Tokyo (Japan), May 14–17 2001.
[20] Y. Wang, D. Shen, and E.K. Teoh. Lane detection using Catmull-Rom spline. In Intelligent Vehicles Symposium, volume 1, pages 51–57, Stuttgart, Germany, 28–30 October 1998.