BUILDING ENVIRONMENT MAPS USING NEURAL NETWORKS

N. Achour*, R. Toumi* and N.K. M'sirdi**

* Laboratoire de Robotique Parallélisme et Energétique (LRPE), Faculté d'Electronique et d'Informatique, USTHB, BP 32, Bab Ezzouar, El Alia, 16000, Algiers, Algeria. [email protected]
** Laboratoire de Robotique de Versailles (LRV), 10-12 Avenue de l'Europe, 78140 Vélizy, France. [email protected]
Abstract

In this paper, we present an efficient and cheap approach to building a sonar-based map for an autonomous mobile robot in indoor environments. The system uses one ultrasonic sensor (the emitter and receiver are separate) mounted on a motorisation consisting of two motors allowing horizontal and vertical scanning. Each reading is modelled as a probability profile projected onto a two-dimensional map. These readings provide information concerning empty and occupied areas within the sensor cone. The latter is the geometric representation of the sensor beam; it is composed of data allowing recognition of the environment structure. The world is modelled as a grid of cells. Two classifying neural networks are used to build the environment map: the first one drives robot navigation, scanning the workspace to obtain a database, which is then used by the second one to build the map. The result is a faithful representation of the real environment when the inputs are cones of data. The map can be used to plan paths and for navigation. Results from actual runs are presented at the end.

Keywords: Sonar map, ultrasonic sensor, mobile robot, neural networks.

1. Introduction

This paper presents an approach to the mapping problem for mobile robot path planning and navigation. The robot must have information about its surroundings, so we have to elaborate as accurate a model of its environment as possible. For environment sensing, we used an ultrasonic sensor. This type of sensor provides a cheap and reliable means of detection. Despite their shortcomings, such as poor directionality [1] and frequent misreadings, ultrasonic transducers have given many interesting results, not only in target localisation but also in navigation [2][3]. The sonar system is on board the mobile robot; it is based on a single ultrasonic sensor mounted on a rod which can move from right to left and from top to bottom and vice versa.
Using that system, we can obtain a panoramic scan of the environment from a given position, together with sets of range measurements which are processed and progressively incorporated into a two-dimensional sonar map. The system is well adapted to environments composed of specular surfaces such as smooth walls, bookcases, tables and chairs; a surface is specular if its roughness is much smaller than the incident wavelength [4][5]. All processing and display of the received data are performed on a PC via an interface. Our goal is to plan sets of readings to build maps of the robot's environment. Each reading is a probability that an area is empty or occupied. Using probabilities allows us to handle the problem of uncertainties and errors in the data; the idea is derived from Elfes [6][7]. The probabilities are projected onto a two-dimensional grid of cells [8] which represents the floor space. The map is built with a neural algorithm: two neural networks are used, the first one for robot navigation, scanning the workspace to collect a database which is used by a second classifying network to build the map. We have improved the emptiness and occupancy certainty values by using a hysteresis comparator.

The remainder of this article is organised as follows. Section 2 gives a short outline of the approach. Sections 3 and 4 describe respectively the sonar system and the sensor pattern. Section 5 relates the sonar mapping method. Section 6 shows some results from actual runs and discusses the limitations and advantages of our approach.

2. Approach
Our method starts with range measurements obtained with the following strategy. As the robot moves in the environment according to decisions given by NN1, the neural network for robot navigation, range measurements consisting of fans of cones ci, i = 1…n, are obtained from a mechanism based on a single ultrasonic sensor attached to a rod mounted on a motorisation which allows horizontal and vertical sweeping. Each sonar reading is interpreted as a probability and provides information about probably empty and occupied regions within the sensor cone, whose aperture angle is 20°. This interpretation is done by a second neural network, NN2. This information is projected onto a two-dimensional rasterised map, which is progressively updated during the scanning mode. The final map gives an accurate model of the environment.

3. The sonar system

It consists of an ultrasonic range finder, vertical and horizontal sweeping controls based on two stepper motors, an interface with a PC, and a power amplifier to obtain a long range (8 metres). The emitter circuit generates a signal able to excite the emitter transducer [9][10]. The receiving circuit uses the inverse piezoelectric effect. The received echo is a validation signal which allows the data transfer to the PC (Fig 1).

3-1 The range finder

It is composed of two circuits: the emitter and receiver circuits. Using two piezoelectric probes with the same resonance frequency (40 kHz), its operation is based on measuring the time which elapses between transmission and reception (time of flight, TOF). The reflected echoes are transformed into electric signals and then converted into distances. The emitter circuit consists principally of a power amplifier in order to obtain a long range; all the signal processing is handled by the receiver circuit.

[Figure 2 diagram: PC parallel port → command → motor circuit (M) → emitter/receiver → sensor.]

Figure 2: Experimental setup for the sensor

The sensor is interfaced to a 66 MHz 486 PC via the parallel port. The PC controls the firing of the transmitter, the on/off switching and the rotation direction of the motors. All processing and display of the received echo are performed on the PC (Fig 2). The sensor arrangement is very cheap in terms of hardware compared with other published systems [2][3][11].

3-2 The distance measurement

First, incoming data below Rmin (≈ 9.45 mm; this blind zone is one of the fundamental drawbacks of ultrasonic sensors) and above Rmax (the maximum sensor range) are rejected. Distances are measured by the TOF method, used before by Peremans [11] to classify planes and edges; corners and planes have been distinguished using the sensor movement. The emission-reception period is fixed to 100 ms and the emission duration to 1.5 ms. In order that the counter ticks every centimetre, the clock period must be 58.3 µs, which is given by:

T = 2x / c,  with x = 1 cm        (1)

Next, we determine the distance using the number N of pulses given by the counter (from the emission start to the first received echo):

TOF = T × N        (2)

d = c × TOF / 2        (3)

where the speed of sound is c = 343 m/s (at 20 °C) [12].

3-3 Environment scanning
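Equations (1)-(3) can be checked with a few lines of Python (an illustrative sketch; the constant and function names are ours):

```python
# Sketch of the distance computation of Section 3-2 (names are ours).
C = 343.0     # speed of sound in m/s at 20 degrees C [12]
X = 0.01      # desired resolution x = 1 cm, in metres

T = 2 * X / C  # clock period, equation (1): approximately 58.3 microseconds

def distance_from_pulses(n_pulses: int) -> float:
    """Return the measured distance in metres from the counter value N."""
    tof = T * n_pulses       # time of flight, equation (2)
    return C * tof / 2       # distance, equation (3)
```

Note that d = c·(2x/c)·N / 2 = x·N, so the counter does tick exactly one centimetre per pulse.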

The exploration of the environment is realised by the robot itself. The robot moves over almost all of the workspace, avoiding obstacles, while sweeping the ultrasonic sensor, which is placed at a height of one metre, from right to left and from top to bottom and vice versa. The sweeping movement is realised by two stepper motors. Readings are recorded when the angle between the acoustic axis and the horizontal is 0°, 90° or +180° for motor M2, at a given position of motor M1. The process is done twice at each position of the robot. The readings sometimes differ perceptibly, but the difference is most of the time less than 5 cm. Frequent misreadings are caused by specular reflections when the angle between the wavefront and the normal to a smooth surface is too large [1]; the obstacle may then go undetected or appear smaller than it really is. That is why the different sonar readings obtained at the same position are averaged. This enhances the occupancy and emptiness certainties and dissipates the fuzziness of the explored regions. We have also considered a threshold beyond which two neighbouring points are assigned to two different objects. This permits us to identify the free and occupied regions as well as possible; we will explain "as well as possible" subsequently. We can observe that the sonar sensor has an approximate total field of view of 20° to 25°. During the sweeping, the reflected echoes are displayed as points on the PC's screen, so one obstacle is represented as a set of points [Fig..].

4. The sonar pattern

The ultrasonic sensor has a conical field of view [1]. This cone is characterised by its aperture angle Ω. When one of the objects nearest to the sensor is detected, the main information is the radial measure of the distance.

In a two-dimensional projection of the conical field of view of an ultrasonic sensor (Fig 4), the aperture angle becomes ω, about 30°, with increasing energy content towards the acoustic axis [4]. Thus it is easier to detect obstacles which are close to the acoustic axis. The wide angle of the radiation lobes is responsible for the large uncertainties affecting the obstacle locations [3]. Because of this and other shortcomings of ultrasonic sensors [4][5][13][14], we have adopted a probabilistic approach to the interpretation of range data. The probability density functions measure our confidence that a point inside the cone beam is empty, and our uncertainty about the origin of the echo. These functions take into account the sensor characteristics such as the conical form of the beam and its sensitivity [15][16]. If we consider a point P (x, y, z) inside the sensor cone (Fig 4), it is characterised by its polar coordinates δ and θ, where:
- R: range measurement returned by the sonar sensor;
- ε: mean sonar deviation error;
- ω: sensor beamwidth;
- S (xs, ys, zs): position of the sonar sensor;
- δ: distance from P to S;
- θ: angle between the main axis of the beam and P as seen from S (the angular beam width in the far (Fraunhofer) zone).

The two variables δ and θ are independent because the two events E∆ = { δ ∈ ∆ } and EΘ = { θ ∈ Θ } are independent [17]. In the conical representation there are two regions:

- Probably empty regions, where δ < R − ε and −ω/2 ≤ θ ≤ ω/2, characterised by the probability PE = fE(δ, θ) of being empty. Applying the separation of variables, we can write:

PE(x, y) = PE(δ) · PE(θ)        (4)

where

PE(δ) = 1 − ((δ − Rmin) / (R − ε − Rmin))²   for δ ∈ [Rmin, R − ε]        (5)

and PE(δ) = 0 otherwise;

PE(θ) = 1 − (2θ / ω)²   for θ ∈ [−ω/2, +ω/2]        (6)

- Probably occupied regions:

This region includes points on the sonar beam front, δ ∈ [R − ε, R + ε] and −ω/2 ≤ θ ≤ +ω/2, with occupancy probability PO = fO(δ, θ) [15][7]. PO(x, y) = P(position (x, y) is occupied) can be written as the product of two independent functions:

PO(x, y) = PO(δ) · PO(θ)        (7)

where

PO(δ) = 1 − ((δ − R) / ε)²   for δ ∈ [R − ε, R + ε]        (8)

and PO(δ) = 0 otherwise;

PO(θ) = 1 − (2θ / ω)²   for θ ∈ [−ω/2, +ω/2]        (9)

These probability profiles are projected onto a two-dimensional horizontal grid of cells imposed on the area to be mapped. The grid is updated at each scan.
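Equations (4)-(9) can be read as two small functions. The sketch below is illustrative only; the function names are ours, and the numeric Rmin = 9.45 mm is taken from Section 3-2:

```python
# Sketch of the probability profiles of Section 4 (function names are ours).
R_MIN = 0.00945  # minimum sensor range Rmin in metres (Section 3-2)

def p_empty(delta, theta, R, eps, omega):
    """PE(x, y) = PE(delta) * PE(theta), equations (4)-(6)."""
    if not (R_MIN <= delta <= R - eps) or not (-omega/2 <= theta <= omega/2):
        return 0.0
    pe_d = 1 - ((delta - R_MIN) / (R - eps - R_MIN)) ** 2   # (5)
    pe_t = 1 - (2 * theta / omega) ** 2                     # (6)
    return pe_d * pe_t

def p_occupied(delta, theta, R, eps, omega):
    """PO(x, y) = PO(delta) * PO(theta), equations (7)-(9)."""
    if not (R - eps <= delta <= R + eps) or not (-omega/2 <= theta <= omega/2):
        return 0.0
    po_d = 1 - ((delta - R) / eps) ** 2                     # (8)
    po_t = 1 - (2 * theta / omega) ** 2                     # (9)
    return po_d * po_t
```

As expected, emptiness confidence is maximal at the sensor and occupancy confidence is maximal on the beam front at δ = R, both falling to zero towards the cone edges θ = ±ω/2.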

5- The map building

The robot scans the workspace and continuously updates the current map while it is exploring the environment. Two classifying feed-forward neural networks are used to address these issues: the first one, NN1, is assigned to navigation, while the map building is mainly achieved by the second one, NN2. As the robot moves, it scans the environment and updates the map at the same time. The start and goal positions are not imposed; the robot can start and stop at any position. The final point depends on whether the map is the most detailed one we can obtain. To move in a given environment, the robot operates according to the following strategy. From a starting position, it moves straight ahead until its sensory system

returns an echo from an obstacle. When the robot reaches a fixed safety distance allowing it to successfully detour around the obstacle, it turns 90° to the left and moves on. The robot can move along three directions: ahead, right and left. The scanning process provides a fan of cones which contain occupancy information for the swept areas. The sensor cone is represented by a vector of data; vectors corresponding to the same spatial situation must be similar, whereas distinct situations lead to different input vectors. Because of the limitations imposed on the robot, we keep only the cones for which the angle between the acoustic axis and the horizontal is 0°, +90° or 180°. The map building and the navigation task are done recursively until the most detailed map is obtained.

[Figure 4: The probability profiles representing the probably empty and occupied regions in the sonar beam — PE(δ) and PO(δ) plotted against δ between Rmin and R, and the cone geometry showing the point P with coordinates δ, θ and the parameters ε, ω/2, R, Rmin.]

Notice that the process of building the architectures of NN1 and NN2 is the same; in the following we explain the process for NN1 only and give short explanations for NN2.

5-1 The neural network NN1 for navigation

NN1's inputs correspond to sensor readings, represented as occupancy and emptiness probabilities of a given area. Whether the local region is occupied or empty is given by the outputs, which interpret the input data. Since the ultimate objective is to achieve an acceptable rate of good classification, we adopted the final architecture of NN1 once the variable parameters were judged to be optimal. This is done off-line after several tests. We started with a large neural network and then eliminated synaptic weights and hidden neurones until we obtained a minimum size while maintaining good performance. In this way, we started with 70 input neurones, corresponding to the contents of all the cells belonging to the cone interior (Fig..). We observed that with only four data values situated on the cone arc, we can declare that a region is occupied or empty. The final architecture is therefore composed of twelve input neurones, four for each of the three directions. The number of hidden layers and the number nh of neurones composing them are determined experimentally, taking into account the training results; notice that the number of iterations is proportional to 1/nh. Our final network is obtained after pruning the one with 12 hidden neurones: we eliminated certain synaptic weights using the complexity penalty defined in [18], which allows us to determine which weights wij are important to the BP learning process. The size of the training set is estimated strictly by experience; no theoretical formula gives good results, particularly those used in [19], and a big gap can be observed between the size actually needed and the sizes predicted by the formulas.

[Fig 8: A sensed object and the points within the sensor cone. The status of the cells within the sensor cone can be determined using only the contents of the four grey cells.]

[Figure 5: The network for navigation — input layer fed by Vector 1 (Ahead), Vector 2 (Left) and Vector 3 (Right), one hidden layer, and the output layer.]

For our part, we presented to the network a set of examples Ck, k = 1,…,8, in an order randomised from one epoch to the next. Each pattern is of the form (sA, sR, sL, oA, oR, oL), where (sA, sR, sL) are the twelve probability values emanating from the ahead, right and left sensor cones and (oA, oR, oL) are the components of a binary vector O (desired output) with a single non-zero component. Their interpretation is the following: oA = 1, move one step ahead; oL = 1, turn to the left and move one step ahead; oR = 1, turn to the right and move one step ahead. The desired input/output sets are given in Table 1.

Table 1: Training examples for NN1

Inputs                                            Outputs
Vector 1 (A)    Vector 2 (L)    Vector 3 (R)      A  L  R
0 0 0 0         0 0 0 0         0 0 0 0           0  0  0
0 0 0 0         0 0 0 0         P P P P           0  0  1
0 0 0 0         P P P P         0 0 0 0           0  1  0
0 0 0 0         P P P P         P P P P           0  1  1
P P P P         0 0 0 0         0 0 0 0           1  0  0
P P P P         0 0 0 0         P P P P           1  0  1
P P P P         P P P P         0 0 0 0           1  1  0
P P P P         P P P P         P P P P           1  1  1

A: Ahead; L: Left; R: Right; P: positive value

e.g. if the four cells of the cone arc contain positive values when the sensor/robot orientation is 0°, then the output must be 1 for ahead and 0 elsewhere; this means there is an occupied region ahead and nothing elsewhere. All the inputs are set to zero if there is no reflected echo (the way is free). Since we consider only the vector of probabilities situated on the cone arc, we deal with two sorts of values:
- null inputs, ei = 0, i ∈ [1, 12], if there is no obstacle;
- positive inputs, ei > 0, i ∈ [1, 12], if there is probably an obstacle.
We can see in Table 2 the following scenarios.
Scenarios 2, 3 and 5: the outputs are respectively (0, 0, 1), (0, 1, 0) and (1, 0, 0). In each case two possible directions occur. For example, (0, 0, 1) means that the ways ahead and on the left are free while the way on the right is obstructed by an obstacle. We had to favour one direction over the other; Table 3 shows the favoured directions (bold arrow).
Scenarios 4, 6 and 7: in these cases there is no contention; the displacement is towards the free way.
Scenario 1: the output is (0, 0, 0), meaning that there is no reflected echo in any of the three directions. We favour the ahead direction (Table 3).
Scenario 8: this is a special case which is not treated in the present paper.

Table 2: Spatial Situations Corresponding to the Output Vector

Outputs Tij
Scenario   A   L   R
T1j        0   0   0
T2j        0   0   1
T3j        0   1   0
T4j        0   1   1
T5j        1   0   0
T6j        1   0   1
T7j        1   1   0
T8j        1   1   1
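The way a training pattern is assembled from the three cone arcs can be sketched as follows (a hedged illustration: the helper name and the all-zero / all-positive cone convention are our reading of Table 1):

```python
# Sketch of how a NN1 training pattern is formed (Section 5-1; names are ours).
# Each direction contributes its four arc-cell probability values; the desired
# output flags a direction whose four arc cells all contain positive values.

def make_pattern(ahead, left, right):
    """ahead/left/right: lists of 4 arc-cell values. Returns (inputs, outputs):
    12 input values and the 3-component binary output vector (A, L, R)."""
    inputs = list(ahead) + list(left) + list(right)   # 12 input neurones
    outputs = [int(all(v > 0 for v in cone)) for cone in (ahead, left, right)]
    return inputs, outputs
```

With all-zero ahead and right cones and a positive left cone, this yields the scenario-3 output (0, 1, 0) of Table 2: an obstacle on the left only.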

Learning

We trained the network NN1 using (10) and (11), an improved version of the basic BP algorithm [20]:

∆wji(t + 1) = η δj(t) yi(t) + λ ∆wji(t)        (10)

wji(t + 1) = wji(t) + ∆wji(t + 1)        (11)

This is a time series with index t. λ is the momentum constant, fixed to 0.8; it controls the adjustment ∆wji applied to the synaptic weight connecting neurone i to neurone j. δj and yi are respectively the local gradient and the input signal of neurone j; the expression of δj depends on whether neurone j is an output or a hidden node. In order to speed up the rate of learning, while preventing the learning process from terminating in a local minimum of the error surface, we adopted an adaptive learning rate η. Another advantage of our approach is that the learning at the hidden layer and at the output layer proceeds at almost the same rate: the hidden-layer rate parameter ηH is always smaller than the output-layer rate parameter ηO because the local gradients δH are larger than those of the output layer, δO. We adopted the proposition suggested in [21] that, for a given neurone, η should be proportional to 1/I, I being the number of inputs of that neurone; thus, using the formula η = K/I with K = 2, we obtain ηH = 0.166 and ηO = 0.50.

Learning algorithm

After initialisation of the synaptic weights in a randomised manner, the learning process, carried out in a stochastic mode, proceeds as follows.

Step 1: Compute the instantaneous value of the error energy over all the neurones of the output layer, that is:

E(n) = (1/2) Σj=1..3 ej²(n) = (1/2) Σj=1..3 (yj − dj)²        (12)

If E(n) < E0, repeat the process with another pattern; otherwise go to step 2.
Step 2: Adjust the synaptic weights of the network in layer l according to (10) and (11).
Step 3: If the cost function (the global error) EG = (1/8) Σn=1..8 E(n) < EG0, then the learning process is finished.
E0 and EG0 are the error thresholds used to stop the learning process. Steps 1 and 2 are done for each training example (xi(n), yi(n)); steps 1, 2 and 3 are done for each epoch. The generalisation ability of NN1 is verified using test patterns drawn from the same data used to generate the training data. Readings are recorded in the three directions (ahead, right and left) from several different positions of the robot in the environment, and this did not alter its obstacle-avoidance aptitude.
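The learning procedure of equations (10)-(12) can be sketched as a small stochastic BP loop. This is an illustration, not the authors' implementation: the hidden size, the bias units, the value P = 0.5 for occupied cells, the epoch count and the synthetic Table-1-style patterns are our assumptions; the rates ηH = 0.166, ηO = 0.50 and λ = 0.8 follow the text.

```python
import math, random

random.seed(0)
N_IN, N_HID, N_OUT = 12, 12, 3
ETA_H, ETA_O, LAMBDA = 0.166, 0.50, 0.8   # rates and momentum from the text

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# Weight rows carry one extra column for a bias input fixed to 1.0 (our addition).
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)] for _ in range(N_OUT)]
dw1 = [[0.0] * (N_IN + 1) for _ in range(N_HID)]
dw2 = [[0.0] * (N_HID + 1) for _ in range(N_OUT)]

def forward(x):
    xb = list(x) + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w1]
    y = [sigmoid(sum(w * v for w, v in zip(row, h + [1.0]))) for row in w2]
    return h, y

def train_step(x, d):
    h, y = forward(x)
    xb, hb = list(x) + [1.0], h + [1.0]
    # local gradients: output layer, then hidden layer (back-propagated)
    delta_o = [(dj - yj) * yj * (1 - yj) for yj, dj in zip(y, d)]
    delta_h = [hi * (1 - hi) * sum(delta_o[j] * w2[j][i] for j in range(N_OUT))
               for i, hi in enumerate(h)]
    for j in range(N_OUT):                    # equations (10)-(11), output layer
        for i in range(N_HID + 1):
            dw2[j][i] = ETA_O * delta_o[j] * hb[i] + LAMBDA * dw2[j][i]
            w2[j][i] += dw2[j][i]
    for j in range(N_HID):                    # equations (10)-(11), hidden layer
        for i in range(N_IN + 1):
            dw1[j][i] = ETA_H * delta_h[j] * xb[i] + LAMBDA * dw1[j][i]
            w1[j][i] += dw1[j][i]

def total_error(data):                        # sum of E(n), equation (12)
    return sum(0.5 * sum((yj - dj) ** 2 for yj, dj in zip(forward(x)[1], d))
               for x, d in data)

# Eight Table-1-style patterns: each cone is all-zero (free) or all-P (occupied).
P = 0.5
patterns = [([P * a] * 4 + [P * l] * 4 + [P * r] * 4,
             [float(a), float(l), float(r)])
            for a in (0, 1) for l in (0, 1) for r in (0, 1)]

e_before = total_error(patterns)
for epoch in range(2000):
    random.shuffle(patterns)                  # randomised order per epoch
    for x, d in patterns:
        train_step(x, d)
e_after = total_error(patterns)
```

After training, the global error over the eight patterns should be well below its initial value, mirroring the stopping criterion EG < EG0 of step 3.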

5-2 The map building network NN2

As for NN1, the final architecture of NN2 was fixed when the performance was judged to be good: the input layer is composed of one neurone, a single hidden layer contains nine neurones, and six neurones compose the output layer (Fig 6). The data are real values representing the occupied and empty probabilities. We use the conventions:
[-1, 0[  empty
]0, 1]   occupied
0        unknown
The first two intervals are each subdivided into three classes: weakly, fairly and highly occupied (respectively empty). One output neurone among the six must be set to 1 and the others to 0 when the input belongs to the corresponding class (Table 3).

[Fig 6: The map building network — one input neurone, a hidden layer of nine neurones and six output neurones.]

Table 3: Training examples for NN2

Input              Outputs
-1.0 ≤ i < -0.6    0 0 0 0 0 1
-0.6 ≤ i < -0.3    0 0 0 0 1 0
-0.3 ≤ i <  0.0    0 0 0 1 0 0
 0.0 < i ≤  0.3    0 0 1 0 0 0
 0.3 < i ≤  0.6    0 1 0 0 0 0
 0.6 < i ≤  1.0    1 0 0 0 0 0
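The six classes of Table 3 amount to a small interval lookup. The sketch below is illustrative; the function name and the class numbering (0 = highly empty through 5 = highly occupied) are our reading of the table:

```python
# Sketch of the six NN2 certainty classes of Table 3 (names are ours).
BOUNDS = (-0.6, -0.3, 0.0, 0.3, 0.6)

def certainty_class(v):
    """Map a cell value v in [-1, 1] to its class index 0..5
    (0 = highly empty, ..., 5 = highly occupied); 0.0 itself is unknown."""
    if v == 0.0:
        return None                           # unknown, not a training class
    if v < 0:
        return sum(v >= b for b in BOUNDS)    # empty intervals closed on the left
    return sum(v > b for b in BOUNDS)         # occupied intervals closed on the right
```

For example, a cell content of 0.2 falls in the interval 0.0 < i ≤ 0.3, the weakly occupied class.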

A set of six classes of data is presented pattern by pattern in a randomised order. The outline of the learning algorithm is the same as the one used for NN1; for NN2, ηH = 0.22, ηO = 0.33 and the momentum λ = 0.8. Finally, to verify that the generalisation performance is adequate, a set of 39 patterns not shown to NN2 during training is presented to it in a randomised manner; these patterns come particularly from the class boundaries.

6. The grid updating

The mobile robot scans the workspace and updates the map while it is moving in the environment. The probability functions increase the contents of the cells belonging to the cone boundaries and interior. The cells are updated using the formula [22]:

p(o/M) = p(M/o)·pO / (p(M/o)·pO + p(M/E)·pE)        (13)

pO and pE are the cell contents before updating; p(o/M) is the probability of the cell being occupied given that the measurement M is available; p(M/o) is the probability of observing the measurement given that the cell is occupied, obtained using the sensor model. To calculate p(M/E), we use p(M/E) = 1 − p(M/o). In theory, according to the chosen conventions, when the cell content is negative the cell must be considered probably empty. In practice, because of misreadings, the content can suddenly become positive. Therefore, using a hysteresis comparator, the state transitions occupied → empty and empty → occupied are achieved in a smooth way: a cell remains empty while its certainty value stays below TO, then becomes occupied and remains in this state until its certainty value falls below TE (Fig 12). TO and TE are the occupancy and emptiness thresholds respectively, determined by experience: occupancy certainty is declared beyond a probability value of 0.25 and emptiness when it drops below 0.1. This is an effective way to discard the many state changes due to bad reflections.
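A minimal sketch of the cell update (13) combined with the hysteresis behaviour described above, assuming p(M/E) = 1 − p(M/o) as in the text; the function names and the state encoding are ours:

```python
# Sketch of the grid update (13) and the hysteresis of Fig 12 (names are ours).
T_O, T_E = 0.25, 0.10   # occupancy / emptiness thresholds from the text

def bayes_update(p_m_o, p_o):
    """Equation (13): posterior occupancy of a cell given measurement M.
    p_m_o = p(M/o) from the sensor model; p_o = prior cell occupancy."""
    p_e = 1.0 - p_o
    p_m_e = 1.0 - p_m_o                 # p(M/E) = 1 - p(M/o), as in the text
    return p_m_o * p_o / (p_m_o * p_o + p_m_e * p_e)

def hysteresis(state, certainty):
    """state: 'empty' or 'occupied'; certainty: occupancy value of the cell.
    A cell becomes occupied above T_O and empty again only below T_E."""
    if state == "empty" and certainty > T_O:
        return "occupied"
    if state == "occupied" and certainty < T_E:
        return "empty"
    return state
```

The two thresholds create a dead band between 0.1 and 0.25 in which a cell keeps its previous state, so an isolated bad reflection cannot toggle it back and forth.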


Figure 12: Transitions occupied → empty and empty → occupied of a cell content

7. Map representation

The floor space is modelled as a grid of H × H cells. The cells are square, of size ∆H × ∆V (∆H = ∆V). Each cell contains a value in the range [-1, 1] that encodes the sensor's measurement as a numerical certainty value. The latter is the information that there is probably something somewhere in the field of view and probably not everywhere [15]. A cell is considered unknown if no information concerning it is available. Before any measurements are made, the grid is initialised to 0.5, which represents an average occupancy certainty. Once incoming data arrive on the PC parallel port, we can visualise on the screen at any time the 2D and 3D representations of the map and the grid cell contents.

8. Examples of runs

The proposed approach has been tested on several environments. In the following we present two kinds of results. Fig 1 and Fig 2 give the two examples of runs: (a) shows a rectangular room with dividing walls, and (e) shows a rectangular room composed of dividing walls and many objects with different shapes. Each environment is mapped onto a two-dimensional grid of 75 × 45 square cells, 0.65 ft on a side. The trajectory generated by the robot is represented by a solid line, corresponding to one full scan in (a), starting at (10.5, 10.5), and to four full scans in (b), starting respectively at (10.5, 10.5), (25, 44), (72, 4) and (53, 4) (1 unit = 1 cell). In (b) and (g) we can observe the two final maps. Three symbols are used for each of the occupied and empty types, graded according to high, fair or low probability; white areas are assumed to be unknown. The 3D views of each map are given in (c) and (h): occupancy is modelled by peaks, emptiness by valleys and unknown regions by flat surfaces. Matrices of probability corresponding to a part of each environment are represented respectively in (d) and (i). The computational cost of the maps depends on the environment complexity.

Discussion

We can observe that for the first example, in (b), the obtained map is faithful to the real environment because the objects are parallel surfaces: most of the sound energy is reflected perpendicular to the surface and detected by the ultrasonic sensor, and only a small amount of energy is wasted in other directions. The surely occupied regions are represented by an occupancy probability PO close to one and a low emptiness probability PE, while empty areas are represented by low PO and PE. We can observe shades of emptiness because a cell's content PE depends on its own position within the sensor cone. The cells which are far from the walls are certainly empty, whereas it is not obvious to conclude whether the ones close to the walls are empty. Certainty increases with the number of scans. The final map (b) is highly explicit because PO and PE are close to one; only a small number of cells have a content close to zero (high fuzziness) or null (unknown). By combining the evidence from many measurements and using the sensor movements, we can distinguish the room corners well.

The second example is a cluttered environment composed of dividing walls and many objects of different shapes. The robot starts scanning at the position (10.5, 10.5) and moves in the environment according to the three allowed directions. (f) shows the state of the map when the environment has been completely visited. We can observe that the perpendicular wall in the middle of the room is correctly detected, while only a little information is obtained from surfaces which are tilted relative to the acoustic axis of the sensor, since only a small amount of energy is then detected. To improve the map density, the scanning process is repeated as many times as needed, starting each time at a different position chosen suitably by the operator so that the robot can scan more regions than the previous time. In (e) we can see the complete trajectory of the robot during four full scans starting respectively at the positions (10.5, 10.5), (25, 44), (72, 4) and (53, 4); the scanning process is stopped at the location (71, 16). Notice that a scan is full when the robot's trajectory is the same as the preceding one. The obtained map (g) is denser than the one in (f). It is clear that taking into account the information given by only three cones is partly responsible for the quality of the final map; at the same time, considering more cones would significantly increase the computational task. The neural networks have demonstrated their ability to generalise well, by making the input-output mapping continuous, and to cope with uncertainties and errors in the data.

9. Conclusion

We have described an interesting method for structured environment representation, together with a system that is cheap in terms of hardware, based on a single ultrasonic sensor allowing horizontal and vertical scanning. Environments containing walls, doors and rectangular objects are well represented.
By increasing the number of the robot's start positions, polygonal and circular objects can also be well represented. By integrating information coming from horizontal and vertical readings, we can extract the geometric boundaries of an object; corners and planes are distinguished using the sensor movements. By combining many readings we can make precise assertions about probably occupied and empty regions. Using a hysteresis comparator, we obtain clean occupied → empty and empty → occupied transitions of a grid cell content. The use of probability profiles to represent the sonar beam permits building moderately high-resolution spatial maps of a mobile robot's surroundings. Considering the sensor cone as a vector of data is very useful, since it allows us to treat identical vectors as the same spatial situation, whereas distinct situations lead to different vectors. The neural networks have proved a very useful tool in map building: first, they classified the sensor data well, taking the uncertainties into account; second, they showed their ability to learn local features characterising regions of the environment and then to generalise well, producing a correct input-output mapping on-line. The map can be used for several activities such as landmark recognition, navigation and path planning. Future work based on this method includes investigating better paths, including curves and tilted segments.

10. References

[1] J. Borenstein & Y. Koren, The vector field histogram - fast obstacle avoidance for mobile robots, IEEE Transactions on Robotics and Automation, vol. 7, no. 3, 1991, 278-287.
[2] L. Kleeman & R. Kuc, An optimal sonar array for target localization and classification, Proc. IEEE International Conference on Robotics and Automation, San Diego, USA, 1994, 3130-3135.
[3] A.M. Sabatini & O. Benedetto, Towards a robust methodology for mobile robot localization using sonar, Proc. ICRA, San Diego, CA, 1994, 3142-3147.
[4] J. Borenstein & Y. Koren, Obstacle avoidance with ultrasonic sensors, IEEE Journal of Robotics and Automation, vol. 4, no. 2, April 1988, 213-218.
[5] O. Bozma & R. Kuc, Building a sonar map in a specular environment using a single mobile sensor, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 12, 1991, 1260-1269.
[6] A. Elfes, A sonar-based mapping and navigation system, Proc. IEEE International Conference on Robotics and Automation, vol. 3, April 1986, 1151-1156.
[7] A. Elfes, Sonar-based real-world mapping and navigation, IEEE Journal of Robotics and Automation, vol. RA-3, June 1987, 249-265.
[8] H.P. Moravec, Certainty grids for mobile robots, Proc. of the NASA/JPL Space Telerobotics Workshop, vol. 1, 1987, 307-312.
[9] N. Achour & R. Toumi, Building an environment map using a sweeping system based on a single ultrasonic sensor, Proc. IEEE/ASME Int. Conf. on Advanced Intelligent Mechatronics, Italy, 2001, 1329-1333.
[10] N. Achour, N.K. M'sirdi & R. Toumi, Reactive path planning with collision avoidance in dynamic environments, Proc. 10th IEEE International Workshop on Robot and Human Interactive Communication, France, 2001, 62-67.
[11] H. Peremans, K. Audenaert & J.M. Van Campenhout, A high-resolution sensor based on tri-aural perception, IEEE Transactions on Robotics and Automation, vol. 9, 1993, 36-48.
[12] L.E. Kinsler, A.R. Frey, A.B. Coppens & J.V. Sanders, Fundamentals of Acoustics (Singapore: John Wiley & Sons, 1982).
[13] J. Budenske & M. Gini, Why is it so difficult for a robot to pass through a doorway using ultrasonic sensors, Proc. ICRA, San Diego, CA, 1994, 3124-3129.
[14] K. Demirli & I.B. Türksen, Sonar based mobile robot localization by using fuzzy triangulation, Robotics and Autonomous Systems, vol. 33, 2000, 109-123.
[15] H.P. Moravec & A. Elfes, High resolution maps from wide angle sonar, Proc. of the 1985 IEEE International Conference on Robotics and Automation, 1985, 116-121.
[16] A. Elfes, Sonar-based real-world mapping and navigation, IEEE Journal of Robotics and Automation, vol. RA-3, 1987, 249-264.
[17] H. Ventsel, Théorie des probabilités (Moscow: Mir Editions, 1973).
[18] A.S. Weigend, D.E. Rumelhart & B.A. Huberman, Generalization by weight-elimination with application to forecasting, Advances in Neural Information Processing Systems, vol. 3 (San Mateo, CA: Morgan Kaufmann, 1991), 875-882.
[19] V.N. Vapnik & A.Ya. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities, Theory of Probability and its Applications, vol. 17, 1971, 264-280.
[19] D.E. Rumelhart & J.L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1 (Cambridge, MA: MIT Press, 1986).
[20] D.E. Rumelhart, G.E. Hinton & R.J. Williams, Learning representations by back-propagating errors, Nature (London), vol. 323, 1986, 533-536.
[20] K.S. Narendra & K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Transactions on Neural Networks, vol. 1, 1990, 4-27.
[21] Y. LeCun, Efficient Learning and Second-order Methods, tutorial at NIPS, Denver, 1993.
[22] J.O. Berger, Statistical Decision Theory and Bayesian Analysis (Berlin: Springer-Verlag, 1985).