Robotic Vision and Human-Robot Interaction

Slide 1 I am Olivier Gueudelot, and I am going to present the yearly project I have chosen: "Robotic Vision and Human-Robot Interaction".

Slide 2 As an introduction, I will remind you how much electronics and robotics have invaded our daily lives, and in what forms. Then I will present the main motivation and objective of my project. Finally, I will present the plan I intend to follow to accomplish this project.

Slide 3.1 An emerging domain that increasingly attracts the scientific community and computer scientists is what we call ambient intelligence. Ambient intelligence consists in sprinkling our daily environment (cars, trees, buildings, streets...) with small electronic devices that are autonomous, context-aware, and endowed with a certain degree of intelligence. Their uses are many, from prevention (fires, accidents) to assistance, by way of comfort. The examples are already numerous and will keep increasing: smoke detectors, automatic lighting, centralized heating, and the many gadgets in our cars: airbag, GPS, cruise control, etc.

Slide 3.2 At the same time, the market for service and assistance robots is growing. Their main goal is to make everyday life easier. We have, for example, robot vacuum cleaners that vacuum the house all on their own, or robot lawnmowers. To be really practical, these robots must be usable in a very intuitive manner by the humans who operate them. Research is moving in this direction, notably by developing robots able to see, analyze, and understand the actions and gestures of humans. This last aspect brings me to the context of my project.

Slide 4 The objective of my project is to develop a scene-analysis algorithm, so that a robot equipped with a camera can analyze and understand gestures made by a human and act accordingly, in a coordinated way.

Slide 4.1 To do that, I must first set up the cameras on the robots and make sure that they work properly. Next, I must catalogue the gestures the robot must be able to interpret (stop, go over there, come here, turn around, ...). Then I must implement a shape-recognition algorithm and combine it with a scene-analysis algorithm. The shape-recognition algorithm lets the robot locate the important objects to analyze (such as the hands or the mouth). The scene-analysis algorithm analyzes the movements of these objects; it must be able to interpret the gesture that was made, and act accordingly.
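As a toy illustration of how shape recognition and scene analysis could fit together, the sketch below finds a bright blob standing in for the hand, then reads its trajectory across frames as a gesture. The gesture names, the brightness threshold, and the centroid-based classifier are my own illustrative assumptions, not the project's actual algorithms.

```python
import numpy as np

def find_hand(frame, threshold=128):
    """Crude 'shape recognition': return the centroid (x, y) of the
    bright region, standing in for the detected hand."""
    ys, xs = np.nonzero(frame > threshold)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())

def classify_motion(centroids):
    """Toy 'scene analysis' of the hand trajectory: no net motion is
    read as 'stop', a mostly-horizontal sweep as 'go_there', a
    mostly-vertical one as 'come_here' (names are assumptions)."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    if dx < 5 and dy < 5:
        return "stop"
    return "go_there" if dx >= dy else "come_here"

# Synthetic frames: a bright 10x10 'hand' sweeping left to right.
frames = []
for step in range(5):
    f = np.zeros((100, 100), dtype=np.uint8)
    x = 10 + 15 * step
    f[45:55, x:x + 10] = 255
    frames.append(f)

track = [find_hand(f) for f in frames]
print(classify_motion(track))  # horizontal sweep -> go_there
```

In a real system the classifier would of course look at the whole trajectory rather than just its endpoints, but the division of labor is the same: shape recognition localizes the object per frame, scene analysis interprets the sequence.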

Slide 4.2 Another, optional, objective of the project is emotion interpretation. The robot analyzes the human's face in order to know whether he is happy or not, so that the robot knows whether the action it carried out was done well. The algorithms are exactly the same as for the scene analysis, but we analyze the face rather than the hands of the human.

Slide 6 To finish, I will present the approach I intend to follow to realize this project. This is the overall operation: the human makes some gestures; the video captured by the camera is transferred over wifi to a remote computer, where the scene-analysis algorithm is executed; the result of this analysis is an order, which is sent back to the robot.
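The capture, transfer, analyze, command loop described here can be sketched as a simple message flow. The sketch below stubs out the camera, the wifi link (plain in-process queues stand in for it), and the robot; all names and the trivial "analysis" are assumptions for illustration only.

```python
import queue

# In-process stand-ins for the two wifi links.
video_link = queue.Queue()    # camera -> remote computer
command_link = queue.Queue()  # remote computer -> robot

def robot_camera():
    """Stub camera: emits a few 'frames' (here, just labels)."""
    for frame in ["noise", "noise", "stop_gesture"]:
        video_link.put(frame)
    video_link.put(None)  # end of stream

def remote_analyzer():
    """Stub scene analysis on the remote computer: turns a
    recognized gesture into a predefined order for the robot."""
    while (frame := video_link.get()) is not None:
        if frame == "stop_gesture":
            command_link.put("HALT")
    command_link.put(None)

robot_camera()
remote_analyzer()

# The 'robot' drains its command link and executes the orders.
orders = []
while (order := command_link.get()) is not None:
    orders.append(order)
print(orders)  # ['HALT']
```

The point of the sketch is only the direction of the data: raw video flows one way, a small predefined order flows back the other way.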

Slide 7
1. The video captured by the camera is sent over wifi to the computer where the algorithms will be executed.
2. First, the video stream is split into several frames.
3. These frames are then processed, at first separately.
4. We start with a pre-treatment, using the operators of the Pandore library of the GREYC laboratory of this university. The goals of this pre-treatment are:
   1. to improve the picture to ease its analysis (for example, to remove noise);
   2. to extract the main objects of the picture. These objects are labeled.
5. Every labeled object is extracted in order to be analyzed by a shape-recognition algorithm. At the same time, we keep some data about each object for the later scene analysis.
6. The result of this process is stored.
7. Each sequence of labeled objects is analyzed by a scene-analysis algorithm, which should detect a known gesture.
8. The result of this process maps to a predefined action, which we send to the robot.
9. The robot executes the order!
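Steps 2 to 5 above can be sketched with simple numpy stand-ins. I do not assume the actual names of the Pandore operators, so the denoising (a 3x3 mean filter) and the connected-component labeling below are generic replacements for whatever the library provides.

```python
import numpy as np

def denoise(frame):
    """Pre-treatment stand-in: 3x3 mean filter to attenuate noise."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    out = np.zeros(frame.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + frame.shape[0],
                          1 + dx:1 + dx + frame.shape[1]]
    return out / 9.0

def label_objects(mask):
    """Extraction stand-in: label 4-connected foreground regions
    with a flood fill; returns the label map and the object count."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if (0 <= cy < mask.shape[0] and 0 <= cx < mask.shape[1]
                    and mask[cy, cx] and not labels[cy, cx]):
                labels[cy, cx] = current
                stack += [(cy + 1, cx), (cy - 1, cx),
                          (cy, cx + 1), (cy, cx - 1)]
    return labels, current

# Synthetic frame: two bright blobs plus one isolated noise pixel.
frame = np.zeros((40, 40), dtype=np.uint8)
frame[5:12, 5:12] = 200    # object 1 (e.g. a hand)
frame[25:32, 20:30] = 200  # object 2
frame[0, 20] = 255         # salt noise

mask = denoise(frame) > 100
labels, count = label_objects(mask)
print(count)  # 2: both blobs survive, the lone noise pixel is filtered out
```

Each nonzero label is then one object handed to the shape-recognition step, and the per-frame data kept about it (here, it could be the centroid) feeds the scene analysis of step 7.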