|Name|Visual Behaviors for Mobile Robots|
|Funding Reference|JNICT PBIC/C/TPR/2550/95|
One of the major limitations of current robotic systems is their limited ability to perceive the surrounding space. This limitation bounds the complexity of the tasks they can perform and reduces their robustness while those tasks are carried out.
By increasing their perceptual capabilities, these systems could therefore react to environmental changes and accomplish the desired tasks. For many living species, notably humans, visual perception plays a key role in behavior: we rely intensively on our visual capabilities to move around in the world, track moving objects, handle tools, avoid obstacles, etc.
To improve the flexibility and robustness of robotic systems, this project aims at studying and implementing Computer Vision techniques for various tasks of Mobile Robotic Systems. The goal is to study not only visual perception techniques per se, but also to explore the intimate relationship between perception and the control of action: the Perception-Action cycle.
For many years, most research on Computer Vision for robotic agents focused on recovering a symbolic model of the surrounding environment. This model could then be used by higher-level cognitive systems to plan the agent's actions, based on the pursued goals and the world state. This approach, however, has revealed many problems when dealing with dynamic environments where unpredictable events may occur.
More recent approaches, aiming at robust operation in dynamic, weakly structured environments, instead consider a set of behaviours in which perception (vision, in this case) and action are tightly connected and mutually constraining, much as in many successful biological systems. Visual information is thus fed directly into the control systems of the different behaviours, leading to more robust performance.
Specifically, this project considers an architecture of visual behaviours for a mobile vehicle equipped with an agile camera. Each behaviour allocates only the perception and action resources strictly needed for its associated task, such as detecting and tracking a moving target, detecting interesting points in the scene, docking at a specific point, detecting obstacles, navigating along corridors, self-localization, etc. The robustness and performance of the overall system emerge from the coordination, integration and competition between these visual behaviours.
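As a minimal sketch of this kind of coordination, the following Python fragment shows one common arbitration scheme (winner-take-all between behaviours, each coupling a perceptual cue to a motor command). All names here (`Behaviour`, `ObstacleAvoidance`, the percept keys, the gains) are illustrative assumptions, not the project's actual architecture or code.

```python
# Hypothetical behaviour-based arbitration sketch: each visual behaviour maps
# its own perceptual measurement to a motor command plus an activation level;
# a coordinator lets the most activated behaviour drive the vehicle.
from dataclasses import dataclass

@dataclass
class Command:
    linear: float   # forward velocity request
    angular: float  # steering / camera-pan velocity request

class Behaviour:
    """A visual behaviour couples a perceptual cue to a motor command."""
    def activation(self, percept: dict) -> float:
        raise NotImplementedError
    def command(self, percept: dict) -> Command:
        raise NotImplementedError

class ObstacleAvoidance(Behaviour):
    def activation(self, percept):
        # Activation grows as the nearest obstacle approaches (1 m horizon).
        d = percept.get("obstacle_distance", float("inf"))
        return max(0.0, 1.0 - d / 1.0)
    def command(self, percept):
        # Slow down and turn away from the obstacle side (+1 left, -1 right).
        side = percept.get("obstacle_side", 1.0)
        return Command(linear=0.1, angular=-0.8 * side)

class TargetTracking(Behaviour):
    def activation(self, percept):
        return 0.5 if "target_bearing" in percept else 0.0
    def command(self, percept):
        # Proportional steering toward the target bearing (radians).
        return Command(linear=0.3, angular=0.9 * percept["target_bearing"])

def arbitrate(behaviours, percept) -> Command:
    """Winner-take-all coordination: the most activated behaviour acts."""
    winner = max(behaviours, key=lambda b: b.activation(percept))
    return winner.command(percept)

# A nearby obstacle (0.4 m) out-activates tracking, so avoidance wins.
percept = {"target_bearing": 0.2, "obstacle_distance": 0.4, "obstacle_side": -1.0}
cmd = arbitrate([ObstacleAvoidance(), TargetTracking()], percept)
```

Winner-take-all is only the simplest choice; blending the commands weighted by activation would realize the cooperative side of the coordination described above.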
Computer and Robot Vision Lab (VisLab)
Etienne Grossmann, José Santos-Victor, Performance Evaluation of Optical Flow Estimators: Assessment of a New Affine Flow Method, Robotics and Autonomous Systems, Elsevier, vol. 21(1), 1997
Cesar Silva, José Santos-Victor, Robust Egomotion Estimation from the Normal Flow using Search Subspaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19(9), 1997
Alexandre Bernardino, José Santos-Victor, Visual Behaviours for Binocular Tracking, Robotics and Autonomous Systems, Elsevier, vol. 25(3-4), 1998
Alexandre Bernardino, José Santos-Victor, Sensor Geometry for Dynamic Vergence: Characterization and Performance Analysis, Workshop on Performance Characteristics of Vision Algorithms, ECCV'96, Cambridge, UK, 1996
Cesar Silva, José Santos-Victor, Geometric Approach for Egomotion Estimation using Normal Flow, 4th International Symposium on Intelligent Robotic Systems (SIRS'96), Lisbon, Portugal, 1996