One of the major limitations of current robotic systems is their limited capability to perceive the surrounding space. This limitation bounds the complexity of the tasks they can perform and reduces their robustness while performing them.
By increasing their perceptual capabilities, these systems could react to environmental changes and still accomplish the desired tasks. For many living species, notably humans, visual perception plays a key role in behaviour. We rely intensively on our visual capabilities to move around in the world, track moving objects, handle tools, avoid obstacles, etc.
To improve the flexibility and robustness of robotic systems, this project aims to study and implement Computer Vision techniques for various tasks of Mobile Robotic Systems. The goal is to study not only visual perception techniques per se, but also to explore the intimate relationship between perception and the control of action: the Perception-Action cycle.
For many years, most research efforts on Computer Vision for Robotic Agents focused on recovering a symbolic model of the surrounding environment. This model could then be used by higher-level cognitive systems to plan the agent's actions, based on the pursued goals and the world state. This approach, however, has proved problematic in dynamic environments where unpredictable events may occur.
More recent approaches, which aim for robust operation in dynamic, weakly structured environments, consider a set of behaviours in which perception (vision, in this case) and action are tightly connected and mutually constraining, as in many successful biological systems. Visual information is therefore fed directly into the control systems of the different behaviours, leading to more robust performance, as sketched below.
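To make this tight coupling concrete, consider a target-tracking behaviour in which the image-plane position of the target directly drives the camera head, with no intermediate symbolic world model. The following is only a minimal sketch: the names (ImagePoint, PanTiltHead, tracking_step), the camera geometry, and the gain value are illustrative assumptions, not part of the project described here.

```python
# Hypothetical sketch: a tracking behaviour that closes the loop
# directly from image measurements to actuation, without a world model.

from dataclasses import dataclass

@dataclass
class ImagePoint:
    u: float  # horizontal pixel coordinate of the detected target
    v: float  # vertical pixel coordinate of the detected target

class PanTiltHead:
    """Stand-in for an agile camera head, driven by velocity commands."""
    def command(self, pan_rate: float, tilt_rate: float) -> None:
        print(f"pan_rate={pan_rate:+.3f} rad/s, tilt_rate={tilt_rate:+.3f} rad/s")

def tracking_step(target: ImagePoint, head: PanTiltHead,
                  cx: float = 320.0, cy: float = 240.0,
                  gain: float = 0.002) -> None:
    """One perception-action cycle: the image-plane error is fed
    directly to the actuators, keeping the target near the image centre."""
    error_u = target.u - cx  # horizontal offset from the image centre
    error_v = target.v - cy  # vertical offset from the image centre
    # Proportional control: command rates that shrink the image error.
    head.command(pan_rate=-gain * error_u, tilt_rate=-gain * error_v)

# Example: a target detected right of centre yields a command that
# rotates the camera to reduce the horizontal offset.
tracking_step(ImagePoint(u=400.0, v=250.0), PanTiltHead())
```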
Specifically, this project will consider an Architecture for Visual Behaviours for a mobile vehicle equipped with an agile camera. Each behaviour allocates only the perception and action resources strictly needed for its associated task, such as detecting and tracking a moving target, detecting interest points in the scene, docking to a specific point, detecting obstacles, navigating along corridors, self-localization, etc. The robustness and performance of the overall system emerge from the coordination, integration and competition between these visual behaviours.
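One plausible way to realize such coordination is to have each behaviour report an activation level alongside its action proposal, with an arbiter selecting among the competitors. The sketch below is purely illustrative: the Behaviour interface, the winner-take-all arbitration, the example behaviours, and all activation values are assumptions of this sketch, not the project's actual design.

```python
# Hypothetical sketch: competing visual behaviours arbitrated by activation.

from abc import ABC, abstractmethod

class Behaviour(ABC):
    """Each behaviour couples its own perception to its own action,
    using only the resources it strictly needs for its task."""

    @abstractmethod
    def activation(self, image) -> float:
        """How strongly this behaviour claims control (0 = inactive)."""

    @abstractmethod
    def act(self, image) -> tuple[float, float]:
        """Return a (linear, angular) velocity command for the vehicle."""

class CorridorFollowing(Behaviour):
    def activation(self, image) -> float:
        return 0.4  # e.g. confidence that corridor edges are visible

    def act(self, image) -> tuple[float, float]:
        return (0.3, 0.0)  # drive forward, centred between the walls

class ObstacleAvoidance(Behaviour):
    def activation(self, image) -> float:
        return 0.9  # e.g. rises sharply when an obstacle looms ahead

    def act(self, image) -> tuple[float, float]:
        return (0.0, 0.5)  # stop and turn away from the obstacle

def arbitrate(behaviours, image) -> tuple[float, float]:
    """Winner-take-all competition: the most strongly activated
    behaviour drives the vehicle. Blending the proposals or
    subsumption-style suppression are alternative schemes."""
    winner = max(behaviours, key=lambda b: b.activation(image))
    return winner.act(image)

# Example: with an obstacle looming, avoidance out-competes corridor following.
print(arbitrate([CorridorFollowing(), ObstacleAvoidance()], image=None))
```

In such a scheme, robustness emerges because no single behaviour must model the whole world: each handles one task well, and the arbiter resolves conflicts between them.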