Acronym DecPUCS
Name Decentralized Planning Under Uncertainty for Cooperative Systems
Funding Reference PTDC/EEA-ACR/73266/2006
URL http://decpucs.isr.ist.utl.pt/
Dates 2007-10 to 2010-09
Summary

In this project we will study planning under uncertainty for cooperative multi-agent systems. Developing intelligent robots or other real-world systems that plan and perform an assigned task is a major goal of Artificial Intelligence and Robotics. We will develop general methodology and algorithms, and tackle two case studies relevant to society: multi-robot urban search and rescue, and irrigation channel control.

Planning, or sequential decision making, is a crucial component of any intelligent system: how should a system act over time in order to perform its task as well as possible? When a system is part of a team, its performance also depends on the actions chosen by its teammates. An important aspect of decision making in a real-world system is that it must be able to deal with uncertainty from numerous sources. For instance, a major source of uncertainty for a robot is its sensors, which are often noisy and provide only a limited view of the environment. A robot is also often uncertain about the effect that executing an action has on its environment. A third source of uncertainty is an agent's teammates, as, in general, an agent cannot predict with full certainty which actions its teammates will perform. Furthermore, one has to consider the communication abilities available to each system and restrictions such as limited bandwidth or unreliable networks.
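
To make the treatment of sensor and actuation noise concrete, a single decision maker in such a probabilistic setting typically maintains a belief state, a probability distribution over the possible world states, and updates it with the standard Bayesian filter used in POMDPs (a textbook formula, not specific to this project; here $T$ denotes the transition model, $O$ the observation model, and $\eta$ a normalizing constant):

\[ b^{a,o}(s') = \eta \, O(o \mid s', a) \sum_{s \in S} T(s' \mid s, a) \, b(s) . \]

The action noise enters through $T$ and the sensor noise through $O$, so both are handled in a single principled update of the agent's state estimate.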

We will develop algorithms that allow systems to handle uncertainty in sensors, actuators, communication, and teammate behavior in a principled way. We capture uncertainty in probabilistic models, which allows us to formulate the sequential decision-making problem as a centralized or decentralized partially observable Markov decision process (POMDP). Decentralized POMDPs (DEC-POMDPs) form a general framework for representing cooperative planning problems under uncertainty. In this project, we will focus on the following issues: (1) developing approximate planning algorithms for relevant subsets of the general DEC-POMDP model, (2) examining the tradeoff between centralized and decentralized planning algorithms, and (3) tackling various communication models. Furthermore, we will demonstrate how these techniques can be applied in the two case studies.
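
For reference, the DEC-POMDP model referred to above is commonly written (see, e.g., [4] below) as a tuple

\[ \langle n, S, \{A_i\}, T, R, \{\Omega_i\}, O, h \rangle , \]

where $n$ is the number of agents, $S$ the set of environment states, $A_i$ the action set of agent $i$, $T(s' \mid s, \vec{a})$ the joint transition model, $R(s, \vec{a})$ the shared reward function, $\Omega_i$ the observation set of agent $i$, $O(\vec{o} \mid s', \vec{a})$ the joint observation model, and $h$ the planning horizon. Each agent must choose its actions based only on its own observation history, which is what makes the planning problem decentralized.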

Research Groups Computer and Robot Vision Lab (VisLab)
Intelligent Robots and Systems Group (IRSg)
ISR/IST Responsible
Matthijs Spaan
People
Pedro Lima
Luis Montesano
João Sequeira
Carlos Bispo
Publications
[1] M. T. J. Spaan, F. A. Oliehoek and N. Vlassis, "Multiagent Planning under Uncertainty with Stochastic Communication Delays", Proc. of ICAPS 2008 - the 18th International Conference on Automated Planning and Scheduling, Sydney, Australia, 2008
[2] F. S. Melo and M. Veloso, "Interaction-Driven Markov Games for Decentralized Multiagent Planning under Uncertainty", Proc. of AAMAS 2008 - the 7th International Conference on Autonomous Agents and Multiagent Systems, Estoril, Portugal, 2008
[3] F. A. Oliehoek, M. T. J. Spaan, S. Whiteson and N. Vlassis, "Exploiting Locality of Interaction in Factored Dec-POMDPs", Proc. of AAMAS 2008 - the 7th International Conference on Autonomous Agents and Multiagent Systems, Estoril, Portugal, 2008
[4] F. A. Oliehoek, M. T. J. Spaan and N. Vlassis, "Optimal and Approximate Q-value Functions for Decentralized POMDPs", Journal of Artificial Intelligence Research, Vol. 32, pp. 289-353, 2008