Action understanding in human and robot dyadic interaction

ACTICIPATE

Humans have fascinating skills for grasping and manipulating objects, even in complex, dynamic environments, and execute coordinated movements of the head, eyes, arms, and hands to accomplish everyday tasks. When working in a shared space during dyadic interaction tasks, humans engage in non-verbal communication: they understand and anticipate the actions of their working partners and couple their own actions to them in a meaningful way.
The key to this remarkable performance is two-fold: (i) a capacity to adapt and plan motion in response to unexpected events in the environment, and (ii) the use of a common motor repertoire and action model to understand and anticipate the actions and intentions of others as if they were our own. Despite decades of progress, robots are still far from the level of performance that would enable them to work with humans in routine activities.
ACTICIPATE addresses the challenge of designing robots that can share workspaces and co-work with humans. We rely on human experiments to learn a model/controller that allows a humanoid robot to generate and adapt its upper-body motion in dynamic environments during reaching and manipulation tasks, and to understand, predict, and anticipate the actions of a human co-worker, as needed in manufacturing, assistive and service robotics, and domestic applications.
These application scenarios call for three main capabilities that will be tackled in ACTICIPATE: (i) a motion generation mechanism (primitives) with a built-in capacity for instant reaction to changes in dynamic environments; (ii) a framework to combine primitives and execute coordinated movements of the head, eyes, arm, and hand in a way that is similar to, and thus predictable by, human movements, and to model the action/movement coupling between co-workers in dyadic interaction tasks; and (iii) the ability to understand and anticipate human actions, based on a common motor system/model that is also used to synthesize the robot's goal-directed actions in a natural way.
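As a purely illustrative sketch of capability (i), the snippet below shows how a motion primitive in the style of a dynamic movement primitive can re-target its goal mid-execution, which is one common way to obtain a built-in reaction to changes in a dynamic environment. The formulation, parameter values, and function names here are assumptions for illustration only, not the project's actual controller.

```python
# Minimal sketch (assumptions, not the ACTICIPATE implementation):
# a 1-D point-attractor "primitive" (DMP-style transformation system with
# a zero forcing term) that keeps converging to its goal even when the
# goal is moved mid-execution, i.e. built-in reaction to a moving target.

def simulate_primitive(y0, goal_schedule, duration=2.0, dt=0.001,
                       alpha=25.0, beta=6.25):
    """Integrate y'' = alpha * (beta * (g - y) - y') with a time-varying goal.

    goal_schedule: list of (time, goal) pairs; the latest pair whose time
    has already passed defines the current goal g(t).
    """
    y, yd = y0, 0.0
    trajectory = []
    t = 0.0
    while t < duration:
        # pick the most recent goal that is already active
        g = max((tg for tg in goal_schedule if tg[0] <= t), key=lambda tg: tg[0])[1]
        ydd = alpha * (beta * (g - y) - yd)
        yd += ydd * dt
        y += yd * dt
        trajectory.append((t, y, g))
        t += dt
    return trajectory

if __name__ == "__main__":
    # Start at 0, reach for 1.0, then the target jumps to 0.4 at t = 0.8 s.
    traj = simulate_primitive(y0=0.0, goal_schedule=[(0.0, 1.0), (0.8, 0.4)])
    for t, y, g in traj[::400]:
        print(f"t={t:.2f}s  y={y:+.3f}  goal={g:+.3f}")
```

Changing the goal mid-execution simply redirects the attractor; this only illustrates the idea of instant reaction to a changing environment, while any trajectory shaping learned from human data is beyond the scope of this sketch.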

Reference:
H2020-EU.1.3.2. – 752611
From: 2017-06
To: 2018-08
Funding: EUR 100,397.25
Funders: EU

Computer and Robot Vision Lab (VisLab)
