ACTICIPATE is a project by Mirko Rakovic and José Santos-Victor, funded under the H2020 Marie Curie programme and running for 15 months, from June 2017 until August 2018. Focusing on tasks that require motion, the research examines nonverbal communication, specifically handshakes and other arm movements, as well as human gaze. “When someone is looking at a certain point, that’s a very significant signal of intention and a factor that we need to understand much better.” Mirko Rakovic explained that with ACTICIPATE the goal is to learn from humans and then transfer that knowledge to robots.

“We try to always focus first on human-human interaction and human studies, in order to learn from that and build the controllers for the robot, so that it can behave more naturally when interacting with humans. There isn’t much literature on human-human interaction, and there’s a big lack of data on gaze and arm movements.”

One of the goals of the project was to perform at least two experiments with humans, focusing on interaction and on the use of objects that carry no particular meaning in that context. “Now we are interested in studying behaviour when interacting with objects that are meaningful to the ‘story’ as well, so taking object affordances into account. For instance, if there’s a cup with hot water inside and you notice that because of the evaporation, that will influence our interaction and how we signal shared attention.”

During the performance of an action, depending on what type of action it is, people exhibit different intentions through their gaze. The challenge lies in building models that allow the robot both to produce this kind of behaviour and to understand it in others. “We use our movements with each other in order to show our intentions, but we use the same set of capabilities, and also our experience, to read the intention of another person. It’s a bi-directional capacity.”
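
To make the gaze-reading side of this concrete, here is a minimal sketch in Python of how a gaze direction could be turned into a probability distribution over intended targets. It is purely illustrative and not from the project: the function name, the angular-error scoring, and the softmax temperature are all assumptions.

    import math

    # Illustrative sketch: infer which object a person intends to act on
    # from their gaze direction. The scoring scheme and the temperature
    # value are assumptions, not part of the ACTICIPATE project.

    def intention_scores(gaze_origin, gaze_dir, objects, temperature=0.2):
        """Score each candidate object by how closely gaze points at it.

        gaze_origin: (x, y, z) position of the eyes
        gaze_dir:    unit vector of the gaze direction
        objects:     dict mapping object name -> (x, y, z) position
        Returns a dict mapping object name -> probability.
        """
        angles = {}
        for name, pos in objects.items():
            to_obj = tuple(p - o for p, o in zip(pos, gaze_origin))
            norm = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
            cos = sum(d * c for d, c in zip(gaze_dir, to_obj)) / norm
            angles[name] = math.acos(max(-1.0, min(1.0, cos)))
        # Softmax over negative angular error: smaller angle, higher probability
        weights = {n: math.exp(-a / temperature) for n, a in angles.items()}
        total = sum(weights.values())
        return {n: w / total for n, w in weights.items()}

    # Example: gaze pointing roughly toward the cup rather than the knife
    scores = intention_scores(
        gaze_origin=(0.0, 0.0, 1.6),
        gaze_dir=(0.0, 0.707, -0.707),
        objects={"cup": (0.0, 0.5, 1.1), "knife": (0.4, 0.5, 1.1)},
    )
    print(scores)  # the cup receives most of the probability mass

A real model would fuse gaze with arm kinematics and context over time; this shows only the geometric core of the idea.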

Reading and showing intentions is essential for safe interaction between humans and robots. The more widespread this type of interaction becomes, the less control programmers have over external factors, so it is necessary to know as much as possible about which factors might influence an interaction, and how.
For a robot, any environment occupied by humans is immediately a complex one, given the uncertainty about what a human might do. A human and a robot sharing a workspace with graspable objects whose locations are not known with certainty demands a great deal of perceptual and reasoning capacity from the robot in order to understand the scene and act appropriately.

The idea is to use this framework to improve safety in human-robot interaction by understanding how people react, so that intentions are as clear as possible when potentially dangerous tools are in use. This could have important applications for robots in, for instance, manufacturing and assembly, where many hazardous tools are handled.

“Anything involving risk in a workplace shared by robots and humans is an important case study for understanding how they influence each other and preventing negative interactions. If a robot holding a knife looks towards the area where a person is, we need to program the robot so that it’s clear in its intentions and the scenario is as safe as possible.”
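
One very simple way to picture the kind of safeguard this implies is a speed gate that scales the robot’s motion by its distance to the nearest person. The sketch below is a toy illustration under assumed thresholds, not a mechanism described by the project.

    # Toy safety gate, purely illustrative: scale the robot's commanded
    # speed by the distance from its tool to the nearest tracked human.
    # The stop/slow-down thresholds are made-up assumptions.

    def speed_scale(tool_pos, human_positions, stop_dist=0.3, slow_dist=1.0):
        """Return a factor in [0, 1] to multiply the commanded speed by.

        tool_pos:        (x, y, z) of the tool tip, e.g. the knife
        human_positions: list of (x, y, z) positions of tracked humans
        """
        def dist(a, b):
            return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

        if not human_positions:
            return 1.0
        nearest = min(dist(tool_pos, h) for h in human_positions)
        if nearest <= stop_dist:
            return 0.0  # too close: stop completely
        if nearest >= slow_dist:
            return 1.0  # far away: full speed
        # Linear ramp between the stop and slow-down distances
        return (nearest - stop_dist) / (slow_dist - stop_dist)

    # A person 0.5 m from the knife tip cuts the speed to roughly 29%
    print(speed_scale((0.5, 0.0, 1.0), [(1.0, 0.0, 1.0)]))

In practice such a gate would be only one layer of protection; the point of legible behaviour in the ACTICIPATE sense is that the robot’s gaze and arm movement make its intentions readable before proximity ever becomes an issue.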

When asked which are simpler to understand, robots or humans, Mirko answered that it is always simpler to understand what you are better at. “From my perspective, it’s simpler to build a controller that shows behaviour similar to humans, whereas reading the intention and understanding what another human is doing is a less studied branch of artificial intelligence and computer science. But we are getting there.”

And are humans more complex than robots? “When dealing with robots, you can always expect to find a person or group of people who can tell you everything about a robot, but you cannot find a person who can explain our brain.”