The ongoing project INSIDE is taking research results into real-world interaction scenarios. The initiative grew out of a collaboration between the Computer Science Department at CMU, several Portuguese research institutions and companies, and one hospital.

“The hospital had already set up ongoing sessions in which the children, accompanied by their parents and a therapist, were presented with activities intended to stimulate their ability to interact, to stay motivated and to speak with autonomy. These tasks, like playing with puzzles or finding objects around the room, help the children learn to solve problems in real-world situations. What the hospital proposed was to do that with the robot as an intermediary.”

The project coordinator at ISR, Professor Pedro Lima, explained that it was from contact with health professionals at Hospital Garcia de Orta, in Almada, that the idea came up to create a project in which a robot is used to interact with children on the autism spectrum. After identifying children from across the spectrum, the researchers performed some initial experiments with the robot, which, at this early stage, involved a lot of human control and operation. Using robots with children with developmental disorders had already been done, and there may be a reason for that: the particular nature of interacting with a robot. For these kids, maintaining a dialogue with other people is not an easy task, so a machine intermediary could help them become more at ease.

With INSIDE, very early in the interaction process, the doctors and therapists noticed unusual behaviours in the children. “They don’t respond to the gaze of other people, or even look them in the eyes, but they seemed to look at the eyes of the robot and respond to its gaze more easily. Of course, further studies need to be done to find out the clinical implications of those behaviours, but it’s already a response that’s out of the ordinary.”

When asked about having to work with professionals from a very different field, Professor Pedro Lima answers that the work with the clinicians was positive from the beginning. The team from the hospital even hired a doctor to do research specifically related to this theme. “In some areas, we really do see the potential of connecting with robotics and achieving goals through that partnership. Especially in particular situations, for instance when a person alone at home needs help because they have difficulty moving, or needs other healthcare assistance, robots, taking that as a very broad area from home care to hospital presence, have very important potential.”

At this stage of the INSIDE project, the medical team will repeat the experiments, but without the robot intermediary, to check whether there are differences in the results. From the research point of view, what was most innovative and publication-worthy was the degree of autonomy achieved by the robot. “Most of the robots used in this sort of initiative either rely on a great degree of teleoperation and human intervention or are too static. With INSIDE, even beyond our initial expectations, the system was interacting with the children in an almost completely autonomous way.”

Over time the team developed two types of interfaces, one for the actions taken by the robot and another for its perceptions. The interaction between the children and the robot was set up in a room already used for this type of social/game stimulation, where the parents and the therapist join the child and the robot. In that room, cameras film the humans (with consent) and help track their current location, which was a determining factor for the autonomy of the robot. The team of researchers is in another room, accompanied by a second therapist, out of sight of the parents and the child, where they can follow the interface and remotely watch what the robot is doing. The interface will show, for instance, that the robot detected an opening door, which could be an indicator to greet a child that is coming in. “The robot asks the child for help, for example to build a puzzle. One difficult issue for robotics in general, and in this situation in particular, is speech recognition. Here, we had a therapist with a headset and microphone saying certain keywords that the robot recognizes more easily, so that it can react.”
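To give a concrete sense of how an event-driven setup like this can work, the sketch below maps perception events (a door opening, a keyword picked up from the therapist's microphone) to robot actions. It is a minimal illustration in Python under assumed names; the event types, keywords and actions are hypothetical and not the actual INSIDE interfaces.

```python
# Minimal sketch of a perception-to-action loop; event names, keywords and
# actions are illustrative assumptions, not the project's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PerceptionEvent:
    kind: str       # e.g. "door_opened", "keyword_heard"
    payload: dict   # extra data, e.g. {"keyword": "puzzle"}

class BehaviourController:
    """Maps perception events to robot actions."""

    def __init__(self) -> None:
        self.handlers: dict[str, Callable[[dict], str]] = {
            "door_opened": lambda payload: "greet_child",
            "keyword_heard": self.on_keyword,
        }

    def on_keyword(self, payload: dict) -> str:
        # Keywords spoken by the therapist into the headset are easier to
        # recognise than free-form child speech.
        keyword = payload.get("keyword", "")
        return {"puzzle": "ask_for_puzzle_help",
                "hello": "greet_child"}.get(keyword, "idle")

    def decide(self, event: PerceptionEvent) -> str:
        handler = self.handlers.get(event.kind, lambda payload: "idle")
        return handler(event.payload)

controller = BehaviourController()
print(controller.decide(PerceptionEvent("door_opened", {})))                      # greet_child
print(controller.decide(PerceptionEvent("keyword_heard", {"keyword": "puzzle"})))  # ask_for_puzzle_help
```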

In terms of the action component of the interface, the decisions of the robot are shown, in real time, on a screen. When the robot takes a decision within a certain range of actions, it is still possible to override that decision based on indications from the therapist. While at the beginning the therapists had to give many indications and the researchers were very focused on controlling the robot’s actions and teleoperating some movements, the process became more and more streamlined. “At a middle stage, when the robot started doing more things, the team was obviously very satisfied to see it behaving autonomously, but even better was watching how surprised the therapists were that there was barely any human intervention during the interaction with the children.”
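The override idea can be pictured as a simple check before each action is executed: the robot proposes an action, the interface displays it, and a pending command from the therapist replaces it. The sketch below is a hypothetical illustration of that pattern, assuming a queue for operator input; it is not the project's code.

```python
# Hypothetical sketch of an action override: names and the queue-based input
# are assumptions made for illustration only.
import queue

def run_step(proposed_action: str, override_queue: "queue.Queue[str]") -> str:
    """Show the robot's proposed action and apply an override if one is pending."""
    print(f"[interface] robot proposes: {proposed_action}")
    try:
        override = override_queue.get_nowait()   # non-blocking check for operator input
        print(f"[interface] overridden by therapist: {override}")
        return override
    except queue.Empty:
        return proposed_action

overrides: "queue.Queue[str]" = queue.Queue()
print(run_step("ask_for_puzzle_help", overrides))   # no override pending, action stands
overrides.put("wait_for_child")
print(run_step("greet_child", overrides))           # therapist's override is applied
```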

These interfaces are very informative and easy to use, which allowed the researchers to see how much they actually had to intervene and, through a process of constant correction and improvement, gradually reduce that intervention until the robot could act in a highly autonomous way. “From the robotics point of view, the area of intersection with health has a lot to offer, with particular points of interest and development for research. In competitions like RoboCup, where there is an at-home league, we almost always see a component of helping an elderly person or another vulnerable member of society. So the domestic component is more and more interrelated with the field of health.”

A project like this also involves a lot of theoretical research in themes like machine learning and decision-making under uncertainty, which are very current, state-of-the-art issues in robotics. “All these methodologies look at the state of the world and decide, based on that state, what the action should be. This is done on the basis of certain principles, to guarantee certain properties. The problem is that the state of the world is very complex and described by a huge number of variables, and that makes it a very difficult issue to solve. So we are working on methods that classify elements but can also reduce the amount of information by identifying and disregarding what is redundant. Reducing the dimension on which the decision is taken is very relevant.”
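As a rough illustration of that idea, the sketch below compresses a high-dimensional "world state" with a generic PCA reduction before a toy policy takes a decision. This is only an example of the principle on made-up data, not the project's actual method; the policy and variable names are assumptions.

```python
# Toy illustration: reduce a redundant, high-dimensional state before deciding.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Simulated "world state" log: 200 observations of 50 variables, many redundant.
base = rng.normal(size=(200, 5))
redundant = base @ rng.normal(size=(5, 45))   # 45 columns derived from the first 5
states = np.hstack([base, redundant])         # 50-dimensional raw state

pca = PCA(n_components=0.99)                  # keep components explaining 99% of variance
reduced = pca.fit_transform(states)
print(states.shape, "->", reduced.shape)      # far fewer variables to decide on

def choose_action(reduced_state: np.ndarray) -> str:
    # Placeholder policy: the decision is taken on the compact representation.
    return "engage_child" if reduced_state[0] > 0 else "wait"

print(choose_action(reduced[0]))
```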

Involving the clinicians in the process was very important to reduce the apprehension of the humans involved in the interaction and to make clear that the robot is a tool, not a human substitute. “It is legitimate for someone who isn’t involved in the process to argue that human contact will always be necessary. This is an ongoing discussion, and one that shouldn’t be abandoned because it is difficult; it is very relevant and could be improved by informed dialogue, so that we adapt intelligently and with enough time.”