By spontaneously generating, observing, and imitating manipulative gestures, infants quickly learn how to interact with their environment. At the same time, their communicative abilities develop, leading to an increasingly complex use of speech. Do these motor abilities develop independently? Or are there fundamentally similar mechanisms in the development of perception and production, for both speech and manipulation? We believe that the latter is the case.
This project represents an ambitious attempt to investigate this parallel development of manipulatory and speech-related gestures from a multi-disciplinary perspective. Experts in Robotics, Neuroscience, and Child Development will collaborate closely to build an artifact (an embodied system) that develops its motor capabilities in both manipulation and speech, ultimately leading to a system that learns to:
i) Perceive and produce simple manipulatory gestures;
ii) Perceive and produce a simple vocabulary of words;
iii) Infer the goal of the gestures, and the meaning/context of the words.
Production and perception will be investigated at the same time, driven by the hypothesis of a tight, bidirectional sensorimotor link between the two processes, mediated by parieto-frontal circuits. The proposed approach will allow us to experimentally manipulate the learning/development process on the basis of the experimental findings and theoretical hypotheses formulated by the Neuroscience/Child Development partners. At the same time, theoretical questions and implementation problems arising while building and developing the artifact will challenge the neuroscience laboratories and stimulate their exploration of living systems.
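To make the bidirectional hypothesis more concrete, the following is a minimal, purely illustrative sketch in Python/NumPy, and not the project's actual architecture: it assumes a toy linear "world" mapping motor commands to sensory outcomes, and learns a forward model (production to predicted perception) and an inverse model (perception to inferred production) from the same paired experience, so that an observed gesture can be mapped back onto a motor code that would reproduce it. All names and dimensions here are hypothetical.

```python
# Illustrative sketch of a bidirectional sensorimotor link (an assumption-laden
# toy, NOT the project's architecture). A forward model predicts the sensory
# outcome of a motor command; an inverse model recovers a command from an
# observed outcome. Both are fit to the same paired experience, so perception
# and production share one sensorimotor map.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "world": an unknown linear mapping from motor commands
# (e.g., articulator or joint settings) to sensory outcomes.
n_motor, n_sensory, n_trials = 4, 6, 500
world = rng.normal(size=(n_sensory, n_motor))

commands = rng.normal(size=(n_trials, n_motor))  # self-generated actions
outcomes = commands @ world.T + 0.01 * rng.normal(size=(n_trials, n_sensory))

# Forward model (production -> predicted perception), least-squares fit.
fwd, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)
# Inverse model (perception -> inferred production), fit on the same pairs.
inv, *_ = np.linalg.lstsq(outcomes, commands, rcond=None)

# Production -> perception: predict the sensory outcome of a new command.
cmd = rng.normal(size=n_motor)
predicted_outcome = cmd @ fwd

# Perception -> production: observe a gesture's outcome and infer the
# command that would reproduce it (a crude "mirroring" of the gesture).
observed = cmd @ world.T
recovered_cmd = observed @ inv

print("original command: ", np.round(cmd, 2))
print("recovered command:", np.round(recovered_cmd, 2))
```

In this toy setting the recovered command closely matches the original, which is the behaviour the bidirectional-link hypothesis attributes to the shared perception/production circuitry; the real artifact would of course face nonlinear, high-dimensional mappings learned incrementally during development.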