Where: online on Zoom
Time: 1:30 pm
- Xavier Alameda-Pineda, Inria Grenoble Rhône-Alpes, Université Grenoble-Alpes
Robots with autonomous communication capabilities interacting with multiple persons at the same time in the wild are both a societal mirage and a scientific Ithaca.
Indeed, despite the presence of various companion robots on the market, their social skills are derived from machine learning techniques functioning mostly under laboratory conditions.
Moreover, current robotic platforms operate in confined environments where, on one side, qualified personnel receive detailed instructions on how to interact with the robot as part of their technical training, and, on the other side, external sensors and actuators may be available to ease the interaction between the robot and the environment.
Overcoming these two constraints would allow a robotic platform to freely interact with multiple humans in a wide variety of everyday situations, e.g. as an office assistant, a health-care helper, a janitor or a waiter/waitress; that is, to be socially intelligent.
In the H2020 SPRING and ANR ML3RI projects, we investigate new machine learning methods for audio, visual and audio-visual perception, as well as for multi-modal robot control, that would allow the robot to better perceive its environment and to take actions that are socially acceptable.
In this seminar, the preliminary results of both projects will be presented and discussed, showcasing the limitations of current methodologies and outlining promising future research directions.
Register in advance for this meeting. After registering, you will receive a confirmation email containing information about joining the meeting.
About the speaker
Xavier Alameda-Pineda is a (tenured) Research Scientist at Inria, in the Perception Group. He obtained M.Sc. degrees (or equivalent) in Mathematics in 2008 and in Telecommunications in 2009 from BarcelonaTech, and in Computer Science in 2010 from Université Grenoble-Alpes (UGA). He then worked towards his Ph.D. in Mathematics and Computer Science, which he obtained in 2013 from UGA. After a two-year post-doc at the Multimodal Human Understanding Group at the University of Trento, he was appointed to his current position. Xavier is an active member of SIGMM, a senior member of IEEE and a member of ELLIS. He co-chairs the “Audio-visual machine perception and interaction for companion robots” chair of the Multidisciplinary Institute of Artificial Intelligence. Xavier is the Coordinator of the H2020 Project SPRING: Socially Pertinent Robots in Gerontological Healthcare. His research interests lie in combining machine learning, computer vision and audio processing for scene and behavior analysis and human-robot interaction.
Contact: iecs.school [at] unitn.it
PI Stories. A series of seminars aimed at giving PhD students the opportunity to learn the success stories of some of the most talented researchers in the world. Each speaker presents a research project he/she led as principal investigator, covering the scientific scope of the project and its most important results. The speakers will also share their experience of turning a research idea into a successful project that won a competitive grant.