Explainable Interactive Machine Learning

18 April 2019

Time: 3:00 pm
Venue: Via Sommarive 5 - Polo Ferrari 1 (Povo, TN) - Room Ofek

Speaker

  • Stefano Teso, Department of Computer Science, KU Leuven

Abstract

Although interactive learning aims to put the user into the loop, existing interaction protocols treat the learner as a black box. For instance, in active learning the machine iteratively presents unlabelled instances to the user in order to receive their labels, but it never discloses its own predictions or beliefs. This prevents the user from building a mental model of the machine, and thus from justifiably granting or revoking trust in it. This talk will cover some recent work on explainable interactive learning, which aims to fix this issue. In this novel setting, the learner explains its own predictions to the user. In turn, the user can both provide a label and correct the explanation. We will show that this protocol not only helps the user to understand and evaluate the learner, but can also greatly improve the model itself, preventing cases where the model produces correct predictions for the wrong reasons.
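To make the protocol concrete, below is a minimal sketch in Python of one interaction round. The function names, the linear "explanation", and the counterexample strategy are illustrative assumptions made for this note, not the speaker's actual method.

# Hypothetical sketch of one round of explanatory interactive learning.
# Names and the correction strategy below are illustrative assumptions,
# not the method presented in the talk.
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_query(model, X_pool):
    # Select the unlabelled instance the model is least certain about
    # (smallest margin from the 0.5 decision boundary, binary case).
    margins = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    return int(np.argmin(margins))

def explain(model, x):
    # Toy explanation for a linear model: per-feature contributions
    # to the decision (weight times feature value).
    return model.coef_[0] * x

def counterexamples(x, y, irrelevant, n=5, seed=0):
    # One simple way to act on the user's correction: generate copies
    # of x in which the features the user marked as irrelevant are
    # randomised while the label is kept, discouraging the model from
    # relying on those features.
    rng = np.random.default_rng(seed)
    X_new = np.tile(x, (n, 1))
    X_new[:, irrelevant] = rng.normal(size=(n, len(irrelevant)))
    return X_new, np.full(n, y)

# One round (X_lab, y_lab, X_pool assumed given):
#   model = LogisticRegression().fit(X_lab, y_lab)
#   i = uncertainty_query(model, X_pool)
#   show the prediction and explain(model, X_pool[i]) to the user;
#   the user returns a label y_i and a list of irrelevant features,
#   which are folded back into the training set via counterexamples().

The key design idea the sketch tries to capture is that a correction to an explanation can be converted into ordinary training data, so a standard learner can consume it without modification.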

About the Speaker

Stefano Teso received his Ph.D. in Computer Science from the University of Trento, Italy, in 2013. He spent one year as a postdoctoral researcher at Fondazione Bruno Kessler, Trento, and one year at the Department of Computer Science of the University of Trento. He is currently a postdoctoral fellow in the machine learning group at KU Leuven, Belgium. His main interests include machine learning for structured and relational data, the combination of learning with constraint satisfaction and optimization, constraint learning, and interactive learning from human advice. He has published in top journals (the Artificial Intelligence journal) and conferences (AAAI, IJCAI). Stefano won a Fondazione Caritro grant in 2014 for learning and reasoning over relational data in the tax and administrative domains.

Contact Person: Andrea Passerini, andrea.passerini [at] unitn.it