
DISI Seminars

2021 First Series
3 February 2021
17 February 2021
Target audience: University community
Attendance: online, registration required

Where: online, live streaming on the DISI YouTube channel
Time: 1:30 pm (CET, Rome time zone)

3 February 2021, 1:30–2:30 pm

  • Explainable Machine Learning for Trustworthy AI 
    by Dr. Fosca Giannotti, ISTI-CNR Pisa
Black-box AI systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases that the algorithms inherit from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. Explainable AI addresses these challenges, which different AI communities have studied for years, leading to different definitions, evaluation protocols, motivations, and results. This lecture provides a reasoned introduction to the work on Explainable AI (XAI) to date and surveys the literature, with a focus on approaches from machine learning and symbolic AI. We motivate the need for XAI in real-world, large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges.
 

17 February 2021, 1:30–2:30 pm

  • The two sides of fairness
    by Dr. Francesca Lagioia, European University Institute (EUI) Florence

The principle of fairness, together with transparency and AI explainability, is considered the guiding landmark of current EU regulatory policy towards AI. Fairness is invoked in policy guidelines as one of the inspiring values that should guide the foundation of lawful, ethical and robust AI. This presentation approaches the debate from two perspectives, since the fairness principle should be understood along both a substantive and a procedural dimension. The substantive dimension implies a commitment to ensure an equal and just distribution of rights and obligations, benefits and costs, and access to information and opportunities, and to ensure that individuals and groups are free from unfair bias, discrimination, stigmatisation, and manipulation. Under this dimension, unfairness can be viewed as a societal phenomenon characterised by a power imbalance. In this regard, the presentation will analyse possible causes of unfairness in algorithmic decision-making, with a particular focus on the COMPAS system. Conversely, the procedural dimension of fairness entails the ability to contest and seek effective redress against abuses of power and against decisions, including those made by AI systems and by the humans operating them. Despite a number of European regulations in force, the legal mechanisms created to prevent unfairness have too often failed to counter such practices effectively. Even though AI has by now been considered largely responsible for the spread of unfair practices towards individuals, a paradigm shift is possible: artificial intelligence technologies may be brought over to the side of citizens and their organisations, with the aim of building an efficient and effective counter-power. In this regard, the Claudette system will be presented as an example of a machine-learning-based system aimed at partially automating the detection of unfairness and unlawfulness in consumer contracts and privacy policies.

Speakers

Fosca Giannotti is a Director of Research in computer science at the Information Science and Technology Institute "A. Faedo" of the National Research Council, Pisa, Italy. She is a pioneering scientist in mobility data mining, social network analysis and privacy-preserving data mining. She leads the Pisa KDD Lab (Knowledge Discovery and Data Mining Laboratory, http://kdd.isti.cnr.it), a joint research initiative of the University of Pisa and ISTI-CNR founded in 1994 as one of the earliest research labs centered on data mining. Her research focuses on social mining from big data: smart cities, human dynamics, social and economic networks, ethics and trust, and the diffusion of innovations. She has coordinated dozens of European projects and industrial collaborations. She is currently the coordinator of SoBigData (http://www.sobigdata.eu), the European research infrastructure on big data analytics and social mining: an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation. She is the PI of the ERC Advanced Grant "XAI – Science and technology for the explanation of AI decision making" and a member of the steering board of the CINI-AIIS lab. On 8 March 2019 she was featured by KDnuggets.com as one of 19 inspiring women in AI, Big Data, Data Science and Machine Learning.

Francesca Lagioia is a Senior Research Fellow at the European University Institute (EUI), Florence, Italy, where she works on the CompuLaw ERC project and the Claudette project. She is an Adjunct Professor of Legal Informatics, AI and Law, and Internet Law and Society at the University of Bologna, Department of Legal Studies. She was a Max Weber Postdoctoral Fellow at the EUI from 1 September 2017 to 31 August 2018. In March 2016 she earned her Ph.D. in Law, Science and Technology from the University of Bologna.
Her research interests include artificial intelligence and law; computer law and internet law, in particular privacy and data protection law, human rights and discrimination, and consumer law; artificial intelligence and democracy; algorithmic fairness, transparency and AI explainability; computable models of legal reasoning and knowledge; legal theory; and law and automation in socio-technical systems, with a specific focus on normative and deliberative agents and the liability issues arising from the use of autonomous systems.

Download 
Poster DISI Seminars First Series (PDF, 205 KB)