Building intelligent systems (that can explain)
Ilaria Tiddi
Vrije Universiteit Amsterdam
Wednesday 6 March 2019
10:00 - 11:00
Room 3.03
Earl Mountbatten Building
Abstract
Explanations have been a subject of study in a variety of fields (e.g. philosophy, psychology and the social sciences), and are experiencing a new wave of popularity in Artificial Intelligence thanks to the success of machine learning (see DARPA's eXplainable AI programme). Yet recent events have shown that the effectiveness of intelligent systems is still limited by their inability to explain their decisions to human users, which undermines their understandability and trustworthiness. In this talk, I will give an overview of my research, which aims at developing systems able to automatically generate explanations using external background knowledge. In particular, I will show how such systems can build on existing research on explanations, combined with AI techniques and the large-scale knowledge sources available today.
Bio
I am a Research Associate in the Knowledge Representation and Reasoning group of the Vrije Universiteit Amsterdam (NL). My research focuses on creating transparent AI systems that generate explanations through a combination of machine learning, semantic technologies, and knowledge from large, heterogeneous knowledge graphs. As part of my research activities, I am a member of the CEUR-WS Editorial Board and the Knowledge Capture conference (K-CAP) Steering Committee, and I have organised workshop series (Recoding Black Mirror, Application of Semantic Web Technologies in Robotics, Linked Data 4 Knowledge Discovery) and Summer Schools (the 2015 and 2016 Semantic Web Summer School).
Twitter: @IlaTiddi
Website: https://kmitd.github.io/ilaria
Host: Alasdair Gray