Ioannis Chalkiadakis

I am a PhD candidate at the Edinburgh Centre for Robotics (Strategic Futures Lab, Heriot-Watt University), where I work on explainable machine learning with Professor Mike Chantler and Dr. Ioannis Konstas.

I completed my MEng in Electrical and Computer Engineering at the National Technical University of Athens, where I worked with Professor Alexandros Potamianos and his group in the Computer Vision, Speech Communication & Signal Processing Lab.

I am a registered engineer at the Technical Chamber of Greece, and I have previously worked at an innovative speech and emotion recognition company, as well as for the LHCb experiment at CERN.

Email  /  CV  /  GitLab  /  LinkedIn


I'm interested in exploring how complex machine learning models work in order to interpret their successes and failure modes. Building an in-depth understanding of these models is the first step towards improving them and, eventually, trusting them in safety-critical applications.


tRustNN: Building trust in Recurrent Neural Networks through data-driven, human-interpretable visualizations.
MSc dissertation, August 2017
Poster, December 2017

The principal goal of the project was to provide a data-driven, interactive visualization environment for investigating the operation of a Recurrent Neural Network. The framework focused on a text-based, natural-language emotion classification task.


A brief survey of visualization methods for deep learning models from the perspective of Explainable AI.
Survey, February 2017

A literature review of visualization techniques conducted in preparation for the MSc project of the Robotics and Autonomous Systems programme at the Edinburgh Centre for Robotics. The focus was on visualizations understandable by non-experts and on Recurrent Neural Networks, with the goal of identifying a research gap in visualization for Explainable AI.


Convolutional Neural Networks for Object Detection: A visual cortex perspective.
Overview, December 2016

An overview of state-of-the-art object detection CNN models, in which we drew parallels between CNN operation and the human visual cortex. The goal was to provide high-level intuition for how CNNs operate and to examine how knowledge from neuroscience could aid CNN architecture design.


A manifold-regularized, deep neural network acoustic model for automatic speech recognition.
MEng Diploma project, July 2016

The project studied deep neural network architectures and their application to automatic speech recognition, adopting a manifold-learning approach to training the network (Tomar and Rose, 2014), motivated by the established relationship between manifolds and speech data.
