
The AISEC project

AI Secure and Explainable by Construction


Motivation

AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots, and drones. Each application domain comes with a rich set of requirements such as legal policies, safety and security standards, company values, or simply public perception.

As AI takes over a wider range of tasks, we gradually approach the time when security laws, or policies, akin to Isaac Asimov's "Three Laws of Robotics", will need to be established for all working AI systems. A near-homophone of Asimov's first name, the AISEC project aims to build a sustainable, general-purpose, and multi-domain methodology and development environment for the policy-to-property development of complex AI systems that are secure and explainable by construction.

Methodology

This project will employ types, supported by lightweight verification methods such as SMT solvers, to create and deploy a novel framework for documenting, implementing, and developing policies for complex deep learning systems. Types will serve as a unifying mechanism to embed security and safety contracts directly into the programs that implement AI. The project will produce an integrated development environment with infrastructure that caters for different domain experts: from lawyers and security experts to verification experts and system engineers designing complex AI systems. It will be built, tested, and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces (also known as chatbots).
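To give a flavour of what a safety contract on an AI component might look like, here is a minimal, hypothetical sketch in Python. It is not the AISEC framework: the function names are illustrative, and interval arithmetic stands in for the SMT-backed reasoning the project envisages. The idea is that a property ("for all inputs in this box, the output stays in these bounds") is stated as a contract and discharged automatically.

```python
# Illustrative sketch (not the AISEC framework): a safety "contract" on a
# tiny linear + ReLU model, checked by interval arithmetic -- a simple
# stand-in for lightweight SMT-backed verification.

def interval_affine(w, b, boxes):
    """Propagate per-input intervals (lo, hi) through y = w . x + b."""
    lo = hi = b
    for wi, (xlo, xhi) in zip(w, boxes):
        # A positive weight maps the input's lower bound to the output's
        # lower bound; a negative weight swaps the roles.
        lo += wi * (xlo if wi >= 0 else xhi)
        hi += wi * (xhi if wi >= 0 else xlo)
    return lo, hi

def relu_interval(lo, hi):
    """Apply ReLU to an interval."""
    return max(0.0, lo), max(0.0, hi)

def satisfies_contract(w, b, boxes, out_lo, out_hi):
    """Does ReLU(w . x + b) stay within [out_lo, out_hi] for all x in boxes?"""
    lo, hi = relu_interval(*interval_affine(w, b, boxes))
    return out_lo <= lo and hi <= out_hi

# Contract: for inputs in [0,1] x [0,1], the output lies in [0, 3].
print(satisfies_contract([1.0, 2.0], 0.0, [(0.0, 1.0), (0.0, 1.0)], 0.0, 3.0))
```

In a type-driven setting such as the one the project proposes, a contract like this would be attached to the program's types and checked symbolically by an SMT solver rather than by hand-rolled interval propagation; the sketch merely shows the shape of a policy-to-property check.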


Participating universities

Heriot-Watt University


Heriot-Watt's Lab for AI and Verification leads the project, alongside the NLP Lab.

University of Edinburgh


The University of Edinburgh's Security and Privacy research group is co-hosting the project.

University of Strathclyde


Strathclyde's Mathematically Structured Programming group is co-hosting the project.

Academic partners

University of St Andrews


Dr Edwin Brady is a project partner

The Hebrew University of Jerusalem


Dr Guy Katz is a project partner

Boston University


Dr Marco Gaboardi is a project partner

Leiden University


Dr Henning Basold is a project partner

Secondments partners

IT University of Copenhagen


Natalia Slusarz -- internship

Hebrew University of Jerusalem


Omri Isac -- two internships

Schlumberger Limited


Ben Coke -- internship

Industrial partners

Five AI

Amazon

Forsvarets forskningsinstitutt

HORIBA MIRA

Hugging Face

Imandra

NEC Laboratories

Symphonic

Apple

Microsoft Research

Schlumberger Limited

Google

Funding

Funded by UKRI EPSRC

Engineering and Physical Sciences Research Council

This is an ambitious proposal that can make a good contribution to the verification of AI systems.

EPSRC Peer Review

I want to see this proposal funded and to see what the investigators can achieve against their objectives. The project isn't the final answer in work towards secure AI systems, but it looks like a very important piece of the jigsaw puzzle.

EPSRC Peer Review

Design

The AISEC character and logo were designed by Anna Komendantskaya as a reinterpretation of the covers of Asimov's books from the 1960s and 1970s.
