AI applications have become pervasive: from mobile phones and home appliances to stock markets, autonomous cars, robots, and drones. Each application domain comes with a rich set of requirements such as legal policies, safety and security standards, company values, or simply public perception.
As AI takes on a wider range of tasks, we gradually approach the time when safety and security laws, or policies, ultimately akin to Isaac Asimov's "Three Laws of Robotics", will need to be established for all deployed AI systems. A homophone of Asimov's first name, the AISEC project aims to build a sustainable, general-purpose, multi-domain methodology and development environment for policy-to-property secure and explainable-by-construction development of complex AI systems.
This project will employ types, backed by lightweight verification methods such as SMT solvers, to create and deploy a novel framework for documenting, implementing and developing policies for complex deep learning systems. Types will serve as a unifying mechanism to embed security and safety contracts directly into the programs that implement AI. The project will produce an integrated development environment with infrastructure to cater for different domain experts: from lawyers and security experts to verification experts and system engineers designing complex AI systems. It will be built, tested and used in collaboration with industrial partners in two key AI application areas: autonomous vehicles and natural language interfaces (aka chatbots).
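To make the idea of types carrying safety contracts concrete, here is a minimal, hypothetical sketch (not the AISEC framework itself): a value range acts as a refinement-type-style contract attached to a learned component's input and output. The names `Bounded`, `contract` and `steering_confidence` are illustrative inventions; the checks run at runtime here, whereas a framework of the kind described above would aim to discharge them statically, e.g. with an SMT solver.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical: a "refinement" on float values, used as a safety contract.
@dataclass(frozen=True)
class Bounded:
    lo: float
    hi: float

    def check(self, x: float) -> float:
        # Runtime stand-in for a statically verified property.
        if not (self.lo <= x <= self.hi):
            raise ValueError(f"contract violated: {x} not in [{self.lo}, {self.hi}]")
        return x

def contract(pre: Bounded, post: Bounded):
    """Attach a pre- and post-condition to a model function."""
    def wrap(f: Callable[[float], float]) -> Callable[[float], float]:
        def checked(x: float) -> float:
            return post.check(f(pre.check(x)))
        return checked
    return wrap

# Toy stand-in for a learned component in an autonomous-driving pipeline:
# the contract documents that confidence must stay within [0, 1].
@contract(pre=Bounded(0.0, 1.0), post=Bounded(0.0, 1.0))
def steering_confidence(x: float) -> float:
    return 0.5 + 0.4 * x
```

The point of the sketch is the division of labour the abstract describes: the contract (`Bounded(0.0, 1.0)`) is a machine-readable policy artefact that a lawyer or safety engineer could review, while its enforcement is a verification concern handled by the tooling.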
Funded by UKRI EPSRC
This is an ambitious proposal that can make a good contribution to the verification of AI systems.
I want to see this proposal funded and to see what the investigators can achieve against their objectives. The project isn't the final answer in work towards secure AI systems, but it looks like a very important piece of the jigsaw puzzle.
The AISEC character and logo were designed by Anna Komendantskaya as a re-interpretation of Asimov's book covers from the 1960s and 70s.