AISEC Annual Project Workshop and Industrial Board

9th-10th of September, 2021 - Edinburgh, Scotland

Scope

The AISEC industrial board aims to bring together project partners, researchers and collaborators to share progress and ideas. The event will be split over three days: the 8th, 9th and 10th of September. Wednesday will be a technical seminar on verification and programming languages; Thursday will feature talks and discussions by the AISEC team and other partnered researchers on their work over the last year; and Friday will focus on broad panel discussions with all industrial partners on the challenges of building secure AI systems. The full schedule is given below. For those who wish to attend both workshop days, accommodation can be arranged on site at Heriot-Watt.

Venue

The workshop will be run in hybrid mode, with both virtual and face-to-face attendance. The face-to-face meetings will be held at the Marriott Edinburgh hotel, 111 Glasgow Rd, Edinburgh EH12 8NF. Please indicate on the registration form whether you will attend virtually or in person.

Registration

Registration is now open and free. Register here.

Local Organisers

Alasdair Hill - ath7@hw.ac.uk
Ekaterina Komendantskaya - e.komendantskaya@hw.ac.uk
Matthew Daggitt - m.daggitt@hw.ac.uk
Marco Casadio - mc248@hw.ac.uk

Schedule

Day 1 (9th of September) - Research Talks

10:00 - 10:15

Everybody

Introductions

10:15 - 10:40

Luca Arnaboldi

The University of Edinburgh

Security - Internal talk: Structured Reasoning About Adversaries in Autonomous Vehicles

There are currently no established laws for testing and certifying a self-driving vehicle, still less for adversarial settings. In this talk I will discuss the difficulties in creating a certification framework for autonomous vehicles and the downsides of current approaches. The talk will conclude with our proposal for an adversarial testbed and a structured reasoning approach, intended to guide decision making, ground legal arguments and inform future guidelines.

10:40 - 11:05

Lorenzo Cavallaro

University College London

Security - External talk: Intriguing Properties of Adversarial ML Attacks in the Problem Space

Recent research efforts on adversarial ML have investigated problem-space attacks, focusing on the generation of real evasive objects in domains where, unlike images, there is no clear inverse mapping to the feature space (e.g., software). However, the design, comparison, and theoretical and real-world implications of problem-space attacks remain underexplored. In this talk, I will present two major contributions from our recent work [1]. First, I will present our novel reformulation of adversarial ML evasion attacks in the problem space (also known as realizable attacks). This requires considering and reasoning about additional constraints that feature-space attacks ignore, which sheds light on the relationship between feature-space and problem-space attacks. Second, building on our reformulation, I will present a novel problem-space attack for generating end-to-end evasive Android malware, showing that it is feasible to generate evasive malware at scale while evading state-of-the-art defenses.

[1] Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, and Lorenzo Cavallaro. “Intriguing Properties of Adversarial ML Attacks in the Problem Space”. IEEE Symp. Security & Privacy (Oakland), 2020.

11:05 - 11:30

David Aspinall

The University of Edinburgh

Security - General discussion

11:30 - 11:45

Coffee break

11:45 - 12:10

Matthew Daggitt

Heriot-Watt University

Verification - Internal talk: Embedding constraints in neural networks: from training to termination

In the last five years, the pressing need to ensure the safety of AI systems has led to the development of a plethora of tools targeting the problem of how to constrain the relationship between a neural network's inputs and outputs. These include: training the network to obey the relationship, verifying that the relationship holds, exploring counterexamples, and verifying systems that make use of the network. In this talk I will describe how our proposed programming language, Vehicle, aims to bridge the gaps between these different tools and provide a single high-level, human-readable interface for embedding and monitoring the properties of a network throughout its lifecycle.
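
For illustration, here is a minimal Python sketch of what runtime monitoring of such an input-output constraint could look like. The property, names and ranges below are invented for illustration; this is not Vehicle syntax or the project's actual tooling.

    # Hypothetical runtime monitor for a neural network input-output constraint.
    # Illustrative property: if every input lies in the normalised sensor range
    # [-1, 1], then every output must lie in [0, 1].

    def satisfies_property(x: list[float], y: list[float]) -> bool:
        """Check the (assumed) constraint relating inputs x to outputs y."""
        inputs_valid = all(-1.0 <= xi <= 1.0 for xi in x)
        outputs_valid = all(0.0 <= yi <= 1.0 for yi in y)
        # The property is an implication: valid inputs must yield valid outputs.
        return (not inputs_valid) or outputs_valid

    def monitored_forward(network, x: list[float]) -> list[float]:
        """Run the network, raising an error on any property violation."""
        y = network(x)
        if not satisfies_property(x, y):
            raise RuntimeError(f"Property violated: input {x!r} gave output {y!r}")
        return y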

12:10 - 12:35

Vaishak Belle

The University of Edinburgh

Verification - External talk: Logic & Learning: From Aristotle to Neural Networks

The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence (AI). The deduction camp concerns itself with questions about the expressiveness of formal languages for capturing knowledge about the world, together with proof systems for reasoning from such knowledge bases. The learning camp attempts to generalize from examples about partial descriptions of the world. In AI, historically, these camps have loosely divided the development of the field, but advances in cross-over areas such as statistical relational learning, neuro-symbolic systems, and high-level control have illustrated that the dichotomy is not very constructive, and perhaps even ill-formed. In this talk, we survey work that provides further evidence for the connections between logic and learning. Our narrative is structured in terms of three strands: logic versus learning, machine learning for logic, and logic for machine learning, but naturally, there is considerable overlap. We place an emphasis on the following "sore" point: there is a common misconception that logic is for discrete properties, whereas probability theory and, more generally, machine learning are for continuous properties. We report on results that challenge this view on the limitations of logic, and expose the role that logic can play for learning in infinite domains.

12:35 - 13:00

Ekaterina Komendantskaya

Heriot-Watt University

Verification - General discussion

13:00 - 14:00

Lunch & socialising

14:00 - 14:25

Wen Kokke

University of Strathclyde

Programming languages - Internal talk: A vehicle for solver integration

We will discuss our plans for Vehicle and the difficulties around proof assistant and solver integration.

14:25 - 14:50

Mohammad Abdulaziz

Technical University of Munich

Programming languages - External talk: Formalising the semantics of AI planning languages

Planning is a fundamental area of artificial intelligence with applications in many areas, e.g. logistics and autonomous systems. Although there has been steady progress in the performance of planning systems, as witnessed by consecutive planning competitions, a hurdle to their large-scale adoption in safety-critical applications is that much less work has been invested in their trustworthiness. In this presentation I will describe how to boost the trustworthiness of planning systems using formal software verification techniques, in particular proof assistants. More specifically, I will talk about: (1) specifying the semantics of fragments of the Planning Domain Definition Language (PDDL) in the interactive theorem prover Isabelle/HOL, and (2) deriving a formally verified plan checker from this specification of the semantics.
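
To make the notion of a plan checker concrete, here is a minimal, unverified Python sketch of checking a STRIPS-style plan against a specification. The state/action encoding and the example are assumptions for illustration, not the talk's Isabelle/HOL formalisation.

    # Minimal STRIPS-style plan checker: a state is a set of ground facts;
    # an action has preconditions, an add list and a delete list.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Action:
        name: str
        pre: frozenset      # facts that must hold for the action to apply
        add: frozenset      # facts made true by the action
        delete: frozenset   # facts made false by the action

    def apply_action(state: frozenset, act: Action) -> frozenset:
        if not act.pre <= state:
            raise ValueError(f"Precondition of {act.name} fails in {sorted(state)}")
        return (state - act.delete) | act.add

    def check_plan(init: frozenset, goal: frozenset, plan: list) -> bool:
        """Return True iff executing the plan from init reaches the goal."""
        state = init
        for act in plan:
            state = apply_action(state, act)
        return goal <= state

    # Example: a single drive action moves the truck from A to B.
    drive = Action("drive(A,B)", frozenset({"at(truck,A)"}),
                   frozenset({"at(truck,B)"}), frozenset({"at(truck,A)"}))
    assert check_plan(frozenset({"at(truck,A)"}),
                      frozenset({"at(truck,B)"}), [drive])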

14:50 - 15:15

Bob Atkey

University of Strathclyde

Programming languages - General discussion

15:15 - 15:30

Coffee break

15:30 - 15:55

Scott McLachlan

The University of Edinburgh

Law & AI - Internal talk: (De)Constructing the Road Rules for Autonomous System Development and Verification

Driving is an intuitive task that requires skill, constant alertness and vigilance for unexpected events. The driving task also requires long concentration spans, focusing on the entire task for prolonged periods, and sophisticated negotiation skills with other road users, including wild animals. These requirements are particularly important when approaching intersections, overtaking, giving way, merging and turning, and while adhering to the vast body of road rules. Modern motor vehicles now include an array of smart assistive and autonomous driving systems capable of subsuming some, most, or in limited cases, all of the driving task. The UK Department for Transport's response to the Safe Use of Automated Lane Keeping System consultation proposes that these systems be tested for compliance with relevant traffic rules. Building these smart automotive systems requires software developers with highly technical software engineering skills, and now a lawyer's in-depth knowledge of traffic legislation as well. These skills are required to ensure the systems are able to safely perform their tasks while observing the law. This talk presents an approach for deconstructing the complicated legalese of traffic law and representing its requirements and flow. The approach (de)constructs road rules in legal terminology and specifies them in structured English logic that is expressed as Boolean logic for automation and as Lawmaps for visualisation. We demonstrate an example in which these tools lead to the construction and validation of a Bayesian network model. We strongly believe these tools to be approachable by programmers and the general public, capable of use in developing the artificial intelligence that underpins motor vehicle smart systems, and useful in validation to ensure these systems are considerate of the law when making decisions.
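
As a toy illustration of expressing a road rule as Boolean logic for automation, consider the following Python sketch. The rule and its encoding are simplified inventions for illustration, not the talk's Lawmap formalism or any actual statute.

    # Toy Boolean encoding of a simplified, invented give-way rule:
    # "At an uncontrolled intersection, give way to vehicles approaching
    # from the right, unless a traffic officer directs otherwise."

    def must_give_way(at_uncontrolled_intersection: bool,
                      vehicle_approaching_from_right: bool,
                      officer_directs_to_proceed: bool) -> bool:
        return (at_uncontrolled_intersection
                and vehicle_approaching_from_right
                and not officer_directs_to_proceed)

    # Predicates like this can become conditions or nodes when assembling
    # a larger decision model, such as a Bayesian network.
    assert must_give_way(True, True, False)
    assert not must_give_way(True, True, True)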

15:55 - 16:20

Jonathan Protzenko

Microsoft Research

Law & AI - External talk: Programming Languages for the Law

In recent years, there has been a flurry of interest in expressing fragments of the law using programming languages. Consider tax codes, retirement pensions, family benefits, or payroll computations: these are all algorithms in disguise! However, faithfully describing these legal concepts in code is a delicate exercise: programmers and lawyers do not think alike, and few people are both legal and programming experts.

In this talk, I will present Catala, a new programming language that is specifically designed to bring programmers and lawyers together. Catala allows the programmer to naturally transcribe the law in a formal language; because Catala follows the thought process of legal minds, lawyers can audit and understand the code.
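
To give a flavour of "law as algorithm", here is a toy progressive income tax computation in Python. The brackets and rates are invented for illustration; this is neither Catala code nor any real tax schedule.

    # Toy progressive income tax with invented brackets:
    # 0% up to 12,000; 20% from 12,000 to 50,000; 40% above 50,000.

    BRACKETS = [(12_000, 0.0), (50_000, 0.20), (float("inf"), 0.40)]

    def income_tax(income: float) -> float:
        tax, lower = 0.0, 0.0
        for upper, rate in BRACKETS:
            taxable = max(0.0, min(income, upper) - lower)
            tax += taxable * rate
            lower = upper
        return tax

    assert income_tax(10_000) == 0.0
    assert round(income_tax(20_000)) == 1_600    # 20% of 8,000
    assert round(income_tax(60_000)) == 11_600   # 7,600 + 4,000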

16:20 - 16:45

Burkhard Schafer

The University of Edinburgh

Law & AI - General discussion

17:00 -

Everybody

Drinks and Dinner

Day 2 (10th of September) - Industrial Panel and Discussions

10:30 - 11:00

Everybody

[f2f & online]: Coffee and general set-up

11:00 - 12:00

Everybody

[f2f & online]: Brief introductions by industrial partners

  • Garikayi Madzudzo (HORIBA MIRA)
  • Gudmund Grov (Norwegian Defence Research Establishment (FFI))
  • Alan Walker (Syselek)
  • Peter Davies (Thales)

12:00 - 12:20

Remi Desmartin

Heriot-Watt University

[f2f & online]: MSc student talk - Neural network verification with Imandra

Experience of industrial collaboration at postgraduate level.

12:30 - 13:30

[f2f]: Lunch & socialising

13:30 - 15:00

Everybody

[f2f & online]: Panel discussion of problems/topics related to AI verification, security and law

Discussion panel 1, 13:30 - 14:15: What new tools are needed to make verification of AI accessible?
Moderator: Bob Atkey.
Panelists: David Aspinall (The University of Edinburgh), Chih-Hong Cheng (Fraunhofer IKS), Peter Davies (Thales), Ram Ramamoorthy (The University of Edinburgh), Garikayi Madzudzo (HORIBA MIRA)

Discussion panel 2, 14:15 - 15:00: Do new actors need new laws?
Moderator: Burkhard Schafer.
Panelists: Vaishak Belle (The University of Edinburgh), Alan Walker (Syselek), Jonathan Protzenko (Microsoft Research), Peter Davies (Thales)

15:00 -

Everybody

[f2f & online]: Coffee and socialising