Title: Learning weighted automata
Abstract: Verifying the robustness of neural networks is currently a hot topic in Machine Learning. As is common in active areas of research, there has been a proliferation of definitions of robustness. However, as of yet, the consequences of the differences between these definitions do not seem to be widely discussed in the literature.
In this talk I will compare four different definitions of robustness, looking at their motivation, mathematical properties, assumptions about the underlying distribution of the data, and interpretability. In particular, I will highlight that, if not used with care, the popular definition of classification robustness may result in a contradictory specification! This work is the result of a collaboration between various members of LAIV.