CLANN agenda:

The joint efforts of researchers in many areas have yielded many insights into how logic and neuroscience relate: Boolean (binary) networks can compute logical connectives; binary threshold networks can simulate Finite Automata; and (universal) Turing machines can be simulated by neural networks with rational weights. At their full potential, neural networks are as powerful as analog computation.
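To make the first of these points concrete, here is a minimal sketch (in Haskell, purely for illustration and not part of the project) of a McCulloch-Pitts style binary threshold unit computing the connectives AND, OR and NOT; the weights and thresholds are the standard textbook choices.

```haskell
-- A binary threshold (McCulloch-Pitts style) unit: weights and a threshold.
data Unit = Unit [Double] Double

-- The unit fires (outputs 1) iff the weighted sum of its 0/1 inputs
-- reaches the threshold.
fire :: Unit -> [Double] -> Double
fire (Unit ws t) xs = if sum (zipWith (*) ws xs) >= t then 1 else 0

-- Standard single-unit encodings of the Boolean connectives.
andUnit, orUnit, notUnit :: Unit
andUnit = Unit [1, 1] 2     -- fires only when both inputs are 1
orUnit  = Unit [1, 1] 1     -- fires when at least one input is 1
notUnit = Unit [-1]   0     -- fires exactly when its single input is 0

main :: IO ()
main = mapM_ print
  [ fire andUnit [1, 1]     -- 1.0
  , fire orUnit  [0, 1]     -- 1.0
  , fire notUnit [1] ]      -- 0.0
```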

The natural question that arises is how neural networks can cope with logical theories and calculi. Existing Neuro-Symbolic systems originated in Cognitive Science and Artificial Intelligence, and usually interpret the semantics of first-order logic or first-order logic programs using Boolean neural networks.
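As a deliberately tiny illustration of this line of work, the Haskell sketch below computes one step of the immediate-consequence operator T_P of a propositional definite program with a two-layer binary threshold network, roughly in the style of Hölldobler and Kalinke's translation; the three-clause program is invented for the example and is not taken from the project.

```haskell
-- One T_P step of a propositional definite program, computed by a
-- two-layer binary threshold network (a sketch over a toy program).

type Atom   = String
type Clause = (Atom, [Atom])        -- head :- body (positive atoms only)

-- Toy program: p :- q, r.   q.   r :- q.
program :: [Clause]
program = [("p", ["q", "r"]), ("q", []), ("r", ["q"])]

atoms :: [Atom]
atoms = ["p", "q", "r"]

-- Binary threshold activation.
step :: Double -> Double
step x = if x >= 0 then 1 else 0

-- Hidden layer: one AND-unit per clause, firing iff all body atoms are 1
-- in the current interpretation (a 0/1 vector over `atoms`).
hidden :: [Double] -> [Double]
hidden interp =
  [ step (sum [ v | (a, v) <- zip atoms interp, a `elem` body ]
          - fromIntegral (length body))
  | (_, body) <- program ]

-- Output layer: one OR-unit per atom, firing iff some clause with that
-- head fired in the hidden layer.  The result is T_P(interp).
tpStep :: [Double] -> [Double]
tpStep interp =
  [ step (sum [ h | ((hd, _), h) <- zip program (hidden interp), hd == a ] - 1)
  | a <- atoms ]

main :: IO ()
main = print (tpStep (tpStep (tpStep [0, 0, 0])))  -- least model: all atoms true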

The current project aims at testing and evaluating the first Neuro-Symbolic networks capable of simulating algorithms that are basic to Computational Logic (as opposed to AI): first-order and higher-order unification, different forms of resolution, and goal-oriented proof search. For a detailed discussion of the motivation and the main difficulties we face, see the ICNC'09 position paper (E. Komendantskaya, "Neurons OR Symbols: Why Does OR Remain Exclusive?") or the STP'09 talk.
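For reference, the sketch below is a standard (Robinson-style) implementation of syntactic first-order unification, the first algorithm on this list. It is ordinary symbolic code, shown only to fix what the neural simulation is expected to reproduce; it is not the project's encoding.

```haskell
import qualified Data.Map as Map

data Term = Var String | Fun String [Term] deriving (Eq, Show)

type Subst = Map.Map String Term

-- Apply a substitution to a term.
apply :: Subst -> Term -> Term
apply s t@(Var x)  = maybe t (apply s) (Map.lookup x s)
apply s (Fun f ts) = Fun f (map (apply s) ts)

-- Occurs check: does variable x occur in t under substitution s?
occurs :: Subst -> String -> Term -> Bool
occurs s x t = case apply s t of
  Var y    -> x == y
  Fun _ ts -> any (occurs s x) ts

-- Most general unifier of two terms, if one exists.
unify :: Subst -> Term -> Term -> Maybe Subst
unify s t1 t2 = case (apply s t1, apply s t2) of
  (Var x, Var y) | x == y            -> Just s
  (Var x, u) | not (occurs s x u)    -> Just (Map.insert x u s)
  (u, Var x) | not (occurs s x u)    -> Just (Map.insert x u s)
  (Fun f as, Fun g bs)
    | f == g && length as == length bs -> unifyAll s as bs
  _                                  -> Nothing
  where
    unifyAll acc []      []      = Just acc
    unifyAll acc (a:as') (b:bs') = unify acc a b >>= \acc' -> unifyAll acc' as' bs'
    unifyAll _   _       _       = Nothing

-- Example: unify f(X, g(Y)) with f(a, g(X)).
main :: IO ()
main = print (unify Map.empty
                (Fun "f" [Var "X", Fun "g" [Var "Y"]])
                (Fun "f" [Fun "a" [], Fun "g" [Var "X"]]))
```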

We consider neural networks as an abstract model of computation and do not seek to establish any particular relation with processes that occur in the brain. Moreover, in contrast to cognitive approaches to neuro-symbolic networks, our primary concern is to evaluate neuro-symbolic networks from the point of view of their applicability and use in computational logic.

The CLANN project aims at creating a Neural Network interpreter for first-order and higher-order logic programs. The ultimate goal is to establish results showing whether such an interpreter can be useful for computational logic and whether it can lead to optimisations of existing methods in automated reasoning.
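To indicate the kind of behaviour such an interpreter has to reproduce, here is a propositional sketch of goal-oriented (SLD-style) proof search over the same toy program as above; variables, unification and the occurs check are deliberately omitted to keep it minimal.

```haskell
-- Propositional backward chaining: can every goal be derived from prog?

type Atom   = String
type Clause = (Atom, [Atom])

-- Toy program: p :- q, r.   q.   r :- q.
prog :: [Clause]
prog = [("p", ["q", "r"]), ("q", []), ("r", ["q"])]

-- Resolve the first goal against each clause head, then continue
-- with the clause body prepended to the remaining goals.
solve :: [Atom] -> Bool
solve []     = True
solve (g:gs) = or [ solve (body ++ gs) | (h, body) <- prog, h == g ]

main :: IO ()
main = print (solve ["p"])   -- True: p follows via q and r
```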

Apart from achieving the goal above, the project aims at bringing together researchers interested in computational logic and/or its semantics given by different kinds of networks, such as Kahn networks, Thomas' networks, cortex networks, kernel methods, etc. To this end, we will try to create a functional neural network simulator in Coq and investigate its properties. We plan to look into Kahn's networks (recently formalised in Coq by Christine Paulin-Mohring), possibly in comparison to the neural network interpreter we build.
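Purely as an illustration of the intended flavour (in Haskell rather than Coq, with made-up example weights), a functional simulator can treat a feed-forward network as a composition of layer functions, and a recurrent network as a transformer of lazy streams, which is where the comparison with Kahn-style stream networks becomes natural.

```haskell
type Vector = [Double]
type Layer  = ([[Double]], Vector)       -- weight matrix and bias vector

-- Binary threshold activation, as in the networks discussed above.
step :: Double -> Double
step x = if x >= 0 then 1 else 0

-- One layer: thresholded affine map.
runLayer :: Layer -> Vector -> Vector
runLayer (w, b) xs =
  [ step (sum (zipWith (*) row xs) + bias) | (row, bias) <- zip w b ]

-- A feed-forward network is the composition of its layers.
runNet :: [Layer] -> Vector -> Vector
runNet layers input = foldl (flip runLayer) input layers

-- A recurrent network as a stream transformer: the previous output is
-- fed back as state while the input stream is consumed lazily.
runRec :: [Layer] -> Vector -> [Vector] -> [Vector]
runRec layers state (x:xs) =
  let out = runNet layers (state ++ x)
  in  out : runRec layers out xs
runRec _ _ [] = []

main :: IO ()
main =
  -- Example weights chosen so the two-layer net computes XOR.
  let net = [ ([[1, 1], [-1, -1]], [-1, 1])   -- hidden: OR and NAND units
            , ([[1, 1]], [-2]) ]              -- output: AND of the two
  in  print (map (runNet net) [[0,0], [0,1], [1,0], [1,1]])  -- XOR truth table
```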