Joint Symposium on Computational Models of Brain, Language and Reasoning

Heriot-Watt University, Thursday 26th November 1998

Organised by the Department of Computing & Electrical Engineering, Heriot-Watt University, and the Division of Informatics, University of Edinburgh

ULTRA group: Useful Logics, Types, Rewriting, and Applications
UKII: UK Institute of Informatics, UK
The Engineering and Physical Sciences Research Council (EPSRC)

Gödel's incompleteness result clearly shows the limits of our ability to formalise and automate language and thought. Nevertheless, the effort to approach the boundary of the formalisable remains one of the most fascinating areas of research, and motivates some of the most important research on computational models of the brain, language and reasoning.

The Symposium addresses progress in the formalisation of brain and thought from two distinct but complementary aspects: computational models of the brain and its memory, and formal models of language and reasoning.

Talks will be given by the following distinguished speakers:

Professor N.G. de Bruijn
Professor Emeritus
Eindhoven University of Technology,
Department of Mathematics and Computing Science
PO Box 513, 5600 MB Eindhoven, The Netherlands
Professor Mark Steedman
Institute for Communicating and Collaborative Systems
Division of Informatics
The University of Edinburgh
Professor David Willshaw
Institute for Adaptive and Neural Computation
Division of Informatics
The University of Edinburgh

Programme

10.30  *** Coffee ***
10.50  N.G. de Bruijn: A mathematical model for biological memory and consciousness
11.50  David Willshaw: Models of Distributed Associative Memory
12.30  *** Buffet Lunch ***
13.30  Mark Steedman: Grounding Grammar in Conceptual Representation
14.30  N.G. de Bruijn: Formalizing the Mathematical Vernacular

Here is the list of participants at the symposium.

Here are some photos from the symposium.

Location

The Symposium will be held in Room 127 of the Department of Computing and Electrical Engineering on the main campus of Heriot-Watt University.

Abstracts


N.G. de Bruijn: A mathematical model for biological memory and consciousness

An important feature of the model is the distribution of associative memory tasks over a large number of local agents (like neurons or local neural networks) without the use of an addressing system. The idea is that at any moment t only a relatively small subset A(t) of the set of all agents is awake. These agents store the associations that are around at that moment. At a later moment t+p the intersection of the sets A(t) and A(t+p) can give answers to questions about what was recorded at time t. If the definition of the sets A(t) is generated by suitably parametrized random processes it can be guaranteed that such intersections are almost always non-empty. In this way the optimal capacity of the system can be seen as roughly proportional to the square root of the size of the system. Thinking of something like 10 billion agents we find that the capacity of the system can easily be about 10000 times bigger than the capacity of a single agent, and the system can be much more dependable than any single agent could ever be.
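
The intersection argument can be made concrete with a small simulation. The following is a minimal sketch, not part of the abstract: the population size, the awake-set size k (about twice the square root of the population) and the trial count are assumed, illustrative parameters.

    import random

    def awake_set(n_agents, k, rng):
        # A(t): a random subset of k agents that are awake at time t.
        return set(rng.sample(range(n_agents), k))

    def overlap_rate(n_agents=1_000_000, k=2_000, trials=1_000, seed=0):
        # Draw two independently chosen awake sets, modelling A(t) and
        # A(t+p) for large p, and estimate how often they intersect.
        # With k about 2*sqrt(n_agents), the expected overlap k*k/n_agents
        # is 4, so an empty intersection is rare.  (The population here is
        # far smaller than the 10 billion agents of the abstract, purely
        # to keep the simulation fast.)
        rng = random.Random(seed)
        hits = sum(
            1 for _ in range(trials)
            if awake_set(n_agents, k, rng) & awake_set(n_agents, k, rng)
        )
        return hits / trials

    print(overlap_rate())  # close to 1 - exp(-4), i.e. about 0.98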

A vital part of what we call "consciousness" is related to the fact that the central processor of the brain interacts very strongly with what was memorized during the last second. This can be understood by realizing that A(t) changes only slowly: if p is of the order of a second, then the intersection of A(t) and A(t+p) can still be very big.

For the local agents the suggested model is that of the "thinking soup", where associative information is recorded in a kind of protein. But late in life, and/or later in evolution, their role might be taken over by neural networks that can operate very much faster.

Once we have a model for memory and consciousness, we can try to study several other features in relation to that model. In particular, we have to consider the difference between consciousness and subconsciousness, to say what "thinking" is, what it means to have "emotions" and "instincts", and what it means to have "free will". And we might try to relate all we know about human memory and about mental defects to particular features of the model.


David Willshaw: Models of Distributed Associative Memory

I work in the research area of Computational Neuroscience, which is concerned with developing and analysing mathematical and computer-simulation models of specific parts of the nervous system, in relation to both their development and their function. I will review two different types of network associative memory model. The first, based on my thesis work and more recently developed with Jay Buckingham and Bruce Graham, focusses on the properties of the Associative Net, a simple distributed memory model which can operate with high efficiency and may have relevance to the use of the hippocampus as an associative memory. The second is given as an illustration of how biological constraints can impinge on the design of a model: I will describe David Marr's model of the functioning of the cerebellar cortex.
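
For readers unfamiliar with this class of model, here is a minimal sketch of a Willshaw-style associative net, with clipped Hebbian storage and threshold recall; the pattern sizes, load and exact threshold rule are illustrative assumptions, not taken from the talk.

    import numpy as np

    def store(weights, key, value):
        # Clipped Hebbian storage: switch on every synapse between an
        # active input (key) unit and an active output (value) unit.
        weights |= np.outer(value, key).astype(bool)

    def recall(weights, key):
        # Threshold recall: an output unit fires when all active inputs
        # reach it through switched-on synapses.
        sums = weights.astype(int) @ key
        return (sums >= key.sum()).astype(int)

    rng = np.random.default_rng(0)
    n, k, pairs = 256, 8, 20  # illustrative sizes, not from the talk

    def pattern():
        v = np.zeros(n, dtype=int)
        v[rng.choice(n, size=k, replace=False)] = 1
        return v

    W = np.zeros((n, n), dtype=bool)
    keys = [pattern() for _ in range(pairs)]
    vals = [pattern() for _ in range(pairs)]
    for x, y in zip(keys, vals):
        store(W, x, y)

    # At this light load, every stored pair is recalled exactly.
    print(all((recall(W, x) == y).all() for x, y in zip(keys, vals)))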


Mark Steedman: Grounding Grammar in Conceptual Representation

Semantic or conceptual representations of the situation of utterance are the most plausible source for the universal prelinguistic knowledge that children bring to bear on the problem of language acquisition, in order to derive syntactic categories including verb-complement patterns for the words of accompanying utterances in their language. This process is complicated by the fact that the context will typically support several propositions besides the relevant one, that the words of the language may be polysemous, that the input may be noisy, and so on. But the fact that the available propositions are limited to the relatively small number that are grounded in the child's limited but subtle understanding of the situation and its attentional state suggests that the problem of lexical language learning is solvable along lines laid out by Grimshaw 1994 and certain related computational models, give or take some details of the particular representations involved.

According to "lexicalized" theories of grammar like Categorial Grammar, LFG, HPSG, and TAG, as well as certain versions of transformational generative grammar, the lexicon is the repository of most of the information in the syntax of any given language. It follows that lexical learning of this kind can in principle account for a large part of syntactic learning, so long as we assume that the semantic interpretations for the propositions embodied in these constructions are available to the child, with some reliability, via the corresponding conceptual representations.

This view leaves a number of questions unanswered, including that of how the unbounded constructions (relativization, coordinate structure, and certain related intonational phenomena) are acquired. The apparent existence of negative constraints on the operations standardly assumed in most theories is particularly problematic.

Under the view of these constructions taken in Combinatory Categorial Grammar (CCG), very little besides lexical learning is required in order for the child to fully master these constructions as well. Lexical categories are projected onto larger sentences by combinatory operations from which the constraints automatically follow. I shall argue that the particular operations that are implicated are required for certain kinds of non-linguistic composite action. There is a certain amount of evidence that they become available for that purpose just before language acquisition begins, and are available to some animals that lack any specifically linguistic faculty. Some developmental evidence from the literature and its implications for neural and/or computational mechanisms for language acquisition and language processing are considered.
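
The projection of lexical categories by combinatory operations can be illustrated with a toy fragment. The following is a minimal sketch only; the category encoding and the example lexical entries are assumptions of this page, not the formalism presented in the talk.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Cat:
        # An atomic category has a name; a complex category X/Y or X\Y
        # has a result, an argument and a slash direction.
        name: str = ""
        res: "Cat" = None
        arg: "Cat" = None
        slash: str = ""

        def __repr__(self):
            return self.name or f"({self.res}{self.slash}{self.arg})"

    NP, S = Cat("NP"), Cat("S")

    def fapp(x, y):
        # Forward application:  X/Y  Y  =>  X
        return x.res if x.slash == "/" and x.arg == y else None

    def fcomp(x, y):
        # Forward composition:  X/Y  Y/Z  =>  X/Z
        if x.slash == "/" and y.slash == "/" and x.arg == y.res:
            return Cat(res=x.res, arg=y.arg, slash="/")
        return None

    # Hypothetical lexical entries: "might" := (S\NP)/(S\NP), "eat" := (S\NP)/NP.
    vp = Cat(res=S, arg=NP, slash="\\")
    might = Cat(res=vp, arg=vp, slash="/")
    eat = Cat(res=vp, arg=NP, slash="/")
    print(fcomp(might, eat))            # ((S\NP)/NP): "might eat" acts like a transitive verb
    print(fapp(fcomp(might, eat), NP))  # (S\NP)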


N.G. de Bruijn: Formalizing the Mathematical Vernacular

Mathematical vernacular is the mixture of words and formulas that mathematicians speak and write. Traditionally, it has many imperfections, varying from subject to subject and from person to person, but it is not too difficult to repair these, and to turn the informal mathematical vernacular into a standardized formal one, to be called MV.

The talk will present the possibilities for a formal treatment. This is feasible since the intricacies of natural languages can be avoided: mathematicians define the meaning of their words and sentences locally, without having to compare them to existing habits in the outside world. On the one hand, the grammar of MV is much simpler than that of natural languages: the rules of MV are expressed in terms of only three grammatical categories, 'sentence', 'name' and 'substantive'. On the other hand, it is more complex, since the correctness (even the ordinary syntactic correctness) of mathematical statements depends on the context and on everything said before.
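
The three categories can be made concrete with a toy fragment. This is a minimal sketch under assumed constructors; MV itself is far richer, and none of the names below come from the talk.

    from dataclasses import dataclass

    @dataclass
    class Name:          # a name denotes an object, e.g. "7", "the empty set"
        text: str

    @dataclass
    class Substantive:   # a substantive classifies objects, e.g. "prime number"
        text: str

    @dataclass
    class Sentence:      # one assumed sentence form: "<name> is a <substantive>"
        subject: Name
        predicate: Substantive

        def render(self) -> str:
            return f"{self.subject.text} is a {self.predicate.text}"

    print(Sentence(Name("7"), Substantive("prime number")).render())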

Since MV has a flavour of type theory and natural deduction, it can be used in many ways. It seems to invite one to erect a mathematical edifice in a type-theoretical setting, but it can equally well be used for a standard approach to set theory combined with classical logic.

MV might be useful as an input language for proof assistant systems, or as a lingua franca for communication between entirely different systems. Another area that might profit from MV is the teaching of mathematics, certainly in the sense that mathematics teachers gain insight into the context structure of their vernacular and into how it is connected to natural deduction.


For more information, please contact Fairouz Kamareddine.

Last revised: 2 December 1998