I will discuss an idealised long-term vision of the future interactions between mathematicians and computer systems and the short-term steps that might be taken to make progress towards the vision.

We know that formalizing mathematics is useful. Present practice usually follows closely the style of the mathematical assistant; it would be better to follow the style of mathematics. In principle this mathematical style may itself change, but at present the situation is lopsided.

In this talk, I will look back at the time when we were working on formalizing the whole book of Landau in Automath. I will discuss the challenges we faced and how we overcame some of them.

We examine Kopylov's dependent intersection type and its use in defining dependent records which are a basis for class theory. We also examine Nogin's new axioms for quotient types and their use in defining collections.

The talk will briefly mention applications of these ideas to validating distributed system components, and it will illustrate the Common Mathematics Library used by MetaPRL, Nuprl, JProver and other systems, comparing it to Automath's tree of knowledge.

The goal of the Automath system (developed at Eindhoven from 1967 onwards) was to provide a framework for expressing mathematical theories in a form that enables a computer to verify their correctness. The fundamental idea is that whatever is said correctly, is correct. There is no other norm for correctness.

Unless one wants to call the language structure itself a logic, the Automath system contains no logic and no mathematical foundations. All material necessary for mathematics is to be explained in the books written in the language. The rules of the language do not go beyond the ideas of a typed lambda calculus, and it is in terms of such typing that the meaning of notions like "definition", "theorem", "proof" and "axiom" has to be understood. A complete book is structured as a collection of nested blocks, where opening a block can be seen as the introduction of a typed variable. Such variables can represent mathematical objects or mathematical proofs, and the reference systems for those emotionally very different entities turn out to be completely similar. That is how the idea of "proofs as objects" got into the system from the start.

Having and interpreting this block structure is the only thing that could be called prerequisite logic. All the rest of logic, and the foundation of mathematics, is book material. Automath presents a novel way of treating logic and its relation to mathematics. The system can be handled in a very natural way, directly derived from our mathematical habits. It can therefore have an educational spin-off. The art of teaching mathematics to students can learn from the precise way in which we have to talk to a machine in this system without logical or mathematical prerequisites.

The Automath system never claimed to automate the invention of mathematics, not even to automate the construction of proofs of given theorems. The Automath correctness checker is just a careful reader of well-presented finished material.

Agda, with the graphical interface Alfa, is an interactive system for the direct manipulation of proofs and programs. It implements Structured Type Theory, a dependently typed functional language with modern language constructs such as pattern matching, parametrised modules and records.

Mathematical documents are usually well structured. They develop complex networks of concepts which are interrelated in various ways. To understand a certain mathematical theorem or example, the reader usually has to understand the environment in which it is stated: the required definitions and the other theorems and lemmas that are eventually applied to explain it. Therefore, in different situations, different parts of a mathematical document are needed, and these are frequently scattered over a large textbook. The talk will show how Slicing Book Technology can support the reader in such situations. This technology decomposes books into semantic units. The conditions under which these units can be reused are then described by metadata. Automated inference procedures combine this metadata with declarative descriptions of intended documents and with information about the user to determine how the document should be composed for the specific needs of the user. Based on this information, the document is generated on the fly and delivered over the Web. This concept has been realized in the European project Trial-Solution (http://www.trial-solution.de) for several mathematical textbooks. We shall report on the experience gained in this project. Special attention will be paid to the implemented metadata system and to how it is applied to combine content from different books for the benefit of the reader. Based on experience from the German project In2Math (http://www.uni-koblenz.de/ag-ki/PROJECTS/in2math/), an outlook will discuss the possibilities that arise when formalized mathematical objects are available as content-descriptive metadata in Slicing Book Technology. Examples and demos are available from http://www.slicing.de/books/.

We define an extension of predicate logic, called Binding Logic, where variables can be bound in terms and in propositions. We introduce a notion of model for this logic and we prove a completeness theorem for it. This theorem is obtained as a consequence of the completeness theorem of predicate logic, by encoding binding logic back into predicate logic using de Bruijn indices and explicit substitutions. (Joint work with Thérèse Hardin and Claude Kirchner.)
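The de Bruijn encoding used in such a back-translation can be made concrete. The following is a minimal illustrative sketch in Python; the type names and the beta-reduction example are hypothetical, not taken from the paper:

```python
# Illustrative sketch (hypothetical names, not the paper's encoding):
# with de Bruijn indices, a bound variable is a number counting the
# binders between its occurrence and the lambda that binds it.
from dataclasses import dataclass

@dataclass
class Var:
    index: int            # 0 refers to the nearest enclosing binder

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fn: object
    arg: object

def shift(t, d, cutoff=0):
    """Add d to every free variable (index >= cutoff)."""
    if isinstance(t, Var):
        return Var(t.index + d) if t.index >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Replace variable j by s in t, without variable capture."""
    if isinstance(t, Var):
        return s if t.index == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

# Beta-reduce (\x. x x) applied to free variable 5: result is (5 5).
redex = App(Lam(App(Var(0), Var(0))), Var(5))
result = shift(subst(redex.fn.body, 0, shift(redex.arg, 1)), -1)
```

The point of the shift/substitute bookkeeping is exactly what explicit substitutions make into first-class syntax.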

The FOC research project is building a development environment for certified computer algebra. It offers a concrete syntax allowing one to write programs, declare properties and prove them. The environment is organized as a hierarchy of packages, which can be extended by refinement, multiple inheritance and late binding. A formal specification of this hierarchy and of its tools has been carried out with the theorem prover Coq and has shown that uncontrolled use of these tools can lead to inconsistencies. The control is provided by a static analysis of the FOC code, which also serves for compilation to executable code on one side (via OCaml) and to a proof term (checked by Coq) on the other. During the talk, I shall give an overview of FOC and focus on the management of dependencies through the static analysis.

The main goal of the Theorema project (supported by the Austrian Forschungsfoerderungsfonds (FWF), project P10002-PHY) is to deliver an integrated interactive environment which can assist the mathematician in all phases of scientific work: proving, computing and solving in various mathematical domains. The system is implemented on top of Mathematica, and is thus backed by the full algorithmic and computing power of the currently most popular computer algebra system, which is available on all the main computing platforms (Unix, Linux, Windows, and Apple). The current implementation is the result of several man-years of work by many people of the Theorema group at RISC, under the direction of Bruno Buchberger. So far we have built into the system the main features concerning proving and computing, while the solving features are still in the design phase. The main features of the system will be demonstrated live during this presentation.

The system interacts with the user in the language of predicate logic, which is the natural language for expressing mathematical properties and algorithms. A few intuitive commands allow the user to enter mathematical formulae (in natural two-dimensional notation) and to compose them into mathematical theories, and also to use some basic domains (numbers, tuples, sets) which are already provided in Theorema. Moreover, the system provides an implementation of the powerful concept of functor, which allows the build-up of sophisticated domains on top of simpler ones. The mathematician can experiment with the algorithms expressed in this way by directly running them with the Theorema computing engine, and can also study their formal properties (e.g. correctness) using the provers of Theorema. It is a unique feature of Theorema that these two phases of mathematical activity can be performed in the same integrated system and using the same language.

Computation is performed under the full control of the user: the user can, if necessary, trace the reason (definition, formula) for each computing step, and, most importantly, has full control over the knowledge used in the computing process. For instance, in one situation the mathematician may want to give the symbols (*, +, 0, Successor) the axiomatic meaning defined by induction over the natural numbers, while in another situation the same symbols should be interpreted using the full power of the underlying computational engine (positional notation, long arithmetic, etc.).

Proving is done with specific methods for several mathematical domains: propositional logic, general predicate logic, induction over integers and over lists, set theory, boolean combinations of polynomial [in]equalities (using Groebner Bases), combinatorial summation (using Paule--Schorn--Zeilberger), and a novel technique for proving in higher-order logic with equality: PCS (proving--computing--solving), introduced by Buchberger.

Theorema departs from the methods mostly used in automatic provers today, because it uses a natural proving style: the formulae are expressed in their original form (two-dimensional, non-clausal), the inference steps are expressed in natural style and in human language, and -- most importantly -- the proving methods are similar to the ones used by working mathematicians. Therefore, the user can inspect and easily understand the proofs, verify any inference, and interact with the proof by modifying certain assumptions, etc.

The insight that foundational systems like first-order logic or set theory can be used to construct large parts of existing mathematics and formal reasoning is one of the deep insights of mathematical logic. Unfortunately, it has been used in the field of automated theorem proving as an argument to disregard the need for a diverse variety of approaches. While design issues play a major role in the formation of mathematical concepts, the theorem proving community has largely neglected them. In this contribution we argue that this leads not only to problems at the human-computer interaction end, but also causes severe problems at the core of the systems, namely in their representation and reasoning capabilities. (Joint work with Martin Pollet of Saarbruecken, Germany.)

In this talk I will survey the new opportunities for the dissemination of mathematical knowledge opened up by the Internet. It is plausible to assume that the way we publish mathematics will change radically in the next five years, and, more generally, so will the way we do (conceive, develop, and verify) mathematics. Of course, this development is not restricted to mathematics itself, but will also affect other well-conceptualized and highly structured areas like physics.

The trend towards high-quality Internet-accessible mathematics has been initiated by the availability of XML-based representation standards for mathematical formulae (MathML and OpenMath), together with corresponding browsers that allow formulae to be presented in LaTeX quality while retaining the flexibility of HTML.

The next step will inevitably follow: to represent the meaning of formulae, so that they can be transmitted to mathematical software systems like computer algebra systems, automated theorem provers, or proof presentation systems. The possibility of universal exchange of mathematical objects will radically change and de-centralize the way we work in mathematics, engineering and sciences.

In this talk, I want to discuss the infrastructure that is needed to conveniently and efficiently manipulate, visualize, and distribute mathematical knowledge on the basis of the OMDoc format (an extension of the OpenMath standard for the communication of mathematical objects) and the MBase system (a mathematical knowledge base).

The notion of rewrite rule gives rise to several concepts:

- The rewrite relation: extensively studied since the seventies, it is central to the study and operationalisation of equational logic.
- The rewriting logic: introduced by José Meseguer at the beginning of the nineties, it generalises equational logic by forgetting about its symmetry.
- More recently, with Horatiu Cirstea, we have introduced the rewriting calculus, which treats the rewrite arrow as an abstractor and in which the explicit application of a rewrite rule to a term produces a set of results. This calculus generalises the lambda calculus and, through its binding mechanism, provides a powerful tool for modelling and proving.
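The idea that applying a rewrite rule to a term yields a set of results can be illustrated with a toy first-order matcher. This is a hypothetical sketch only: the actual rewriting calculus also handles binders, which are omitted here, and the term encoding is invented for the example.

```python
# Toy sketch of the rewriting-calculus idea that the application
# [lhs -> rhs](term) yields a SET of results: empty on match failure.
# Terms are tuples like ("f", ("a",)); variables are strings "?x".

def match(pattern, term, sub=None):
    """Return a substitution making pattern equal to term, or None."""
    sub = dict(sub or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in sub and sub[pattern] != term:
            return None                       # conflicting binding
        sub[pattern] = term
        return sub
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            sub = match(p, t, sub)
            if sub is None:
                return None
        return sub
    return sub if pattern == term else None

def instantiate(term, sub):
    """Apply a substitution to the right-hand side of a rule."""
    if isinstance(term, str):
        return sub.get(term, term)
    return (term[0],) + tuple(instantiate(t, sub) for t in term[1:])

def apply_rule(lhs, rhs, term):
    """[lhs -> rhs](term): a set of results, empty if lhs fails to match."""
    sub = match(lhs, term)
    return set() if sub is None else {instantiate(rhs, sub)}

# [f(?x) -> g(?x, ?x)] applied to f(a) gives {g(a, a)};
# applied to h(a) it gives the empty set.
print(apply_rule(("f", "?x"), ("g", "?x", "?x"), ("f", ("a",))))
```

In the full calculus the rule arrow is itself a binder, so rules are first-class terms that can be abstracted over and applied, generalising lambda abstraction.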

This presentation will be based on joint works with Horatiu Cirstea and Luigi Liquori.

We present an approach to combining the Coq proof assistant, based on the calculus of constructions, with the Elan deduction system, based on the rewriting calculus. Our objective is to implement in Coq a class of decision procedures using rewriting techniques. The approach relies on a normalisation tactic for associative-commutative theories that generates a proof term in the rewriting calculus; this term is translated into a proof term in the calculus of constructions, which Coq can check to obtain a proof of the normalisation process.

It is well known that restricting induction to existential formulas yields primitive recursive arithmetic. The computational nature of theorems falls out here transparently from the structure of their proofs, without obstructing the theorem prover, human or automated. This talk surveys similarly transparent properties of proofs, in both first and higher order formalisms, which guarantee that the theorems proved are in feasible fragments of mathematics, such as poly-time and poly-space.
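For readers unfamiliar with the primitive recursion scheme behind primitive recursive arithmetic, here is a minimal illustrative sketch (hypothetical code, not part of the talk): a function f is primitive recursive in g and h when f(0, y) = g(y) and f(n+1, y) = h(n, f(n, y), y).

```python
# Illustrative only: the primitive recursion scheme
#   f(0, y)   = g(y)
#   f(n+1, y) = h(n, f(n, y), y)
# realised as a higher-order combinator, then used to define
# addition and multiplication on the natural numbers.
def prim_rec(g, h):
    def f(n, *ys):
        acc = g(*ys)
        for i in range(n):
            acc = h(i, acc, *ys)
        return acc
    return f

add = prim_rec(lambda y: y, lambda i, acc, y: acc + 1)       # add(n, y)
mul = prim_rec(lambda y: 0, lambda i, acc, y: add(acc, y))   # mul(n, y)
```

The computational content of a PRA proof is exactly a nest of such recursions; the feasible fragments mentioned above restrict the scheme further so that only poly-time or poly-space functions can be extracted.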

This talk will be a discussion of the conditions for a theorem prover or proof assistant to be both useful and trusted. The emphasis will be on the relationship between systems based on the calculus of constructions and HOL.

In the sixties, the opinion that mathematics cannot be practically formalized was well established. So the main achievement of de Bruijn, among many others, is the decision that the problem had to be reconsidered and could probably be solved positively. I believe that 1967 will be treated by future historians of mathematics as a turning point.

It is not easy to say precisely what we learnt in the meantime, if we learnt anything at all. I believe that the most important results are:

- we need experiments with much more advanced mathematics than has been attempted so far;
- a system for the practical formalization of mathematics will probably not be a simple system based on a small number of primitive notions;
- integration with computer algebra systems may be necessary, or at the least a feasible system must have greater computational power.

Still, we should be more concerned with the original ideas of de Bruijn, the most important being that eventually we have to be able to formalize new mathematical results as they are published in mathematical journals in this century.

In 1976, Appel and Haken proposed what is believed to be the first correct proof of the so-called "four color theorem"; an enhanced version of this proof was given in 1995 by Robertson, Seymour and Thomas.

What is remarkable is that the first-order formulation of these proofs is too large for human reading and for conventional means of communication (paper, book, hard disk...). However, large parts of these proofs can be understood as computation steps; it is therefore possible to describe these proofs in a language combining mathematical reasoning with a programming language.

Since Martin-Löf's type theories and their various extensions include a programming language, they are natural candidates for formalizing the four-color proof. I will describe the effort undertaken to achieve such a formalization in the Coq system.

This talk presents "aut", a modern Automath checker. It is a straightforward reimplementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago in the programming language C. It understands both the AUT-68 and AUT-QE dialects of Automath. The program was written to restore a damaged version of Jutting's translation of Landau's "Grundlagen".

Some notable features:

- It is fast: 30 times as fast as Mizar, which itself is 30 times as fast as Coq. On a 1 GHz machine it checks the full Jutting formalization (about 750K of Automath source) in 0.5 seconds.
- Its implementation of lambda terms uses neither named variables nor de Bruijn indices (the two common approaches) but a directed graph representation, in which bound variables are represented by pointers to their lambda.

The program has the feature that it can compile an Automath text into one very big "Automath Single Line" style lambda term. It outputs such a term using de Bruijn indices. (These lambda terms cannot be checked by modern systems like Coq or Agda, because the lambda-typed lambda calculi of de Bruijn are "incompatible" with the Pi-typed lambda calculi of modern type theory.)
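Both points can be illustrated with a hypothetical Python sketch (aut itself is written in C): in the graph representation a bound variable holds a pointer to its binder object, and printing with de Bruijn indices amounts to counting the binders passed on the way down.

```python
# Hypothetical sketch (not aut's actual C code) of the directed-graph
# representation: a bound variable holds a pointer to its Lam node,
# so terms internally need neither names nor indices.

class Lam:
    def __init__(self):
        self.body = None          # filled in once the binder exists

class Var:
    def __init__(self, binder):
        self.binder = binder      # direct pointer to the binding Lam

class App:
    def __init__(self, fn, arg):
        self.fn, self.arg = fn, arg

def to_de_bruijn(term, binders=()):
    """Render the graph with (1-based) de Bruijn indices for output."""
    if isinstance(term, Var):
        return str(binders.index(term.binder) + 1)
    if isinstance(term, Lam):
        return "\\" + to_de_bruijn(term.body, (term,) + binders)
    return "(%s %s)" % (to_de_bruijn(term.fn, binders),
                        to_de_bruijn(term.arg, binders))

# Build \x. \y. x y as a graph: each variable points at its binder.
outer, inner = Lam(), Lam()
inner.body = App(Var(outer), Var(inner))
outer.body = inner
db = to_de_bruijn(outer)   # two leading backslashes, then "(2 1)"
```

One design consequence: alpha-equivalence is free, since two graphs are equivalent exactly when they are isomorphic, with no renaming or index shifting needed during substitution.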

The source of "aut" is freely available on the Web through

http://www.cs.kun.nl/~freek/aut/.

The Fundamental Theorem of Algebra states that every non-constant polynomial over the complex numbers has a root. In Nijmegen we have formalised a constructive proof of this theorem in Coq. In this project, we also wanted to set up a library of results (about reals, complex numbers and polynomials) that could be re-used, by us and by others. We have therefore defined an algebraic hierarchy of monoids, groups, rings and so forth that allows us to prove generic results and use them for concrete instantiations.
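The record-extension idea behind such a hierarchy can be sketched in a few lines. The following is an illustrative Lean rendering with hypothetical names; the FTA library itself is written in Coq over a setoid-based hierarchy, so this only conveys the shape of the construction:

```lean
-- Illustrative sketch only: a small hierarchy built by record
-- extension, so anything proved from the Monoid fields applies
-- to every Group and Ring via the inherited parent structure.
structure Monoid (α : Type) where
  op    : α → α → α
  unit  : α
  assoc : ∀ x y z : α, op (op x y) z = op x (op y z)
  unitL : ∀ x : α, op unit x = x

structure Group (α : Type) extends Monoid α where
  inv  : α → α
  invL : ∀ x : α, op (inv x) x = unit

structure Ring (α : Type) extends Group α where
  mul      : α → α → α
  one      : α
  mulAssoc : ∀ x y z : α, mul (mul x y) z = mul x (mul y z)
  distribL : ∀ x y z : α, mul x (op y z) = op (mul x y) (mul x z)
```

A generic lemma stated over `Monoid α` can then be instantiated for any concrete group or ring, which is exactly the reuse the library aims for.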

In the talk I will briefly outline the FTA project. The main part will consist of an outline of the algebraic hierarchy and its use. This part will contain an explanation of the basic features of Coq.

Reference: H. Geuvers, R. Pollack, F. Wiedijk & J. Zwanenburg, "The algebraic hierarchy of the FTA project." In: Calculemus 2001 Proceedings, Siena, Italy, 13-27, 2001. http://www.cs.kun.nl/~freek/pubs/alghier.ps.gz.