Evening lectures
Evening Schedule
Various events take place in the evenings:
- The EVENING LECTURES, given by renowned figures in the field: Mark Johnson, Mark Steedman, Jane Hillston and Peter Gärdenfors.
- A student poster session.
- The FoLLI general meeting.
Details and dates of these events can be found below.
First Week: 8th - 12th of August, 2005
Date: Tuesday, August 9th
Time: 21.00
Room: Lecture Theatre 4
Speaker: Mark Johnson
Title: Statistics and the scientific study of language
Abstract:
Since the "statistical revolution" in the mid-1990s,
statistical methods have dominated computational
linguistics. But what, if anything, do they have to do
with the scientific study of language? In this talk I'll
discuss the relationship between statistics, logic and
language, and what all this has to do with the scientific
study of language. Along the way I'll introduce examples
from parsing and learning.
Date: Thursday, August 11th
Time: 20.30
Title: FoLLI General Meeting.
Agenda: can be found here.
Time: 21.00
Room: Lecture Theatre 4
Speaker: Mark Steedman
Title: Grammar Acquisition by Child and Machine
Abstract:
The talk draws attention to some similarities between the
problems of inducing wide coverage grammars and
statistical models from treebanks on the one hand, and
child language learning on the other. Thus:
- The only way that anyone has so far been able to induce reasonably sound, wide-coverage, adult-sized grammars for realistic corpora, by machine, is via supervised learning based on human-annotated data, such as that in the Penn Wall Street Journal corpus.
- The only way that anyone has been able to write programs that parse accurately using grammars of that size and ambiguity is by using statistical models based on the same labeled data, such as head-dependency models.
- Such models work because they reflect a mixture of semantic and knowledge-based information.
Likewise:
- The only plausible source for the positive evidence that the child brings to bear on the induction of grammar from strings is access to meaning representations.
- The only plausible source for negative evidence is statistical properties of the corpus the child is exposed to.
- There is clear evidence that human sentence processing relies on a model of semantic, inferential, and pragmatic coherence for ambiguity resolution.
The talk argues that existing computational models of the
probabilistic acquisition of grammars and models from
meaning representations offer a simpler and less
stipulative account of child language acquisition than
standard psycholinguistic accounts. In particular, they
suggest that standard notions of "trigger" and "parameter
setting" (and their attendant homunculi) are redundant.
It also argues that models of child language acquisition
can inform the much harder task of semi-supervised
induction of wide-coverage parsers from large volume
unlabeled text.
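The head-dependency models mentioned above can be illustrated with a rough sketch (not a real parser): attachment ambiguities are resolved by relative frequencies of (head, dependent) pairs estimated from hand-labelled data. The mini "treebank" and the words in it are invented for illustration.

```python
from collections import Counter

# Invented toy data: each pair records that the second word was
# observed as a dependent of the first (head) word in labelled trees.
treebank = [
    ("eat", "pizza"), ("eat", "fork"),
    ("eat", "pizza"), ("pizza", "anchovies"),
    ("eat", "pizza"), ("eat", "fork"),
]

counts = Counter(treebank)
head_totals = Counter(h for h, _ in treebank)

def attach_prob(head, dep):
    """Relative-frequency estimate of P(dep attaches to head)."""
    return counts[(head, dep)] / head_totals[head]

# Resolve 'eat pizza with anchovies': does 'anchovies' modify
# 'eat' or 'pizza'? Choose the head with the higher estimate.
best = max(["eat", "pizza"], key=lambda h: attach_prob(h, "anchovies"))
print(best)   # the counts above favour attachment to 'pizza'
```

Real head-dependency models condition on richer context and smooth the estimates, but the underlying idea is this kind of counting over labelled data.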
Second Week: 15th - 19th of August, 2005
Date: Monday, August 15th
Time: 21.00-22.00
Title: Student poster session
Date: Tuesday, August 16th
Time: 21.00
Room: Lecture Theatre 4
Speaker: Jane Hillston
Title: Getting performance out of process algebra
Abstract:
Process algebras are system description techniques
supported by apparatus for formal reasoning. When
extended with data about durations and probabilities they
can be used to derive quantitative as well as qualitative
properties of systems. PEPA is a stochastic process
algebra in which quantification has been added in such a
way as to allow quantitative reasoning to be carried out
in terms of an underlying Markov process.
In this talk I will discuss the design of the PEPA
language, the interplay between the process algebra and
the Markov process, and how properties of both can be
exploited when carrying out quantitative analysis.
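The correspondence the abstract describes, between a process term with rates and an underlying Markov process, can be sketched on the smallest possible example (this is an illustration, not the PEPA toolset; the rates are invented). A term like Server = (request, lam).(serve, mu).Server denotes a 2-state continuous-time Markov chain, whose steady-state distribution yields quantitative measures such as throughput.

```python
def steady_state_2state(lam, mu):
    """Steady state of a 2-state CTMC: idle --lam--> busy --mu--> idle.

    Solving the balance equation pi_idle * lam = pi_busy * mu together
    with pi_idle + pi_busy = 1 gives a closed form.
    """
    pi_idle = mu / (lam + mu)
    pi_busy = lam / (lam + mu)
    return pi_idle, pi_busy

lam, mu = 1.5, 2.0          # hypothetical activity rates
pi_idle, pi_busy = steady_state_2state(lam, mu)
throughput = pi_busy * mu   # completed 'serve' activities per time unit
print(pi_idle, pi_busy, throughput)
```

For larger models the chain is not solved in closed form: one builds the generator matrix from the process term and solves the global balance equations numerically, which is essentially what PEPA tools automate.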
Date: Thursday, August 18th
Time: 21.00
Room: Lecture Theatre 4
Speaker: Peter Gärdenfors
Title: How to make the Semantic Web more Semantic
Abstract:
The Semantic Web is not semantic. It is good for
syllogistic reasoning, but there is much more to semantics
than syllogisms. I argue that the current Semantic Web is
too dependent on symbolic representations of information
structures, which limits its representational capacity. As
a remedy, I propose conceptual spaces as a tool
for expressing more of the semantics. Conceptual spaces
are built up from quality dimensions that have geometric
or topological structures. With the aid of the dimensions,
similarities between objects can easily be represented and
it is argued that similarity is a central aspect of
semantic content. By sorting the dimensions into domains,
I define properties and concepts and show how prototype
effects of concepts can be treated with the aid of
conceptual spaces. I present an outline of how one can
reconstruct most of the taxonomies and other meta-data
that are explicitly coded in the current Semantic Web and
argue that inference engines on the symbolic level will
become largely superfluous. As an example of the semantic
power of conceptual spaces, I show how concept
combinations can be analysed in a much richer and more
accurate way than in the classical logical approach.
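The geometric machinery the abstract describes can be made concrete in a toy sketch (the dimensions, prototype points and constant below are invented for illustration): concepts are regions around prototype points in a space of quality dimensions, similarity decays with distance, and an object is categorised by its nearest prototype, which induces the Voronoi tessellation of the space.

```python
import math

# Invented toy conceptual space for colour terms, with quality
# dimensions (hue, saturation, brightness), each scaled to [0, 1].
prototypes = {
    "red":    (0.0, 0.9, 0.5),
    "pink":   (0.0, 0.4, 0.8),
    "yellow": (0.17, 0.9, 0.6),
}

def distance(x, y):
    """Euclidean distance between two points in the space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def similarity(x, y, c=1.0):
    # Similarity as an exponentially decaying function of distance,
    # a common modelling choice in the conceptual-spaces literature.
    return math.exp(-c * distance(x, y))

def categorise(point):
    """Assign the concept whose prototype is nearest (Voronoi cell)."""
    return min(prototypes, key=lambda name: distance(point, prototypes[name]))

print(categorise((0.02, 0.5, 0.75)))   # closer to the 'pink' prototype
```

Prototype effects fall out of the geometry: points near a prototype are better examples of the concept than points near a cell boundary, something a purely symbolic taxonomy cannot express directly.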
© ESSLLI 2005 Organising Committee
2005-08-10