LRMI at #DCMI16 Metadata Summit, Copenhagen


I was in Copenhagen last week, at the Dublin Core Metadata Initiative 2016 conference, where I ran a workshop entitled “Building on Schema.org to describe learning resources” (as one of my colleagues pointed out, thinking of the snappy title never quite happened). Here’s a quick overview of it.

There were three broad parts to the workshop: presentations on the background organisations and technology; presentations on how LRMI is being used; and a workshop where attendees got to think about what could be next for LRMI.

Fundamentals of Schema.org and LRMI

An introduction to Schema.org (Richard Wallis)

A brief history of Schema.org, which is fast becoming a de facto vocabulary for structured web data, shared so that search engines and others can understand, interpret and load it into their knowledge graphs. While addressing the issue of simple structured markup across the web, Schema.org's extension capabilities also facilitate the development of sector-specific enhancements that will be widely understood.

An Introduction to LRMI (Phil Barker)

A short introduction to the Learning Resource Metadata Initiative, originally a project which developed a common metadata framework for describing learning resources on the web. LRMI metadata terms have been added to Schema.org. The task group currently works to support those terms as a part of Schema.org and as a DCMI community specification.


Use of LRMI

Overview of LRMI in the wild  (Phil Barker)

The results of a series of case studies looking at initial implementations were summarised, showing that LRMI metadata is used in various ways, not all of which are visible to the outside world. Estimates of how many organisations use LRMI properties in publicly available websites and pages were given, along with some examples.

The Learning Registry and LRMI (Steve Midgley)

The Learning Registry is a new approach to capturing, connecting and sharing data about learning resources available online, with the goal of making it easier for educators and students to access the rich content available in our ever-expanding digital universe. This presentation explains what the Learning Registry is, how it is used, and how it uses LRMI / Schema.org metadata, including what has been learned about structuring, validating and sharing LRMI resources: expressing alignments to learning standards, and validation with JSON-LD and JSON Schema.

[On the day we failed to connect to Steve via Skype, but here are his slides that we missed]

What next for LRMI?

I presented an overview of nine ideas that LRMI could prioritise for future work. These ideas were the basis for a balloon debate, which I will summarise in more detail in my next post.

 

 

Schema course extension update


This progress update on the work to extend schema.org to support the discovery of any type of educational course is cross-posted from the Schema Course Extension W3C Community Group. If you are interested in this work please head over there.

What aspects of a course can we now describe?
As a result of work so far addressing the use cases that we outlined, we now have answers to many questions about how to describe courses using schema.org:

As with anything in schema.org, many of the answers proposed are not the final word on all the detail required in every case, but they form a solid basis that I think will be adequate in many instances.

What new properties are we proposing?
In short, remarkably few. Many of the aspects of a course can be described in the same way as for other creative works or events. However, we found that we needed to create two new types, Course and CourseInstance, to distinguish between a course that could be offered at various times and a specific offering or section of that course. We also found the need for three new properties for Course: courseCode, coursePrerequisites and hasCourseInstance; and two new properties for CourseInstance: courseMode and instructor.

There are others under discussion, but I highlight these as proposed because they are being put forward for inclusion in the next release of the schema.org core vocabulary.
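As an illustration, a course description using the proposed types and properties might look like the following in JSON-LD. All the names and values here are invented for the example; only the types and properties come from the proposal:

```json
{
  "@context": "http://schema.org",
  "@type": "Course",
  "name": "Introduction to Metadata",
  "courseCode": "META101",
  "coursePrerequisites": "Basic familiarity with XML and RDF",
  "hasCourseInstance": {
    "@type": "CourseInstance",
    "courseMode": "online",
    "startDate": "2017-01-16",
    "instructor": {
      "@type": "Person",
      "name": "An Example Lecturer"
    }
  }
}
```

Note how the repeatable, abstract description sits on the Course, while details that vary from one offering to another (mode, dates, instructor) sit on the CourseInstance.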

More good news: the Google search gallery documentation for developers already includes information on how to provide the most basic information about courses, and shows how Google will display information about courses in a search gallery. This is where we are going 🙂

Sustainability and Open Education


 

Last week I was on a panel at Edinburgh University’s Repository Fringe event discussing sustainability and OER. As part of this I was asked to talk for ten minutes on some aspect of the subject. I don’t think I said anything of startling originality, but I must start posting to this blog again, so here are the notes I spoke from. The idea that I wanted to get over is that projects should be careful about what services they try to set up: the services should be suitable and sustainable, and in fact it might be best if they did the minimum that was necessary (which might mean not setting up a repository).

Between 2009 and 2012 Jisc and the HE Academy ran the UK Open Education Resources programme (UKOER), spending approximately £15M of HEFCE funding in three phases. There were 65 projects: some with personal, institutional or discipline scope releasing resources openly; some with a remit of promoting dissemination or discoverability; and some related activities and services providing technical, legal and policy support. And there was Jorum: there was a mandate that OERs released through the programme should be deposited in the Jorum repository. This was a time when open education was booming: as well as UKOER, funding from foundations in the US, notably Hewlett and Gates, was quite well established, and EU funding was beginning. UKOER also, of course, built on previous Jisc programmes such as X4L, ReProduce, and the Repositories & Preservation programme.

In many ways UKOER was a great success: a great number of resources were created or released, but it also established open education as a thing that people in UK HE talked about. It showed how to remove some of the blockers to the reuse and sharing of content for teaching and learning in HE (especially through the use of standard CC licences with global scope rather than the vague, restrictive and expensive custom variations on “available to other UK HEIs” of previous programmes). Helped by UKOER, many UK HEIs were well placed to explore the possibilities of MOOCs. In general it showed the potential to change how HEIs engage with the wider world and to help make the best use of online learning. But it’s not just about opening exciting but vague possibilities: being a means to avoid problems such as restrictive licensing, and being in a position to explore new possibilities, means avoiding unnecessary costs in the future and helps to make OER financially attractive (and that’s important to sustainability). Evidence of this success: even though UKOER was largely based on HEFCE funding, there are direct connections from UKOER to the University of Edinburgh’s Open Ed initiative and (less directly) to their engagement with MOOCs.

But I am here to talk sustainability. You probably know that Jorum, the repository into which UKOER projects were required to deposit their OERs, is closing. Also, many of the discipline-based and discovery projects were based at HE Academy subject centres, which are now gone. At the recent OER16 here, Pat Lockley suggested that OER were no longer being created. He did this based on what he sees coming in to the Solvonauts aggregator that he develops and runs. Martin Poulter showed the graph: there is a fairly dramatic drop in the number of new deposits. That suggests something is not being sustained.

But what?

Let’s distinguish between sustainability and persistence: sustainability suggests to me a manageable on-going effort. The content as released may be persistent, that is, it may still be available as released (though without some sort of sustained effort at editing, updating and preservation it may not be much use). What else needs sustained effort? I would suggest: 1, the release of new content; 2, interest and community; 3, the services around the content (that includes repositories). I would say that UKOER did create a community interested in OER which is still pretty active. It could be larger, and less inward-looking at times, but for an academic community it is doing quite well. New content is being released. But the services created by UKOER (and other OER initiatives) are dying. That, I think, is why Pat Lockley isn’t seeing new resources being published.

What is the lesson we should learn? Don’t create services to manage and disseminate your OERs that require “project” level funding. Create the right services; don’t assume that what works for research outputs will work for educational resources; make sure that there is that “edit” button (or at least a make-your-own-editable-copy button). Make the best use of everything that is already available: use wikimedia services, but also flickr, wordpress, youtube, itunes, vimeo; and you may well want to create your own service to act as a “junction” between all the different places you’re putting your OERs, linking with them via their APIs for deposit and discovery. This is the basic idea behind POSSE: Publish (on your) Own Site, Syndicate Elsewhere.

Schema course extension progress update


I am chair of the Schema Course Extension W3C Community Group, which aims to develop an extension for schema.org concerning the discovery of any type of educational course. This progress update is cross-posted from there.

If the forming-storming-norming-performing model of group development still has any currency, then I am pretty sure that February was the “storming” phase. There was a lot of discussion, much of it around the modelling of the basic entities for describing courses and how they relate to core types in schema (the “Modelling Course” and “CourseOffering & Course, a new dawn?” threads). I am pleased to say that the discussion did its job, and we achieved some sort of consensus (norming) around modelling courses in two parts:

Course, a subtype of CreativeWork: A description of an educational course which may be offered as distinct instances at different times and places, or through different media or modes of study. An educational course is a sequence of one or more educational events and/or creative works which aims to build knowledge, competence or ability of learners.

CourseInstance, a subtype of Event: An instance of a Course offered at a specific time and place or through specific media or mode of study or to a specific section of students.

hasCourseInstance, a property of Course with expected range CourseInstance: An offering of the course at a specific time and place or through specific media or mode of study or to a specific section of students.

(see Modelling Course and CourseInstance on the group wiki)

This modelling, especially the subtyping from existing schema.org types allows us to meet many of the requirements arising from the use cases quite simply. For example, the cost of a course instance can be provided using the offers property of schema.org/Event.
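For example (with illustrative values only), the cost of an offering might be expressed by attaching an Offer to the CourseInstance, exactly as one would for any other Event:

```json
{
  "@context": "http://schema.org",
  "@type": "CourseInstance",
  "courseMode": "part-time",
  "offers": {
    "@type": "Offer",
    "price": "250.00",
    "priceCurrency": "GBP"
  }
}
```

This is the pay-off of subtyping from Event: no new property for cost is needed at all.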

The wiki is working to a reasonable extent as a place to record the outcomes of the discussion. Working from the outline use cases page you can see which requirements have pages, and those pages that exist point to the relevant discussion threads in the mail list and, where we have got this far, describe the current solution.  The wiki is also the place to find examples for testing whether the proposed solution can be used to mark up real course information.

As well as the wiki, we have the proposal on github, which can be used to build working test instances on appspot showing the proposed changes to the schema.org site.

The next phase of the work should see us performing, working through the requirements from the use cases and showing how they can be met. I think we should focus first on those that look easy to do with existing properties of schema.org/Event and schema.org/CreativeWork.

Why is there no LearningResource type in schema.org?


A couple of times in the last month or so, the question has come up of why there isn’t a LearningResource type in schema.org as a subtype of CreativeWork. In case it comes up again, here’s my answer.

We took a deliberate decision way back at the start of LRMI not to define a LearningResource as a subtype of CreativeWork. Essentially the problem comes when you try to define what is a Learning Resource. Everyone who has tried so far has come up with something like “a resource which is used in learning, education or training”. That doesn’t rule out anything. Whether a magazine like Germany’s Spiegel is a learning resource depends on whether you are a German speaker or an American studying German. In presentations I have compared this problem to that of defining “what is a seat”. You can get seats in all shapes and forms with many different characteristics: chairs, sofas, saddles, stools; so in the end you just have to say a seat is something you sit on. Rather than rehash the problem of deciding what is and isn’t a learning resource, we took the approach of providing a way by which people can describe the educational properties of any Creative Work.

We recognised that there are some “types” of resource that are specific to learning. You can sensibly talk about textbooks and instructional videos as being qualitatively different to novels and the movies people watch in the cinema, without denying that novels and movies are useful in education. That’s why we have the learningResourceType property. You can think of this as describing the educational genre of the resource.

In practice there are two choices for searching for learning resources. You can search those sites that are curated collections of what someone has decided are educational resources. Or you can search for the educational properties you want. So in our attempt at creating a Google Custom Search Engine we looked for the AlignmentObject. Looking for the presence of a learningResourceType would be another way. The educationalUse property should likewise be a good indicator.
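To make this concrete, here is a sketch of how the educational characteristics of any CreativeWork can be described with the LRMI properties mentioned above. The resource name, subject and values are invented for illustration:

```json
{
  "@context": "http://schema.org",
  "@type": "VideoObject",
  "name": "German Verb Conjugation Explained",
  "learningResourceType": "instructional video",
  "educationalUse": "self study",
  "educationalAlignment": {
    "@type": "AlignmentObject",
    "alignmentType": "educationalSubject",
    "targetName": "German as a foreign language"
  }
}
```

The point is that the work remains a plain VideoObject; its educational nature is carried entirely by the properties, not by a special type.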

HECoS, a new subject coding system for Higher Education


You may have missed that just before Christmas HECoS (the Higher Education Classification of Subjects) was announced. I worked a little on the project that led up to this, along with colleagues in Cetis (who led the project), Alan Paull Services and Gill Ferrell, so I am especially pleased to see it come to fruition. I believe that as a flexible classification scheme built on semantic web / linked data principles it is a significant contribution to how we share data in HE.

HECoS was commissioned as part of the Higher Education Data & Information Improvement Programme (HEDIIP) in order to find a replacement for JACS, the subject coding scheme currently used in UK HE when information from different institutions needs to be classified by subject. When I was first approached by Gill Ferrell, while she was working on a preliminary study to determine whether JACS needed changing, my initial response was that something much more in tune with semantic web principles would be very welcome (see the second part of this post that I wrote back in 2013). HECoS has been designed from the outset to be semantic web friendly. Also, one of the issues identified by the initial study was that aggregation of subjects is politically sensitive. For starters, the level of funding can depend on whether a subject is, for example, a STEM subject or not; but there are also factors of how universities as institutions are organised into departments/faculties/schools and how academics identify with disciplines. These lead to unnecessary difficulties in subject classification of courses: it is easy enough to decide whether a course is about ‘actuarial science’, but deciding whether ‘actuarial science’ should be grouped under ‘business studies’ or ‘mathematics’ is strongly context dependent. One of the decisions taken in designing HECoS was to separate the politics of how to aggregate subjects from the descriptions of those subjects and their more general relationships to each other. This is in marked contrast to JACS, where the aggregation was baked into the very identifiers used. That is not to say that aggregation hierarchies aren’t important or won’t exist: they are, and they will; indeed there is already one for the purpose of displaying subjects for navigation, but hierarchies will be created through a governance process that can consider the politics involved separately from describing the subjects.
This should make the subject classification terms more widely usable, allowing institutions and agencies who use it to build hierarchies for presentation and analysis that meet their own needs if these are different from those represented by the process responsible for the standard hierarchy. A more widely used classification scheme will have benefits for the information improvement envisaged by HEDIIP.
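As a sketch of what this separation means in linked data terms: each subject can be published as a SKOS concept with its own identifier, while grouping into broader headings lives in separately governed hierarchies that merely point at those identifiers. The URI and text below are invented for illustration and are not real HECoS identifiers:

```json
{
  "@context": { "skos": "http://www.w3.org/2004/02/skos/core#" },
  "@id": "http://example.org/hecos/actuarial-science",
  "@type": "skos:Concept",
  "skos:prefLabel": "actuarial science",
  "skos:definition": "The study of risk assessment using mathematical and statistical methods."
}
```

A hierarchy built for funding purposes could then assert its own skos:broader links to ‘mathematics’ or ‘business studies’ without touching, or being baked into, the concept’s identifier.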

The next phase of HECoS will be about implementation and adoption: for example, the creation of the governance processes detailed in the reports, moving HECoS up to proper 5-star linked data, helping with migration from JACS to HECoS, and so on. There’s a useful summary report on the HEDIIP site, and a spreadsheet of the coding system itself. There’s also still the development version Cetis used for consultation, which better represents its semantic webbiness but is non-definitive and temporary.

schema for courses


UPDATE: there is a new W3C community group, schema course extension, set up to progress these ideas. Please join if you are interested.

This is essentially an invite to get involved with building a schema extension for educational courses, by way of a description of the work so far. It was also sent as an email to the schema.org mailing list, so you can reply there.

About a year ago there was a flurry of discussion about wanting to mark up descriptions of courses in schema. Vicky Tardiff-Holland produced a proposal, which we discussed in LRMI and elsewhere, and as a result various suggestions and comments were added to that proposal.

I also led some work in LRMI around scope, use cases, requirements and existing data, which I hope will lead to validating and refining the proposal against some example data that could be used to demonstrate that it meets the use cases.

I am up for another push on courses. I share the doc I was working on in the hope that it is a good starting point. It’s a bit long, so here is an overview of what it contains:

  • scope: concerning discovery of any type of educational course (online/offline, long/short, scheduled/on-demand) Educational course defined as “some sequence of events and/or creative works which aims to build knowledge, competence or ability of learners”. (out of scope: information about students and their progression etc; information needed internally for course management rather than discovery)
  • comparators: a review of some established ways of sharing similar data
  • use cases
  • requirements arising from the use cases
  • mapping to some existing examples. I used hypothes.is to annotate existing web pages that describe different types of course, e.g. from Coursera or a University, tagging the requirement that the data was relevant to. Here’s an example of a page as tagged (click on a yellow highlight to show the relevant requirement as a comment with a tag)
    hypothes.is aggregates the selected information for each tag, to give a list of the information relevant to each use case, for example cost

I think the next step would be to review the use cases and requirements in light of some of the observations from the mapping, and to look again at the proposal to see how it reflects the data available/required. But first I want to try to get more people involved, see whether anyone has a better idea for how to progress, or if anyone wants to check the work so far and help move it forward.

Finally, I’m aware the docs and discussions so far around schema for courses are a scattered set of scraps and drafts. If there is enough interest it would be really useful to have it in one place.

On the first day of Christmas


Prompted by this tweet:

“On the second day of Christmas, my true love sent to me: Anscombe’s quartet https://t.co/0olyAiVaBY” — Judy Robertson (@JudyRobertsonUK) December 2, 2015

and with apologies:

On the first day of Christmas
My true love gave to me
A testable hypoth-e-sis

On the second day of Christmas
My truelove gave to me
Two sample means
And a testable hypothesis

On the third day of Christmas
My true love gave to me
Three peer reviews
Two sample means
And a testable hypothesis

On the fourth day of Christmas
My true love gave to me
Four scatter plots
Three peer reviews
Two sample means
And a testable hypothesis

On the fifth day of Christmas
My true love gave to me
FIIIVE SIGMAA RuuuuLE

(I always thought the carol went downhill from there)

A library shaped black hole in the web?


A library shaped black hole in the web? was the name of an OCLC event that was getting its second(?) run in Edinburgh last week, looking at how libraries can contribute to the web, using new technologies (for example linked data) to “re-envision, expose and share library data as entities (work, people, places, etc.) and what this means.”

Aside: to suggest that libraries act as a black hole in the web is quite a strong statement: black holes suck in information and at the very least mangle it, if not destroy it completely. Perhaps only a former physicist would read the title that way :-)

We were promised that we would:

learn how entity-based descriptions of library data – powered by linked data – will create new approaches to cataloguing, resource sharing and discovery. We will look at how referencing library data as entities, in Web friendly formats, enables data relationships to be rendered useful in many more contexts increasing the relevance of libraries within the wider information ecosystem.

which I wouldn’t quibble with. Here’s a summary of what I did take from the day.

Owen Stephens got us started with an introduction to the basic RDF model of triples building into a graph, pointing out that the basic services required to start doing this are already available to libraries. So if the statement you wish to make is about the authorship of a book, you need URIs to identify the book, the person and the “has creator” relationship: the first two of these are provided by, for example, the Library of Congress Authorities linked data service, the third by Dublin Core (among others). But Owen stressed that the linked data approach was more than another view of the same data, because other people can make statements about your data. Owen drew on the distinction made in the Semantic Web community between “open world” and “closed world” approaches to illustrate how this can change your view of data. The library-catalogue-as-inventory is treated “closed world”, that is, all the relevant information can be assumed to be there, so if you don’t have information about a book in your inventory then you infer that you don’t have the book. In an open world, however, someone else might have information that would change that inference, so in an open world approach you wouldn’t take lack of information about something to mean that the thing in question does not exist. The advantage of working in an open world is that further information is always being added by others from other fields, so the catalogue-as-information-source can be just one source of data for a web that goes beyond bibliographic data.

Owen gave an example of this from Early English Books, where data extracted from the colophons about the booksellers who had commissioned the printing of each book had been linked to data from historical research on these booksellers (their locations and dates of operation), which greatly enhances the value of the library catalogue data for researchers into the history of publishing.
We’ll come back to this theme of enhancing the value of the library catalogue for others.
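A minimal version of the authorship statement Owen described might look like the following in JSON-LD. The book URI is a placeholder of my own, and the creator URI merely stands in for a real Library of Congress authority URI:

```json
{
  "@context": { "dct": "http://purl.org/dc/terms/" },
  "@id": "http://example.org/books/an-example-book",
  "dct:creator": { "@id": "http://id.loc.gov/authorities/names/nXXXXXXXX" }
}
```

Because both ends of the triple are URIs rather than local strings, anyone else on the web can make further statements about the same book or the same person, which is exactly the open-world behaviour described above.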

Owen has a more complete summary of his presentation available.

Neil Jefferies of the Bodleian Library built on what Owen had been discussing. He identified the core interest of the library as the intellectual content of the books, letting archives and museums deal with the book as an object, and he mentioned the hierarchical nature of intellectual content: data -> facts -> information -> knowledge. He added that the library's key strengths are expertise in retention and search, and access to the physical originals. Technology, though, has shifted what the library may achieve, so that it should be about creating knowledge, not just holding data or sharing information. He went on to give more examples of projects showing libraries using linked data to facilitate knowledge creation than I could manage to take notes on, but among the highlights was LD4L, Linked Data for Libraries, a $999k Mellon-funded project involving Cornell, Harvard and Stanford, which aims to create a “Scholarly Resource Semantic Information Store” which works both within individual institutions and links to other domains. The aim is to build this with OSS, and Neil mentioned the VIVO platform and community as an example of this.

Neil also spoke about the richness required to model all the information relevant to knowledge in the library. CAMELOT is the data model used for knowledge held at the Bodleian; it includes a lot of provenance and contextual modelling: linked data is about assertions, and you need context and provenance to be able to judge the truth of these (here's a consequence of the open world nature of linked data: do you know where your data came from? do you know the assumptions made when creating it?). BIBFRAME, or MARC in RDF, is not enough: it holds on to the idea of the central authority of the catalogue(-as-inventory), and in linked data authority is more diffuse.
The data model for LD4L will likely include BIBFRAME, FaBIO, VIVO-ISF, OpenAnnotation, PAV, OAI-ORE, SKOS, VIAF, ORCID, ISNI and OCLC Works, plus circulation, citation and usage data, and will likely need a good deal of entity reconciliation to deal with many people talking about the same thing.

So much for the idea and the promise of linked data for libraries. I would next like to describe a trio of talks that dealt with the question “what is to be done?”

Cathy Dolbear from Oxford University Press spoke about providing semantic and bibliographic data for libraries. OUP provide metadata in a lot of different ways, varying from the venerable OAI-PMH (which seems to have little uptake) to RDFa embedded in product web pages (which may soon become JSON-LD). And yet most people find OUP content via direct links and search engines; a spot sample of one day’s referrers showed library discovery services accounted for ~1% of the hits. Cathy stressed that there were patches where library discovery services were more significant, but on the whole it was hard to see library use. Internally OUP have their own schema, OxMetaML, and are moving to a more graph-based approach; they transform this to the standards used by discovery services, e.g. HighWire, PRISM, JATS, PubMed etc. Cathy seemed to want to find ways that OUP metadata could be used to support the endeavours of libraries to use linked data as described above, but wanted to know, if she published linked data, how it would be used: OUP can only spend money on doing things they know people are going to use, and it is hard to see who is using linked data. I got the strong impression that Cathy knew the ideas, and was aware of the project work being done with linked data, but her key point was that OUP need more information from libraries about what data is needed for real-world service delivery before they can be sure whether it’s worth creating and delivering metadata.

Ken Chad spoke about “Linked data: why care and what do we do?”, describing the current status of linked data in terms of chasms in the technology adoption lifecycle and troughs of disillusionment in the hype cycle, both of which echo Cathy’s question about how we get beyond interesting projects to real-world service delivery. In my own mind this is key. The initial draft of RDF is about 18 years old. The “linked data” reboot is about 9 years old. When do we stop talking about early adopters and decide we’ve got all the adopters we’re going to get? Or at least decide that, if we want more adopters, we need a radically different approach. Ken spoke about approaching the problem in terms of the Jobs to be Done (the link to Ken’s presentation above describes that approach), which I have no problem with, and I certainly would agree with Ken’s suggestion that the job to be done is to “design a library website that helps students focus less on finding and more on studying”. However, I do think there is an extra layer to this problem, in that it requires other people to provide things you can link to. Buying a phone won’t help get a job done if you’re the only person with a phone.

Gill Hamilton of the National Library of Scotland spoke to the theme of how to be ready for linked data even if you’re not convinced. This appealed to me. She gave three top tips: (1) following Google, think of things not strings, and record URIs not names; (2) you probably need a rich and detailed schema for your own specialised uses of the data: don’t dumb this down to a generic ontology, but publish it and map to the generic; (3) concentrate on what you have that’s unique and let other people handle the generic. To these Gill added three lesser tips: license your metadata as CC0, demand better systems, and use open vocabularies.

Richard Wallis gave the final presentation, “The web of data is our oyster”, which he started by describing a view of the development of the web from a web of documents, to a web of dynamic documents, to a web of information discovery, to a web of data, to a web of knowledge (with knowledge graphs and data mining). He suggested that libraries were engaged at the start, but became disengaged, maybe even hostile, at the point of the web of discovery. One change that libraries had missed through this was the move from records (by definition, relating to the past) to living descriptions in terms of entities and relationships. This, he suggested, had meant that many library projects on sharing data had led to “linked data silos” which search engines cannot get into. The current approach to giving search engines access to entity and relationship data is schema.org, and Richard described his own work on extending schema for bibliographic data. Echoing Gill’s second tip, he stressed that this was not intended to be the appropriate way to meet libraries’ metadata needs, or even the way that libraries should use the web to share data between themselves, but it is a way that libraries can share their data with the web (of discovery, of data, of knowledge) as a whole.

All in all, a good day. Nothing spectacularly new, but useful to see it all lined up and presented so coherently. Many thanks to Owen, Neil, Cathy, Ken, Gill and Richard, and to OCLC for arranging the event.