Research

NAR Database Paper

The new year started with a new publication, an article in the 2018 NAR Database issue about the IUPHAR Guide to Pharmacology Database.

My involvement came from Liam Bruce’s honours project. Liam developed the RDB2RDF mappings that convert the existing relational content into an RDF representation. The mappings are executed using the Morph-RDB R2RML engine.
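
To give a flavour of what such a mapping expresses, below is a minimal Python sketch (using rdflib) of the relational-to-RDF conversion that an R2RML TriplesMap describes. The ligands table, column names, and namespace are invented for illustration; they are not the actual Guide to Pharmacology schema or mappings.

# A minimal sketch of the relational-to-RDF conversion that an R2RML
# mapping expresses. Table, columns, and namespace are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

GTP = Namespace("https://example.org/gtp/")  # hypothetical namespace

# A row as it might come back from the relational database.
rows = [
    {"ligand_id": 1234, "name": "aspirin", "type": "Synthetic organic"},
]

g = Graph()
for row in rows:
    subject = URIRef(GTP[f"ligand/{row['ligand_id']}"])  # subject IRI from primary key
    g.add((subject, RDF.type, GTP.Ligand))
    g.add((subject, GTP.name, Literal(row["name"])))
    g.add((subject, GTP.ligandType, Literal(row["type"])))

print(g.serialize(format="turtle"))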

To ensure that we abide by the FAIR data principles, we also generate machine-processable metadata descriptions of the data that conform to the HCLS Community Profile.
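
As a rough illustration of such a description, the sketch below builds a heavily abbreviated HCLS-style dataset description with rdflib. The dataset IRI and literal values are placeholders, and only a few of the profile's properties are shown.

# A heavily abbreviated sketch of an HCLS-style dataset description.
# The dataset IRI and literal values are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

DCTYPES = Namespace("http://purl.org/dc/dcmitype/")

g = Graph()
dataset = URIRef("https://example.org/gtp/dataset")  # placeholder IRI
g.add((dataset, RDF.type, DCTYPES.Dataset))
g.add((dataset, DCTERMS.title, Literal("Guide to Pharmacology", lang="en")))
g.add((dataset, DCTERMS.description, Literal("Expert-curated pharmacology data.", lang="en")))
g.add((dataset, DCTERMS.license, URIRef("https://example.org/licence")))  # placeholder
g.add((dataset, DCTERMS.publisher, URIRef("https://example.org/publisher")))  # placeholder

print(g.serialize(format="turtle"))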

ISWC2017 Papers

I have had two papers accepted at the events that make up ISWC2017.

My PhD student Qianru Zhou has been working on using RDF stream processing to detect anomalous events from telecommunication network messages. Our paper, to be presented at the Web Stream Processing workshop, focuses on the scenario of detecting a disaster such as the capsizing of the Eastern Star on the Yangtze River [1].
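
The algorithm itself is described in the paper; purely to illustrate the window-based idea, here is a generic Python sketch that flags a window whose event count deviates sharply from recent history. The counts, history length, and threshold are invented, and this is not the algorithm from the paper.

# A generic sketch of window-based anomaly detection over a stream of
# event counts. Window size, history length, and threshold are invented.
from collections import deque
from statistics import mean, stdev

def detect(window_counts, history=10, k=3.0):
    """Flag a window whose event count deviates from recent history."""
    recent = deque(maxlen=history)
    for i, count in enumerate(window_counts):
        if len(recent) == recent.maxlen:
            mu, sigma = mean(recent), stdev(recent)
            if abs(count - mu) > k * max(sigma, 1e-9):
                yield i, count  # anomalous window
        recent.append(count)

# Example: a sudden spike of events in the final window.
counts = [50, 52, 48, 51, 49, 50, 53, 47, 50, 52, 51, 49, 400]
for idx, c in detect(counts):
    print(f"window {idx}: anomalous count {c}")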

The second paper is a poster in the main conference that provides an overview of the Bioschemas project, in which we are identifying the Schema.org markup that is of primary importance for life science resources. Hopefully the paper title will pull the punters in for the session [2].
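
For a flavour of what such markup looks like, here is a sketch of Schema.org JSON-LD for a life-science dataset, generated from Python. The property selection and values are illustrative only, not the finalised Bioschemas profile.

# A sketch of the kind of Schema.org JSON-LD markup that Bioschemas
# profiles for life-science resources. Properties and values are
# illustrative, not the finalised Bioschemas profile.
import json

markup = {
    "@context": "http://schema.org",
    "@type": "Dataset",
    "name": "Example protein annotation dataset",  # placeholder
    "description": "Annotations of proteins ...",  # placeholder
    "url": "https://example.org/dataset",          # placeholder URL
    "keywords": "protein, annotation",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}
print(json.dumps(markup, indent=2))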

[1] Qianru Zhou, Stephen McLaughlin, Alasdair J. G. Gray, Shangbin Wu, and Chengxiang Wang. Lost Silence: An emergency response early detection service through continuous processing of telecommunication data streams. In Web Stream Processing 2017, Vienna, Austria, October 2017.
[Bibtex]
@InProceedings{ZhouEtal2017:LostSilence:WSP2017,
  author    = {Qianru Zhou and Stephen McLaughlin and Alasdair J G Gray and Shangbin Wu and Chengxiang Wang},
  title     = {Lost Silence: An emergency response early detection service through continuous processing of telecommunication data streams},
  booktitle = {Web Stream Processing 2017},
  year      = {2017},
  month     = oct,
  address   = {Vienna, Austria},
  url       = {http://ceur-ws.org/Vol-1936/paper-03.pdf},
  abstract  = {Early detection of significant traumatic events, e.g. terrorist events, ship capsizes, is important to ensure that a prompt emergency response can occur. In the modern world telecommunication systems can and do play a key role in ensuring a successful emergency response by detecting such incidents through significant changes in calls and access to the networks. In this paper a methodology is illustrated to detect such incidents immediately (with the delay in the order of milliseconds), by processing semantically annotated streams of data in cellular telecommunication systems. In our methodology, live information of phones' positions and status are encoded as RDF streams. We propose an algorithm that processes streams of RDF annotated telecommunication data to detect abnormality. Our approach is exemplified in the context of capsize of a passenger cruise ship but is readily translatable to other incidents. Our evaluation results show that with properly chosen window size, such incidents can be detected effectively.}
}
[2] Alasdair J. G. Gray, Carole Goble, Rafael C. Jimenez, and the Bioschemas Community. Bioschemas: From Potato Salad to Protein Annotation. In ISWC 2017 Posters & Demonstrations, Vienna, Austria, October 2017.

SICSA Digital Humanities Event

On 24 August I attended the SICSA Digital Humanities event hosted at Strathclyde University. The event, organised by Martin Halvey and Frank Hopfgartner, brought together cultural heritage practitioners and researchers from the humanities and computer science.

The day started off with a keynote from Lorna Hughes, Professor of Digital Humanities at the University of Glasgow. She highlighted that there is no single definition of digital humanities (the linked site presents a random definition from a set collected at another event). However, at its core, digital humanities consists of:

  • Digital content
  • Digital methods
  • Tools

The purpose of digital humanities is to enable better and/or faster outputs, as well as to conceptualise new research questions.

Lorna showcased several projects that she has been involved with, highlighting the issues that were faced, before identifying a set of lessons learned and challenges going forward (see her blog and SlideShare). She highlighted that only about 10% of content has been transformed into digital form, and of that only 3% is openly available. Additionally, some artefacts have been digitised in multiple ways at different points in time, and the differences between these digital forms tell a story about the object.

Lorna highlighted the following challenges:

  • Enabling better understanding of digital content
  • Developing underlying digital infrastructure
  • Supporting the use of open content
  • Enabling the community
  • Working with born-digital content

The second part of the day saw us brainstorming ideas in groups. Two potential apps were outlined to help the public get more out of the cultural heritage environment around them.

There was an interesting panel discussion focused on what you would do with a mythical £350m. It also involved locking up 3D scanners, at least until appropriate methodologies and metadata were made available.

The day finished off with a keynote from Daniela Petrelli, Sheffield Hallam University, focusing on the outputs of the EU meSch project. She proposed a holistic design approach to the visitor experience, encompassing interaction design, product design, and content design.

Summary

There are lots of opportunities for collaboration between digital humanities and computing. From my perspective, there are lots of interesting challenges around capturing metadata about data, linking between datasets, and capturing the provenance of workflows.

Throughout the day, various participants were tweeting with the #dhfest hashtag.

DUCS not LOD

The following is an excerpt from a blog post by Keir Winesmith, Head of Digital at the San Francisco Museum of Modern Art (@SFMOMAlab):

Linked Open Data may sound good and noble, but it’s the wrong way around. It is a truth universally acknowledged, that an organization in possession of good Data, must want it Open (and indeed, Linked).

Well, I call bullshit. Most cultural heritage organizations (like most organizations) are terrible at data. And most of those who are good at collecting it, very rarely use it effectively or strategically.

Instead of Linked Open Data (LOD), Keir argues for DUCS:

I propose an alternative acronym, and an alternative order of importance.

  • D. Data. Step one, collect the data that is most likely to help you and your organization make better decisions in the future. For example collection breadth, depth, accuracy, completeness, diversity, and relationships between objects and creators.
  • U. Utilise. Actually use the data to inform your decisions, and test your hypotheses, within the bounds of your mission.
  • C. Context. Provide context for your data, both internally and externally. What’s inside? How is it represented? How complete is it? How accurate? How current? How was it gathered?
  • S. Share. Now you’re ready to share it! Share it with context. Share it with the communities that are included in it first, follow the cultural heritage strategy of “nothing about me, without me”. Reach out to the relevant students, scholars, teachers, artists, designers, anthropologists, technologists, and whomever could use it. Get behind it and keep it up to date.

I’m against LOD, if it doesn’t follow DUCS first.

If you’re going to do it, do it right.

Source: Against Linked Open Data – Keir Winesmith – Medium

An Identifier Scheme for the Digitising Scotland Project

The Digitising Scotland project is having the vital records of Scotland transcribed from images of the original handwritten civil registers. Linking the resulting dataset of 24 million vital records, covering the lives of 18 million people, is a major challenge requiring improved record linkage techniques. Discussions within the multidisciplinary, widely distributed Digitising Scotland project team have been hampered by the teams in each institution using their own identification scheme. To enable fruitful discussions within the Digitising Scotland team, we required a mechanism for uniquely identifying each individual represented on the certificates. From the identifier it should be possible to determine the type of certificate and the role each person played. We have devised a protocol that generates a unique identifier for any individual on a certificate, without using a computer, by exploiting the National Records of Scotland’s registration districts. Importantly, the approach does not rely on the handwritten content of the certificates, which reduces the risk of the content being misread and resulting in an incorrect identifier. The resulting identifier scheme has improved the internal discussions within the project. This paper discusses the rationale behind the chosen identifier scheme and presents the format of the different identifiers.
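
The paper defines the actual identifier formats; purely as an illustration of the idea, the Python sketch below composes an identifier from certificate-level metadata (certificate type, registration district, year, entry number, role) rather than handwritten content. The field layout and role names here are invented, not the scheme from the paper.

# An illustrative sketch of composing a person identifier from
# certificate-level metadata rather than handwritten content.
# The field layout is invented; the paper defines the actual format.
def person_id(cert_type: str, district: int, year: int,
              entry: int, role: str) -> str:
    """cert_type: e.g. B/M/D; role: e.g. 'bride', 'groom', 'mother'."""
    return f"{cert_type}/{district:03d}/{year}/{entry:04d}/{role}"

# The bride on marriage certificate entry 27, district 644, in 1891.
print(person_id("M", 644, 1891, 27, "bride"))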

The work reported in the paper was supported by the British ESRC under grants ES/K00574X/1 (Digitising Scotland) and ES/L007487/1 (Administrative Data Research Centre – Scotland).

My coauthors are:

  • Özgür Akgün, University of St Andrews
  • Ahmad Alsadeeqi, Heriot-Watt University
  • Peter Christen, Australian National University
  • Tom Dalton, University of St Andrews
  • Alan Dearle, University of St Andrews
  • Chris Dibben, University of Edinburgh
  • Eilidh Garrett, University of Essex
  • Graham Kirby, University of St Andrews
  • Alice Reid, University of Cambridge
  • Lee Williamson, University of Edinburgh

The work reported in this talk is the result of the Digitising Scotland Raasay Retreat. Also at the retreat were:

  • Julia Jennings, University at Albany
  • Christine Jones
  • Diego Ramiro-Fariñas, Centre for Human and Social Sciences (CCHS) of the Spanish National Research Council (CSIC)