Summer of Good News

Picture of Qianru on her graduation day

This has been a good summer, not just because the British weather has been somewhat more summery than usual.

Qianru’s Graduation

In June, my first PhD student graduated. Dr Qianru Zhou investigated the use of an ontology to enable a software defined network. Her PhD thesis is “Ontology-driven knowledge based autonomic management for telecommunication networks: Theory, implementation and applications”.

Promotion

As of today (1 August 2018), I am now an Associate Professor (equivalent to Senior Lecturer in traditional British universities).

Grant Success

Today saw the start of a collaboration, funded by an Interface Voucher, with VisionWare, a Glasgow-based company specialising in record linkage. We are investigating combining the data corruption framework that Ahmad has been developing with the synthetic data that VisionWare have been generating. The purpose is to enable us to evaluate, and thus improve, record linkage.

SICSA Digital Humanities Event

On 24 August I attended the SICSA Digital Humanities event hosted at Strathclyde University. The event was organised by Martin Halvey and Frank Hopfgartner. The event brought together cultural heritage practitioners, and researchers from the humanities and computer science.

The day started off with a keynote from Lorna Hughes, Professor of Digital Humanities at the University of Glasgow. She highlighted that there is no single definition of digital humanities (the weblink presents a random definition from a set collected at another event). However, at its core, digital humanities consists of:

  • Digital content
  • Digital methods
  • Tools

The purpose of digital humanities is to enable better and/or faster outputs, as well as to conceptualise new research questions.

Lorna showcased several projects that she has been involved with, highlighting the issues that were faced, before identifying a set of lessons learned and challenges going forward (see her blog and SlideShare). She highlighted that only about 10% of content has been transformed into a digital form, and of that only 3% is openly available. Additionally, some artefacts have been digitised in multiple ways at different points in time, and the differences between these digital forms tell a story about the object.

Lorna highlighted the following challenges:

  • Enabling better understanding of digital content
  • Developing underlying digital infrastructure
  • Supporting the use of open content
  • Enabling the community
  • Working with born-digital content.

The second part of the day saw us brainstorming ideas in groups. Two potential apps were outlined to help the public get more out of the cultural heritage environment around us.

There was an interesting panel discussion focused on what you would do with a mythical £350m. It also touched on locking up 3D scanners, at least until appropriate methodology and metadata are made available.

The day finished off with a keynote from Daniela Petrelli, Sheffield Hallam University, focusing on the outputs of the EU meSch project. She proposed a holistic design approach to the visitor experience, encompassing interaction design, product design, and content design. See the embedded video below for an idea.

Summary

There are lots of opportunities for collaboration between digital humanities and computing. From my perspective, there are lots of interesting challenges around capturing dataset metadata, linking between datasets, and capturing the provenance of workflows.

Throughout the day, various participants were tweeting with the #dhfest hashtag.

An Identifier Scheme for the Digitising Scotland Project

The Digitising Scotland project is having the vital records of Scotland transcribed from images of the original handwritten civil registers. Linking the resulting dataset of 24 million vital records, covering the lives of 18 million people, is a major challenge requiring improved record linkage techniques. Discussions within the multidisciplinary, widely distributed Digitising Scotland project team have been hampered by the teams in each of the institutions using their own identification schemes. To enable fruitful discussions within the Digitising Scotland team, we required a mechanism for uniquely identifying each individual represented on the certificates. From the identifier it should be possible to determine the type of certificate and the role each person played. We have devised a protocol that generates a unique identifier for any individual on a certificate, without using a computer, by exploiting the National Records of Scotland’s registration districts. Importantly, the approach does not rely on the handwritten content of the certificates, which reduces the risk of the content being misread and resulting in an incorrect identifier. The resulting identifier scheme has improved the internal discussions within the project. This paper discusses the rationale behind the chosen identifier scheme, and presents the format of the different identifiers.
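
To give a flavour of how such identifiers can be composed without reading any handwritten content, here is a minimal sketch in Python. The field layout, role codes, and function below are my own invented illustration, not the actual format chosen by the project.

# Hypothetical sketch of a human-constructable identifier scheme.
# The components and their order are illustrative assumptions, NOT
# the actual Digitising Scotland format.

CERT_TYPES = {"birth": "B", "marriage": "M", "death": "D"}
ROLES = {"baby": "01", "mother": "02", "father": "03",
         "bride": "04", "groom": "05", "deceased": "06",
         "informant": "07", "registrar": "08"}

def make_identifier(district: str, year: int, entry: int,
                    cert_type: str, role: str) -> str:
    """Compose an identifier from printed registration metadata only.

    Every component (district number, year, entry number) appears on
    the register page itself, so nothing handwritten is interpreted.
    """
    return "/".join([
        district,               # NRS registration district number
        str(year),              # registration year
        f"{entry:04d}",         # entry number within the register
        CERT_TYPES[cert_type],  # certificate type
        ROLES[role],            # role played on the certificate
    ])

# e.g. the mother on entry 27 of an 1871 birth register in district 110:
print(make_identifier("110", 1871, 27, "birth", "mother"))
# -> 110/1871/0027/B/02

Because every component can be read off the printed register page, two people can construct the same identifier independently, which is what makes the scheme usable without a computer.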

The work reported in the paper was supported by the British ESRC under grants ES/K00574X/1 (Digitising Scotland) and ES/L007487/1 (Administrative Data Research Centre – Scotland).

My coauthors are:

  • Özgür Akgün, University of St Andrews
  • Ahmad Alsadeeqi, Heriot-Watt University
  • Peter Christen, Australian National University
  • Tom Dalton, University of St Andrews
  • Alan Dearle, University of St Andrews
  • Chris Dibben, University of Edinburgh
  • Eilidh Garrett, University of Essex
  • Graham Kirby, University of St Andrews
  • Alice Reid, University of Cambridge
  • Lee Williamson, University of Edinburgh

The work reported in the paper is the result of the Digitising Scotland Raasay Retreat. Also at the retreat were:

  • Julia Jennings, University of Albany
  • Christine Jones
  • Diego Ramiro-Farinas, Centre for Human and Social Sciences (CCHS) of the Spanish National Research Council (CSIC)

Seminar: PhD Progression Talks

A double bill of PhD progression talks (abstracts below):

Venue: 3.07 Earl Mountbatten Building, Heriot-Watt University, Edinburgh

Time and Date: 11:15, 8 May 2017

Evaluating Record Linkage Techniques

Ahmad Alsadeeqi

Many computer algorithms have been developed to automatically link historical records based on a variety of string matching techniques. These generate an assessment of how likely two records are to be the same. However, it remains unclear how to assess the quality of the linkages computed due to the absence of absolute knowledge of the correct linkage of real historical records – the ground truth. The creation of synthetically generated datasets for which the ground truth linkage is known helps with the assessment of linkage algorithms but the data generated is too clean to be representative of historical records.

We are interested in assessing data linkage algorithms under different data quality scenarios, e.g. with errors typically introduced by a transcription process, or where books have been nibbled by mice. We are developing a data corruption model that injects corruptions into datasets based on given corruption methods and probabilities. We have classified the different forms of corruption found in historical records into four types based on the scope of the corruption’s effect. These types are character level (e.g. an f transcribed as an s, as in OCR corruptions), attribute level (e.g. a gender swap, male recorded as female due to a false entry), record level (e.g. records missing for reasons such as loss of a certificate), and group-of-records level (e.g. coffee spilt over a page, or parish records lost in a fire). This will give us the ability to evaluate record linkage algorithms over synthetically generated datasets with known ground truth and with data corruptions matching a given profile.
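
To make the four corruption scopes concrete, here is a minimal sketch of such an injector. The function names, rates, and substitutions are illustrative assumptions, not Ahmad’s actual framework.

import random

def corrupt_character(value: str) -> str:
    """Character level: mimic an OCR confusion such as f -> s."""
    return value.replace("f", "s", 1) if "f" in value else value

def corrupt_attribute(record: dict) -> dict:
    """Attribute level: e.g. a gender swap caused by a false entry."""
    if record.get("gender") in ("male", "female"):
        record["gender"] = "female" if record["gender"] == "male" else "male"
    return record

def corrupt_dataset(records: list, p_char=0.05, p_attr=0.02,
                    p_record=0.01, seed=42) -> list:
    """Apply character, attribute, and record level corruptions.

    Record level corruption drops a record entirely (e.g. a lost
    certificate); a group-of-records corruption (e.g. a coffee spill)
    would drop a contiguous block of records in the same way.
    """
    rng = random.Random(seed)
    corrupted = []
    for rec in records:
        if rng.random() < p_record:
            continue                      # record level: record lost
        rec = dict(rec)                   # leave the input untouched
        if rng.random() < p_attr:
            rec = corrupt_attribute(rec)  # attribute level
        if rng.random() < p_char:
            rec["surname"] = corrupt_character(rec.get("surname", ""))
        corrupted.append(rec)
    return corrupted

Because the injector starts from a synthetic dataset with known ground truth, linkage quality under each corruption profile can be measured exactly.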

Computer-Aided Biomimetics: Knowledge Extraction

Ruben Kruiper

Biologically inspired design concerns copying ideas from nature to various other domains, e.g. natural computing. Biomimetics is a sub-field of biologically inspired design that focuses specifically on solving technical/engineering problems. Because engineers lack biological knowledge, the process of biomimetics is non-trivial and remains adventitious. Therefore, computational tools have been developed that aim to support engineers during a biomimetics process by integrating large amounts of relevant biological knowledge. Existing tools apply NLP techniques to biological research papers to build dedicated knowledge bases. However, these tools impose an engineering view on the biological data. I will talk about the support that ‘Computer-Aided Biomimetics’ tools should provide, introducing a theoretical basis for further research on the appropriate computational techniques.
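
As a flavour of the kind of NLP step such tools build on, the sketch below pulls candidate terms out of a biology abstract with spaCy. It is a generic illustration of knowledge extraction, not the approach proposed in the talk.

import spacy

# Generic illustration: noun chunks as a crude first pass at the
# domain terms a dedicated knowledge base might index.
nlp = spacy.load("en_core_web_sm")

text = ("The gecko adheres to smooth surfaces using microscopic "
        "setae that exploit van der Waals forces.")

doc = nlp(text)
candidates = [chunk.text for chunk in doc.noun_chunks]
print(candidates)
# e.g. ['The gecko', 'smooth surfaces', 'microscopic setae', ...]

The hard part, as the abstract argues, is not extracting terms but avoiding imposing an engineering view on what those terms mean.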

New Paper: Reproducibility with Administrative Data

Our journal article [1] looks at encouraging good practice to enable reproducible data analysis workflows. It is the result of a collaboration between social scientists and a computer scientist within the ADRC-Scotland.

Abstract: Powerful new social science data resources are emerging. One particularly important source is administrative data, which were originally collected for organisational purposes but often contain information that is suitable for social science research. In this paper we outline the concept of reproducible research in relation to micro-level administrative social science data. Our central claim is that a planned and organised workflow is essential for high quality research using micro-level administrative social science data. We argue that it is essential for researchers to share research code, because code sharing enables the elements of reproducible research. First, it enables results to be duplicated and therefore allows the accuracy and validity of analyses to be evaluated. Second, it facilitates further tests of the robustness of the original piece of research. Drawing on insights from computer science and other disciplines that have been engaged in e-Research we discuss and advocate the use of Git repositories to provide a useable and effective solution to research code sharing and rendering social science research using micro-level administrative data reproducible.
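
In the spirit of the workflow the paper advocates, here is a small sketch of recording which version of shared analysis code produced a result; the file names and layout are my own assumptions rather than the paper’s prescription.

import datetime
import json
import subprocess

def current_commit() -> str:
    """Return the Git commit hash of the analysis code being run."""
    return subprocess.check_output(
        ["git", "rev-parse", "HEAD"], text=True).strip()

def save_result(result: dict, path: str = "result.json") -> None:
    """Write a result with enough provenance to reproduce it later."""
    result["_provenance"] = {
        "git_commit": current_commit(),
        "run_at": datetime.datetime.now().isoformat(),
    }
    with open(path, "w") as f:
        json.dump(result, f, indent=2)

save_result({"mean_income": 21450.0})  # illustrative figure only

Anyone with access to the shared repository can then check out exactly that commit and re-run the analysis that produced the file.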

[1] [doi] C. J. Playford, V. Gayle, R. Connelly, and A. J. Gray, “Administrative social science data: The challenge of reproducible research,” Big Data & Society, vol. 3, iss. 2, 2016.
[Bibtex]
@Article{Playford2016BDS,
  author   = {Christopher J Playford and Vernon Gayle and Roxanne Connelly and Alasdair JG Gray},
  title    = {Administrative social science data: The challenge of reproducible research},
  journal  = {Big Data \& Society},
  year     = {2016},
  volume   = {3},
  number   = {2},
  month    = dec,
  url      = {http://journals.sagepub.com/doi/full/10.1177/2053951716684143},
  doi      = {10.1177/2053951716684143},
  abstract = {Powerful new social science data resources are emerging. One particularly important source is administrative data, which were originally collected for organisational purposes but often contain information that is suitable for social science research. In this paper we outline the concept of reproducible research in relation to micro-level administrative social science data. Our central claim is that a planned and organised workflow is essential for high quality research using micro-level administrative social science data. We argue that it is essential for researchers to share research code, because code sharing enables the elements of reproducible research. First, it enables results to be duplicated and therefore allows the accuracy and validity of analyses to be evaluated. Second, it facilitates further tests of the robustness of the original piece of research. Drawing on insights from computer science and other disciplines that have been engaged in e-Research we discuss and advocate the use of Git repositories to provide a useable and effective solution to research code sharing and rendering social science research using micro-level administrative data reproducible.}
}

Celebrating 50 years of Computer Science at HWU

Old hardware

Display of old equipment used within computer science.

This year sees a double celebration in the Department of Computer Science at Heriot-Watt University – it is 50 years since we launched the first BSc Computer Science degree in Scotland, and 50 years since Heriot-Watt was granted university status. To celebrate we had a series of events last week including an open day and dinner for former staff and students.

During the open day we had a variety of displays and activities to highlight the current research taking place in the department. There was a display of some of the old equipment that has been used in the department. While this mostly focused on storage media, it also included my first computer – a BBC Model B. Admittedly, a lot of games were played on it in my youth.

Pepper robot

Demonstration of the Pepper robot that is being used by the Interaction Lab to improve speech interactions.

Each of the labs in the department had displays, including the new Pepper robot in the Interaction Lab and one of the Nao robots from the Robotics Lab. The Interactive and Trustworthy Technologies Lab were demonstrating the interactive games they have developed to help with rehabilitation after falls and knee replacements. The Semantic Web Lab were demonstrating the difficulties of reconstructing a family tree using vital records information.

At the dinner in the evening we had two guest speakers: Alex Balfour, the first head of department and instigator of the degree programme, and Ian Ritchie, entrepreneur and graduate of the department. Both gave entertaining speeches reflecting on their time in the department, and on their experiences of the Mountbatten Building, now the Apex Hotel in the Grassmarket, where we had the dinner.

See these pages for more about the history of computer science at Heriot-Watt.

Genealogy reconstruction game

Current PhD students attempting to reconstruct a family tree from their entries in the birth, marriage, and death records.

Rehabilitation game

Game to help rehabilitation patients perform their physiotherapy exercises correctly.

Will the real Kevin Macleod please line up?

Last week I attended the Digitising Scotland Project Colloquium at Raasay House (featured image above) on the Isle of Raasay. The colloquium was a gathering of historians and computer scientists to discuss the challenges of linking the vital records of the people of Scotland between 1851 and 1974.

The Digitising Scotland Project is having the birth, marriage, and death records of Scotland transcribed from the scans of the original handwritten registration books. This process is not without its own challenges (try reading this birth record of a famous Scottish artist and architect), but the focus of the colloquium was on what happens after the records have been transcribed.

Each Scottish vital record identifies several individuals, e.g. on a birth record you will have the baby, their parents, the informant, and the registrar. The same individuals will appear on multiple records relating to events in their own life, e.g. an individual will have a birth record, potentially one or more marriage records, and a death record, assuming that they have not emigrated. They can also appear in the records of other individuals, e.g. as a mother on a birth record, the mother-of-the-bride on a marriage record, or the doctor on a death record. The challenge is how to identify the same individual across all the records, when all you have is a name (first and last) and potentially the age.

The problem is compounded in an area like Skye, which was one of the focus regions of the Digitising Scotland project, because there is a relatively small pool of names to draw upon. For example, a name like Kevin Macleod will appear on multiple records. In some cases the name will correspond to a single Kevin Macleod, in other cases it will be a closely related Kevin Macleod, e.g. Kevin Macleod the father of Kevin Macleod, and in others the two Kevin Macleods will not be related at all. The challenge is how to develop a computer algorithm that is capable of making these distinctions.
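
As a hint of where such an algorithm starts, the sketch below scores a pair of record entries on name and age similarity. The weights and the similarity measure are invented for illustration and are far cruder than what the project requires, not least because they ignore relationships between individuals.

from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Approximate string similarity between two names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Weighted evidence that two record entries are the same person."""
    first = name_similarity(rec_a["first"], rec_b["first"])
    last = name_similarity(rec_a["last"], rec_b["last"])
    # Ages on historical records may be absent or approximate.
    if rec_a.get("age") is not None and rec_b.get("age") is not None:
        age = max(0.0, 1.0 - abs(rec_a["age"] - rec_b["age"]) / 10.0)
    else:
        age = 0.5  # unknown: neither evidence for nor against
    return 0.35 * first + 0.45 * last + 0.20 * age

a = {"first": "Kevin", "last": "Macleod", "age": 34}
b = {"first": "Kevin", "last": "McLeod", "age": 35}
print(match_score(a, b))  # high score, but the same person?

A high score here still cannot separate Kevin Macleod from Kevin Macleod his father; that needs the surrounding family context on the records.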

The colloquium was a great opportunity for historians and computer scientists to discuss the challenges and help each other to develop a solution. However, first we had to agree on a common understanding of terms such as “record” and “individual”.

Overall, we made great progress on exchanging ideas and techniques. We heard how similar challenges are being addressed in a related project focusing on North Orkney, how historians approach the record linkage challenge, and about work on automatically classifying causes of death to their ICD-10 codes and occupations to HISCO codes. There was also time to socialise and enjoy some of the scenery of Raasay, a beautiful island the size of Manhattan but with a population of only 160.

View from the meeting room

Sunset over Portree, Skye

Data Integration in a Big Data Context

Today I had the pleasure of visiting the Urban Big Data Centre (UBDC) to give a seminar on Data Integration in a Big Data context (slides below). The idea for the seminar came about due to my collaboration with Nick Bailey (Associate Director of the UBDC) in the Administrative Data Research Centre for Scotland (ADRC-S).

In the seminar I wanted to highlight the challenges of data integration that arise in a Big Data context and show examples from my past work that would be relevant to those in the UBDC. In the presentation, I argue that RDF provides a good approach for data integration but it does not solve the basic challenges of messy data and generating mappings between datasets. It does however lay these challenges bare on the table, as Frank van Harmelen highlighted in his SWAT4LS keynote in 2013.
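
As a small illustration of that point, the sketch below merges two tiny invented datasets and asserts the mapping between them by hand, using rdflib. RDF makes the mechanics of integration trivial; deciding that the two identifiers co-refer remains a human judgement.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDFS

# Invented URIs and data: two sources describing the same city under
# different identifiers.
EX1 = Namespace("http://example.org/census/")
EX2 = Namespace("http://example.org/transport/")

g = Graph()
g.add((EX1.glasgow, RDFS.label, Literal("Glasgow")))
g.add((EX2.city42, EX2.busRoutes, Literal(110)))

# The mapping triple encodes a judgement that RDF cannot make for us.
g.add((EX1.glasgow, OWL.sameAs, EX2.city42))

# A query can now traverse the mapping and combine both sources.
query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    SELECT ?label ?routes WHERE {
        ?a rdfs:label ?label ;
           owl:sameAs ?b .
        ?b ?p ?routes .
    }
"""
for row in g.query(query):
    print(row.label, row.routes)  # Glasgow 110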

The first use case is drawn from my work on the EU SemSorGrid4Env project where we were developing an integrated view for emergency response planning. The particular use case shown is that of coastal flooding on the south coast of England. Although this project finished in 2011, I am still involved with developing RDF and SPARQL continuous data extensions; see the W3C RDF Stream Processing Community Group for details.

The second use case is drawn from my work on the EU Open PHACTS project. I showed the approach we developed for supporting user-controlled views of the integrated data through Scientific Lenses. I also talked about the successes of the project, and the fact that it is currently being actively used for pharmacology research, receiving over 20 million hits a month.

I finished the talk with an overview of the Administrative Data Research Centre for Scotland (ADRC-S) and my work on linking birth, marriage, and death records. I am hoping that we can adopt the lenses approach together with incorporating feedback on the linkages from the researchers who will use the integrated views.

In the discussions following the talk, the notion of FAIR data came up. This is the idea that data should be Findable, Accessible, Interoperable, and Reusable by both humans and machines. RDF is one approach that could lead to this. The other area of discussion was around community initiatives for converting existing open datasets into an RDF format. I advocated adopting the approach followed by the Bio2RDF community who share the tasks of creating and maintaining such scripts for biological datasets. An important part of this jigsaw is tracking the provenance of the datasets, for which the W3C Health Care and Life Sciences Community Profile for Dataset Descriptions could be beneficial (there is nothing specific to the HCLS community in the profile).
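
Loosely in the spirit of that profile, a dataset description carrying basic provenance could be published as in the sketch below; the URIs and the exact property choices are illustrative, not a conformant HCLS example.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, VOID

PAV = Namespace("http://purl.org/pav/")
EX = Namespace("http://example.org/dataset/")

g = Graph()
g.bind("pav", PAV)

d = EX["mydata-rdf-2016-12"]
g.add((d, RDF.type, VOID.Dataset))
g.add((d, DCTERMS.title, Literal("MyData as RDF, Dec 2016 release")))
g.add((d, PAV.version, Literal("2016-12")))
# Record which upstream file this RDF was derived from and by what
# conversion script, so the conversion can be audited and re-run.
g.add((d, PAV.retrievedFrom, EX["mydata-2016-12.csv"]))
g.add((d, PAV.createdWith, EX["csv2rdf-script-v1.3"]))

print(g.serialize(format="turtle"))

Shared alongside the conversion scripts themselves, as the Bio2RDF community does, descriptions like this make it possible to tell which script and which source release produced any given RDF dataset.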