New Paper: Reproducibility with Administrative Data

Our journal article [1] looks at encouraging good practice to enable reproducible data analysis workflows. It is the result of a collaboration between social scientists and a computer scientist within ADRC-Scotland.

Abstract: Powerful new social science data resources are emerging. One particularly important source is administrative data, which were originally collected for organisational purposes but often contain information that is suitable for social science research. In this paper we outline the concept of reproducible research in relation to micro-level administrative social science data. Our central claim is that a planned and organised workflow is essential for high quality research using micro-level administrative social science data. We argue that it is essential for researchers to share research code, because code sharing enables the elements of reproducible research. First, it enables results to be duplicated and therefore allows the accuracy and validity of analyses to be evaluated. Second, it facilitates further tests of the robustness of the original piece of research. Drawing on insights from computer science and other disciplines that have been engaged in e-Research we discuss and advocate the use of Git repositories to provide a useable and effective solution to research code sharing and rendering social science research using micro-level administrative data reproducible.
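
The paper's recommendations are about process rather than any particular tool, but to make the idea of a shareable, traceable workflow concrete, here is a minimal sketch of an analysis script that stamps its output with the exact Git commit and environment that produced it. The Python code and layout are my own illustration, not taken from the paper.

    import platform
    import subprocess
    import sys
    from datetime import datetime, timezone

    def provenance_header() -> str:
        """Record which version of the code and environment produced a result."""
        # Assumes the script is run from inside the shared Git repository.
        commit = subprocess.run(
            ["git", "rev-parse", "HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return "\n".join([
            f"# git commit: {commit}",
            f"# python:     {sys.version.split()[0]} on {platform.platform()}",
            f"# run at:     {datetime.now(timezone.utc).isoformat()}",
        ])

    if __name__ == "__main__":
        # Prepending this header to any results file means a reviewer can
        # check out the same commit and attempt to duplicate the analysis.
        print(provenance_header())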

[1] C. J. Playford, V. Gayle, R. Connelly, and A. J. Gray, “Administrative social science data: The challenge of reproducible research,” Big Data & Society, vol. 3, iss. 2, 2016. doi:10.1177/2053951716684143
[Bibtex]
@Article{Playford2016BDS,
  author  = {Christopher J. Playford and Vernon Gayle and Roxanne Connelly and Alasdair J. G. Gray},
  title   = {Administrative social science data: The challenge of reproducible research},
  journal = {Big Data \& Society},
  year    = {2016},
  volume  = {3},
  number  = {2},
  month   = dec,
  url     = {http://journals.sagepub.com/doi/full/10.1177/2053951716684143},
  doi     = {10.1177/2053951716684143}
}

ISWC 2016 Trip Report

It has now been almost two months since ISWC 2016, where I was the Resources Track chair with Marta Sabou. This has given me time to reflect on the conference, in between a hectic schedule of project meetings, workshops, conferences, and a PhD viva.

The most enjoyable part of the conference for me was the CoLD Workshop Debate on the State of Linked Data. The workshop organisers had arranged for six prominent proponents of Linked Data to argue that we have failed and that Linked Data will die away.

  1. Ruben Verborgh argued that Linked Data will be destroyed by the need to centralise data, poor infrastructure, and the research community. (Aside: there was certainly concern on the final point, as there were only three women in the room.)
  2. Axel Polleres took the motto, “Let’s make RDF great again!” His central argument was that most open data is actually published in CSV format, and that a lot can be achieved with 3-star open data.
  3. Paul Groth argued that we should be concentrating on making our data processable by machines. What we currently have is a format that is aimed at both humans and machines but satisfies neither.
  4. Chris Bizer covered cost incentives. While there is an incentive to provide some basic schema markup on pages, i.e. getting picked up by search engines, there is no financial incentive to provide the links to other resources. My take on this is that there is a disincentive, as links would take traffic away from your (eCommerce) site and therefore lose you revenue.
  5. Avi Bernstein then did a fantastic impression of a Wee Free minister, telling us that we had all sinned and were following the wrong path; all fire and brimstone.
  6. Juan Reutter argued that we needed to provide a workable ecosystem.

So the question is: has the Linked Data community failed? I think the debate highlighted that the community has made many contributions in a short space of time, but that it is time to get them into the mainstream. Perhaps our community is not best placed to do the required sales job, but we have had some successes, e.g. the EBI RDF platform, the Open PHACTS Drug Discovery Platform, and the BBC Olympics website.

The main conference was underpinned by three fantastic and varied keynotes. First was Kathleen McKeown, who gave us insights into the extraction of knowledge from different forms of text. Second was Christian Bizer, whose main message was that we as a community need to take structured data in whatever form it comes, just as search engines have exploited metadata and page structure for a long time. Finally, Hiroaki Kitano of the Sony Corporation gave what has got to be the densest keynote I have ever heard, with more ideas per minute than a dance tune has beats. His challenge to the community was that we should aim to have an AI system win a scientific Nobel Prize by 2050. The system should develop a hypothesis, test it, and generate a groundbreaking conclusion worthy of the prize.

There were many great and varied talks during the conference. It really is worth looking through the programme to find those of interest to you (all the papers are linked and available). As ever, the poster and demo session, advertised in the minute madness session, demonstrated the breadth of cutting-edge work going on in the community, as did the lightning talk session.

The final day of the conference was particularly weird for me. As the chair of a session, I ended up sharing a bottle of fine Italian wine with a presenter during his talk (it would have been rude not to), and experiencing an earthquake during a presentation on an ontology for modelling the soil beneath our cities, in particular the causes of damage to that soil.

The conference afforded some opportunities for fun as well. A few of the organising committee managed to visit the K computer, the world's fifth fastest supercomputer, which is cooled with water. The computer was revealed in a very James Bond fashion, like the unveiling of the evil enemy's master plan: “Now I’m going to have to kill you!” There was also a highly entertaining samurai sword fighting demonstration during the conference banquet.

During the conference, my Facebook feed was filled with exclamations about the complexity of the toilets. Following the conference, it was filled with exclamations of returning to lands of uncivilised toilets. Make of this what you will.

HCLS Tutorial at SWAT4LS 2016

On 5 December 2016 I presented a tutorial [1] on the Health Care and Life Sciences Community Profile (HCLS Datasets) at the 9th International Semantic Web Applications and Tools for the Life Sciences Conference (SWAT4LS 2016). Below you can find the slides I presented.

The 61 metadata properties from 18 vocabularies reused in the HCLS Community Profile are available in this spreadsheet (.ods).
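
To give a flavour of what a description conforming to the profile looks like, here is a minimal sketch that builds one in Python with rdflib. The dataset IRI and property values are hypothetical, and only a handful of the 61 properties are shown; dct:title, dct:description, dct:license, and pav:version are among those the profile reuses.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DCTERMS, RDF

    DCTYPES = Namespace("http://purl.org/dc/dcmitype/")
    PAV = Namespace("http://purl.org/pav/")

    # Hypothetical version-level dataset IRI, purely for illustration.
    dataset = URIRef("http://example.org/dataset/mydata/1.0")

    g = Graph()
    g.bind("dctypes", DCTYPES)
    g.bind("dct", DCTERMS)
    g.bind("pav", PAV)

    g.add((dataset, RDF.type, DCTYPES.Dataset))
    g.add((dataset, DCTERMS.title, Literal("My Dataset, version 1.0", lang="en")))
    g.add((dataset, DCTERMS.description,
           Literal("An example versioned dataset description.", lang="en")))
    g.add((dataset, DCTERMS.license,
           URIRef("https://creativecommons.org/licenses/by/4.0/")))
    g.add((dataset, PAV.version, Literal("1.0")))

    print(g.serialize(format="turtle"))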

[1] M. Dumontier, A. J. G. Gray, and S. M. Marshall, “Describing Datasets with the Health Care and Life Sciences Community Profile,” in Semantic Web Applications and Tools for Life Sciences (SWAT4LS 2016), Amsterdam, The Netherlands, 2016.
[Bibtex]
@InProceedings{Gray2016SWAT4LSTutorial,
  author    = {Michel Dumontier and Alasdair J. G. Gray and M. Scott Marshall},
  title     = {Describing Datasets with the Health Care and Life Sciences Community Profile},
  booktitle = {Semantic Web Applications and Tools for Life Sciences (SWAT4LS 2016)},
  year      = {2016},
  month     = dec,
  address   = {Amsterdam, The Netherlands},
  note      = {(Tutorial)},
  url       = {http://www.swat4ls.org/workshops/amsterdam2016/tutorials/t2/},
  abstract  = {Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting HCLS community profile covers elements of description, identification, attribution, versioning, provenance, and content summarization. The HCLS community profile reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets. The goal of this tutorial is to explain elements of the HCLS community profile and to enable users to craft and validate descriptions for datasets of interest.}
}

Celebrating 50 years of Computer Science at HWU

Old hardware

Display of old equipment used within computer science.

This year sees a double celebration in the Department of Computer Science at Heriot-Watt University – it is 50 years since we launched the first BSc Computer Science degree in Scotland, and 50 years since Heriot-Watt was granted university status. To celebrate, we held a series of events last week, including an open day and a dinner for former staff and students.

During the open day we had a variety of displays and activities to highlight the current research taking place in the department. There was a display of some of the old equipment that has been used in the department. While this mostly focused on storage media, it also included my first computer – a BBC Model B. Admittedly, a lot of games were played on it in my youth.

Pepper robot

Demonstration of the Pepper robot that is being used by the Interaction Lab to improve speech interactions.

Each of the labs in the department had displays, including the new Pepper robot in the Interaction Lab and one of the Nao robots from the Robotics Lab. The Interactive and Trustworthy Technologies Lab were demonstrating the interactive games they have developed to help with rehabilitation after falls and knee replacements. The Semantic Web Lab were demonstrating the difficulties of reconstructing a family tree using vital records information.

At the dinner in the evening we had two guest speakers: Alex Balfour, the first head of department and instigator of the degree programme, and Ian Ritchie, entrepreneur and former graduate. Both gave entertaining speeches reflecting on their time in the department and their experiences of the Mountbatten Building, now the Apex Hotel in the Grassmarket, where we had the dinner.

See these pages for more about the history of computer science at Heriot-Watt.

Genealogy reconstruction game

Current PhD students attempting to reconstruct a family tree from their entries in the birth, marriage, and death records.

Rehabilitation game

Game to help rehabilitation patients perform their physiotherapy exercises correctly.

Seminar: Data Integration Support for Offshore Decommissioning Waste Management

Date: 11:15, 26 September 2016

Venue: F.17, Colin Maclaurin Building, Heriot-Watt University

Title: Data Integration Support for Offshore Decommissioning Waste Management

Speaker: Abiodun Akinyemi, School of Energy, Geoscience, Infrastructure and Society (EGIS), Heriot-Watt University

Abstract: Offshore decommissioning activities represent a significant business opportunity for UK contracting and consulting companies, albeit they constitute a liability to the owners of the assets, because of the cost, and to the UK government, because of tax relief. The silver lining is that waste reuse can bring some reprieve, as savings from the sales of decommissioned facility items can reduce the overall removal cost to an asset owner. However, characterizing an asset inventory to determine which decommissioned facility items can be reused is prone to errors because of the difficulty involved in integrating asset data from different sources in a meaningful way. This research investigates a data integration framework which enables rapid assessment of items to be decommissioned, to inform circular economy principles. It evaluates existing practices in the domain and devises a mechanism for higher productivity using the semantic web and ISO 15926.

Bio: Abiodun Akinyemi is a PhD student at the School of Energy, Geoscience, Infrastructure and Society at Heriot-Watt University. He has an MPhil in Engineering from the University of Cambridge and has worked on Asset Information Management in the oil and gas industry for over 8 years.

Will the real Kevin Macleod please line up?

Last week I attended the Digitising Scotland Project Colloquium at Raasay House on the Isle of Raasay. The colloquium was a gathering of historians and computer scientists to discuss the challenges of linking the vital records of the people of Scotland between 1851 and 1974.

The Digitising Scotland Project is having the birth, marriage, and death records of Scotland transcribed from the scans of the original handwritten registration books. This process is not without its own challenges (try reading this birth record of a famous Scottish artist and architect), but the focus of the colloquium was on what happens after the records have been transcribed.

Each Scottish vital record identifies several individuals, e.g. on a birth record you will have the baby, their parents, the informant, and the registrar. The same individuals will appear on multiple records relating to events in their own life, e.g. an individual will have a birth record, potentially one or more marriage records, and a death record, assuming that they have not emigrated. They can also appear in the records of other individuals, e.g. as a mother on a birth record, the mother-of-the-bride on a marriage record, or the doctor on a death record. The challenge is how to identify the same individual across all the records, when all you have is a name (first and last) and potentially the age.

The problem is compounded in an area like Skye, which was one of the focus regions of the Digitising Scotland project, because there is a relatively small pool of names to draw upon. For example, a name like Kevin Macleod will appear on multiple records. In some cases the name will correspond to a single Kevin Macleod; in other cases it will belong to a closely related Kevin Macleod, e.g. Kevin Macleod the father of Kevin Macleod; and in others the two Kevin Macleods will not be related at all. The challenge is how to develop a computer algorithm that is capable of making these distinctions, as sketched below.
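
To make the shape of the problem concrete, here is a minimal sketch of the pairwise comparison step that record linkage algorithms typically start from, using only the Python standard library. The field names, weights, and threshold are illustrative assumptions of mine, not the method being developed by the Digitising Scotland project.

    from difflib import SequenceMatcher

    def name_similarity(a: str, b: str) -> float:
        """Crude string similarity in [0, 1]; tolerates spelling variants."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def match_score(mention_a: dict, mention_b: dict) -> float:
        """Combine forename, surname, and birth-year evidence into one score."""
        score = 0.4 * name_similarity(mention_a["forename"], mention_b["forename"])
        score += 0.4 * name_similarity(mention_a["surname"], mention_b["surname"])
        # Birth years inferred from ages on different records rarely agree
        # exactly, so allow a few years of slack.
        years_apart = abs(mention_a["born"] - mention_b["born"])
        score += 0.2 * max(0.0, 1.0 - years_apart / 10.0)
        return score

    # Two mentions that may or may not be the same Kevin Macleod.
    on_birth_record = {"forename": "Kevin", "surname": "Macleod", "born": 1851}
    on_death_record = {"forename": "Kevin", "surname": "McLeod", "born": 1850}

    # A real system would calibrate the decision threshold against known links.
    print(f"match score: {match_score(on_birth_record, on_death_record):.2f}")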

The colloquium was a great opportunity for historians and computer scientists to discuss these challenges and help each other develop a solution. However, first we had to agree on a common understanding of terms such as “record” and “individual”.

Overall, we made great progress on exchanging ideas and techniques. We heard how similar challenges are being addressed in a related project focusing on North Orkney, how historians approach the record linkage challenge, and about work on automatically classifying causes of death to their ICD10 codes and occupations to HISCO codes. There was also time to socialise and enjoy some of the scenery of Raasay, which is a beautiful island the size of Manhattan but with a population of only 160.

View from the meeting room

Sunset over Portree, Skye
