ISWC 2016 Deadlines Approaching

ISWC 2016 will be taking place in Kobe, Japan from 17-21 October. Tomorrow is the deadline for abstract submissions for ISWC, with full papers due on 30 April. There are three tracks for you to submit to:

  1. The Research Track: innovative and groundbreaking work at the intersection of semantics and the Web.
  2. The Applications Track: benefits and challenges of applying semantic technologies. This track is accepting several types of submissions, including in-use applications and industry applications.
  3. The Resources Track: reusable resources like datasets, ontologies, benchmarks and tools are crucial for many research disciplines, and especially ours. Make sure you read the guidelines for describing a reusable resource.

To entice you to come to Kobe, Japan, there are three fantastic keynotes lined up:

  • Kathleen McKeown – Professor of Computer Science at Columbia University,
    Director of the Institute for Data Sciences and Engineering, and Director of the North East Big Data Hub.
  • Hiroaki Kitano – CEO of Sony Computer Science Laboratory and President of the Systems Biology Institute. A truly inspirational figure who has done everything from RoboCup to systems biology. He was even an invited artist at MoMA.
  • Chris Bizer – Professor at the University of Mannheim and Director of the Institute of Computer Science and Business Informatics there. If you’re in the Semantic Web community, you know the amazing work Chris has done. He really kicked the entire move toward Linked Data into high gear.

I am co-chairing the Resources Track with Marta Sabou. I hope to be able to welcome you to Kobe.

Thanks to Paul Groth; the text for this post is based on his post from a month ago.

The FAIR Principles herald more open, transparent, and reusable scientific data

Today, March 15, 2016, the FAIR Guiding Principles for scientific data management and stewardship were formally published in the Nature Publishing Group journal Scientific Data. The problem the FAIR Principles address is the lack of widely shared, clearly articulated, and broadly applicable best practices around the publication of scientific data. While the history of scholarly publication in journals is long and well established, the same cannot be said of formal data publication. Yet, data could be considered the primary output of scientific research, and its publication and reuse is necessary to ensure validity, reproducibility, and to drive further discoveries. The FAIR Principles address these needs by providing a precise and measurable set of qualities a good data publication should exhibit – qualities that ensure that the data is Findable, Accessible, Interoperable, and Reusable (FAIR).

The principles were formulated after a Lorentz Center workshop in January 2014, where a diverse group of stakeholders, sharing an interest in scientific data publication and reuse, met to discuss the features required of contemporary scientific data publishing environments. The first-draft FAIR Principles were published on the Force11 website for evaluation and comment by the wider community – a process that lasted almost two years. This resulted in the clear, concise, broadly supported principles that were published today. The principles support a wide range of new international initiatives, such as the European Open Science Cloud and the NIH Big Data to Knowledge (BD2K) initiative, by providing clear guidelines that help ensure all data and associated services in the emergent ‘Internet of Data’ will be Findable, Accessible, Interoperable and Reusable, not only by people, but notably also by machines.

The recognition that computers must be capable of accessing a data publication autonomously, unaided by their human operators, is core to the FAIR Principles. Computers are now an inseparable companion in every research endeavour. Contemporary scientific datasets are large, complex, and globally-distributed, making it almost impossible for humans to manually discover, integrate, inspect and interpret them. This (re)usability barrier has, until now, prevented us from maximizing the return-on-investment from the massive global financial support of big data research and development projects, especially in the life and health sciences. This wasteful barrier has not gone unnoticed by key agencies and regulatory bodies. As a result, rigorous data management stewardship – applicable to both human and computational “users” – will soon become a funded, core activity within modern research projects. In fact, FAIR-oriented data management activities will increasingly be made mandatory by public funding bodies.
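The machine-actionability the Principles call for can be made concrete with a small metadata sketch. The record below is purely illustrative: the dataset IRI, DOI, and URLs are invented, and the FAIR Principles do not mandate any particular vocabulary. It simply shows how standard vocabularies (here DCAT and Dublin Core Terms) let a machine locate, retrieve, and assess the reuse conditions of a dataset without human intervention.

```turtle
@prefix dcat:    <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Findable: a globally unique, resolvable identifier for the dataset
<https://example.org/dataset/42>
    a dcat:Dataset ;
    dcterms:identifier "https://doi.org/10.xxxx/example" ;  # hypothetical DOI
    dcterms:title "Example measurement dataset" ;
    # Accessible: a distribution retrievable over a standard protocol
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <https://example.org/dataset/42.csv> ;
        dcterms:format "text/csv"
    ] ;
    # Reusable: an explicit, machine-readable licence
    dcterms:license <https://creativecommons.org/licenses/by/4.0/> .
```

Because each element is expressed in a shared vocabulary rather than free text, a crawler can discover the record, follow the download URL, and check the licence programmatically.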

The high level of abstraction of the FAIR Principles, sidestepping controversial issues such as the technology or approach used in the implementation, has already made them acceptable to a variety of research funding bodies and policymakers. Examples include FAIR Data workshops from EU-ELIXIR, inclusion of FAIR in the future plans of Horizon 2020, and advocacy from the American National Institutes of Health. As such, it seems assured that these principles will rapidly become a key basis for innovation in the global move towards Open Science environments. Accordingly, the timing of the Principles’ publication is aligned with the Open Science Conference in April 2016.

With respect to Open Science, the FAIR Principles advocate being “intelligently open”, rather than “religiously open”. The Principles do not propose that all data should be freely available – in particular with respect to privacy-sensitive data. Rather, they propose that all data should be made available for reuse under clearly defined conditions and licenses, available through a well-defined process, and with proper and complete acknowledgement and citation. This will allow much wider participation of players from, for instance, the biomedical domain and industry, where rigorous and transparent data usage conditions are a core requirement for data reuse.

“I am very proud that, just over two years after the meeting where we came up with the early FAIR Principles, they play such an important role in many forward-looking policy documents around the world, and that the authors of this paper are in positions that allow them to follow these Principles. I sincerely hope that FAIR data will become a ‘given’ in the future of Open Science, in the Netherlands and globally”, says Barend Mons, Professor in Biosemantics at the Leiden University Medical Center.

Open PHACTS is dead, long live Open PHACTS!

I have spent the last five years working on the Open PHACTS project, which is sadly at an end. However, it is not the end of the Open PHACTS drug discovery platform: we have transitioned to a new era, with a foundation organisation now running and developing the platform. The milestone was marked by the symbolic handover of the Open PHACTS flag (see photo: on the right, Barend Mons (Leiden University Medical Center) and Gerhard Ecker (University of Vienna) handing the flag to, on the left, Stefan Senger (GlaxoSmithKline), Derek Marren (Eli Lilly), and Herman van Vlijmen (Janssen Pharmaceutica)).

A nice summary of the closing symposium is available:

Linking Life Science Data: Design to Implementation, and Beyond

19 Feb, 2016 Open PHACTS project closing conference (Vienna, Austria)

On 18–19 February, 2016, we celebrated the completion of the Open PHACTS project with a conference at the University of Vienna, Austria. A total of 79 people attended to discuss the achievements of the Open PHACTS project, what they mean for the future of linked data, and how they can be carried forward.

Source: Linking Life Science Data: Design to Implementation, and Beyond – Open PHACTS Foundation


Open PHACTS Closing Symposium

For the last five years I have had the pleasure of working with the Open PHACTS project. Sadly, the project is now at an end. To celebrate, we are holding a two-day symposium to look back over the contributions of the project and its future legacy.

The project has been hugely successful in developing an integrated data platform to enable drug discovery research (see a future post for details to support this claim). The result of the project is the Open PHACTS Foundation which will now own the drug discovery platform and sustain its development into the future.

Here are my slides on the state of the data in the Open PHACTS 2.0 platform.

Validata: An online tool for testing RDF data conformance

Validata is a web application for validating an RDF document against a set of constraints. This is useful for data exchange applications or for ensuring conformance of an RDF dataset against a community-agreed standard. Constraints are expressed as a Shape Expression (ShEx) schema.
Validata extends the ShEx functionality to support multiple requirement levels. Validata can be repurposed for different deployments by providing it with a new ShEx schema.
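To illustrate the kind of constraints a ShEx schema expresses, here is a minimal shape. It is a hypothetical example, not taken from any actual Validata deployment: it requires a record to carry exactly one string-valued title and at least one creator IRI.

```shex
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# Hypothetical shape describing a conformant record
<RecordShape> {
  dcterms:title xsd:string ;   # exactly one literal title
  dcterms:creator IRI +        # one or more creator IRIs
}
```

Validata's requirement levels build on this mechanism, so a single schema can distinguish mandatory constraints from merely recommended ones.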

The Validata code is openly available, and several deployments of it exist.

A paper describing Validata was published at SWAT4LS 2015.