Interoperability and FAIRness through a novel combination of Web technologies

New paper [1] on using Semantic Web technologies to publish existing data according to the FAIR data principles [2].

Abstract: Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.

[1] [doi] Mark D. Wilkinson, Ruben Verborgh, Luiz Olavo Bonino da Silva Santos, Tim Clark, Morris A. Swertz, Fleur D. L. Kelpin, Alasdair J. G. Gray, Erik A. Schultes, Erik M. van Mulligen, Paolo Ciccarese, Arnold Kuzniar, Anand Gavai, Mark Thompson, Rajaram Kaliyaperumal, Jerven T. Bolleman, and Michel Dumontier. Interoperability and FAIRness through a novel combination of Web technologies. PeerJ Computer Science, 3:e110, April 2017.
[Bibtex]
@article{Wilkinson2017-FAIRness,
abstract = {Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an exemplar implementation of those principles. The proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.},
author = {Wilkinson, Mark D. and Verborgh, Ruben and {Bonino da Silva Santos}, Luiz Olavo and Clark, Tim and Swertz, Morris A. and Kelpin, Fleur D.L. and Gray, Alasdair J.G. and Schultes, Erik A. and van Mulligen, Erik M. and Ciccarese, Paolo and Kuzniar, Arnold and Gavai, Anand and Thompson, Mark and Kaliyaperumal, Rajaram and Bolleman, Jerven T. and Dumontier, Michel},
doi = {10.7717/peerj-cs.110},
issn = {2376-5992},
journal = {PeerJ Computer Science},
month = {apr},
pages = {e110},
publisher = {PeerJ Inc.},
title = {{Interoperability and FAIRness through a novel combination of Web technologies}},
url = {https://peerj.com/articles/cs-110},
volume = {3},
year = {2017}
}
[2] [doi] Mark D. Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak, Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E. Bourne, Jildau Bouwman, Anthony J. Brookes, Tim Clark, Mercè Crosas, Ingrid Dillo, Olivier Dumon, Scott Edmunds, Chris T. Evelo, Richard Finkers, Alejandra Gonzalez-Beltran, Alasdair J. G. Gray, Paul Groth, Carole Goble, Jeffrey S. Grethe, Jaap Heringa, Peter A. C. ’t Hoen, Rob Hooft, Tobias Kuhn, Ruben Kok, Joost Kok, Scott J. Lusher, Maryann E. Martone, Albert Mons, Abel L. Packer, Bengt Persson, Philippe Rocca-Serra, Marco Roos, Rene van Schaik, Susanna-Assunta Sansone, Erik Schultes, Thierry Sengstag, Ted Slater, George Strawn, Morris A. Swertz, Mark Thompson, Johan van der Lei, Erik van Mulligen, Jan Velterop, Andra Waagmeester, Peter Wittenburg, Katherine Wolstencroft, Jun Zhao, and Barend Mons. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3:160018, 2016.
[Bibtex]
@article{Wilkinson2016,
abstract = {There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders-representing academia, industry, funding agencies, and scholarly publishers-have come together to design and jointly endorse a concise and measurable set of principles that we refer to as the {FAIR} Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the {FAIR} Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the {FAIR} Principles, and includes the rationale behind them, and some exemplar implementations in the community.},
author = {Wilkinson, Mark D and Dumontier, Michel and Aalbersberg, IJsbrand Jan and Appleton, Gabrielle and Axton, Myles and Baak, Arie and Blomberg, Niklas and Boiten, Jan-Willem and {da Silva Santos}, Luiz Bonino and Bourne, Philip E and Bouwman, Jildau and Brookes, Anthony J and Clark, Tim and Crosas, Merc{\`{e}} and Dillo, Ingrid and Dumon, Olivier and Edmunds, Scott and Evelo, Chris T and Finkers, Richard and Gonzalez-Beltran, Alejandra and Gray, Alasdair J.G. and Groth, Paul and Goble, Carole and Grethe, Jeffrey S and Heringa, Jaap and {'t Hoen}, Peter A.C and Hooft, Rob and Kuhn, Tobias and Kok, Ruben and Kok, Joost and Lusher, Scott J and Martone, Maryann E and Mons, Albert and Packer, Abel L and Persson, Bengt and Rocca-Serra, Philippe and Roos, Marco and van Schaik, Rene and Sansone, Susanna-Assunta and Schultes, Erik and Sengstag, Thierry and Slater, Ted and Strawn, George and Swertz, Morris A and Thompson, Mark and van der Lei, Johan and van Mulligen, Erik and Velterop, Jan and Waagmeester, Andra and Wittenburg, Peter and Wolstencroft, Katherine and Zhao, Jun and Mons, Barend},
doi = {10.1038/sdata.2016.18},
issn = {2052-4463},
journal = {Scientific Data},
month = mar,
pages = {160018},
publisher = {Macmillan Publishers Limited},
title = {{The FAIR Guiding Principles for scientific data management and stewardship}},
url = {http://www.nature.com/articles/sdata201618},
volume = {3},
year = {2016}
}

Does Open Scottish Government Data Justify the Proposed Edinburgh Tram Extension?

The Scottish government provides open access to a range of official statistics about Scotland across 17 themes, including crime, the economy, housing, and population growth. The portal is accessible at http://statistics.gov.scot.

This post uses open Scottish government data to measure the 5-year population growth in areas close to the proposed phase 1a completion of the tram extension, from York Place to Newhaven.

The tram infrastructure business case

The original business case for the tram infrastructure was set out in a document Edinburgh Tram Draft Final Business Case (The City of Edinburgh Council, December 2006). It justifies the need for a tram system using two broad arguments:

  1. The predicted growth in housing demands for a growing population in the areas on the proposed lines:

“Huge new developments are in the pipeline, especially on the city’s waterfront in Leith Docks and Granton. Edinburgh Waterfront is the largest brownfield development in Scotland, equivalent to a major new town in scale, with the two major development sites able to accommodate up to 29,000 new homes in the longer term (Section 3.3, Why Tram).”

2. The tram system will encourage economic regeneration and social inclusion:

“Without the tram, access to the major Waterfront developments will simply not be good enough. The Leith Docks proposals would have to be scaled down and the development prospects at Granton would be damaged (Section 3.3, Why Tram). Areas of [..] a zone around Leith Walk [..] are areas where socio economic status is considerably less affluent than surrounding areas and where employment, income levels and car ownership tend to be comparatively low. Opportunities for people living in these areas will be improved by direct connection via tram to the City Centre and other employment areas, including the new development in Granton, Leith and the West of the City at Edinburgh Park and the Airport (Section 1.18, Accessibility and Social Inclusion).”

This document, written 11 years ago, made various projections of population growth, particularly in the north and north east of Edinburgh. For example, it states that 24,000 new houses would be needed in Edinburgh by 2015 (Section 3.3, Why Tram). The Scottish government open data portal can be used to assess the accuracy of those projections.

Party political views on the proposed extension

Due to well documented economic costs and major project problems, the tram stretches only as far as York Place. Edinburgh City Council are considering an extension from York Place down Leith Walk, to Newhaven. It could be partly funded by a City Deal. A definitive yes/no decision on the extension will be made after the local council elections take place on 4th May 2017. The Labour party are in favour, the SNP have recently set conditions on their backing of the extension, whilst the Conservative party have promised to reject the business case for the extension in their council election manifesto.

Zones along the proposed tram extension

The long term proposed route connects Granton to the west end and Newhaven to York Place, with a line between Granton and Newhaven to complete the ring. The route is set out in the Edinburgh Local Development Plan:

Proposed extension route

Below I focus on the 5-year plan of extending the line from York Place to Newhaven. In the Scottish government open dataset, there are 1279 “intermediate geographical zones”. The list below shows the intermediate zones in close proximity to the proposed extension down Leith Walk and to Newhaven:

  • Broughton South
  • Easter Road and Hawkhill Avenue
  • Great Junction Street
  • North Leith and Newhaven
  • Pilrig
  • The Shore and Constitution Street
  • Western Harbour and Leith Docks

Population growth along the proposed extension

The Scottish government open data portal allows us to query the population growth in each Scottish city, and also within each intermediate zone. Below shows the rate of growth from 2011 to 2015, first at the city level and then at the intermediate zone level.


National population growth

Tram extension proximity population growth

Here are 3 observations about these graphs:

  1. The population of Edinburgh city is growing faster than that of Glasgow, Dundee, Stirling and Aberdeen. Between 2011 and 2015, the population of Edinburgh grew by 4.4%.
  2. The population of the intermediate zones in close proximity to the proposed Edinburgh tram extension is growing quickly. Of the 7 identified zones, 5 are growing at a faster rate than the average growth rate across Edinburgh.
  3. The population of the Western Harbour and Leith Docks zone is growing very rapidly: it grew by 22.7% between 2011 and 2015.
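For clarity, the growth rates quoted above are plain percentage changes between the 2011 and 2015 mid-year estimates. A minimal sketch (the populations here are made up for illustration, not the actual estimates):

```python
def growth_pct(earlier, later):
    """Percentage population change between two estimate years."""
    return (later - earlier) / earlier * 100

# Illustrative figures only, not the actual mid-year estimates:
pop_2011, pop_2015 = 100_000, 104_400
print(round(growth_pct(pop_2011, pop_2015), 1))  # 4.4
```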

Reproducing this analysis

These graphs are generated by

  1. Running a SPARQL query against the Scottish government dataset service.
  2. Plotting graphs from the CSV result set using R.

The SPARQL queries and plotting scripts used for the material in this post are available online at https://github.com/robstewart57/edinburgh-trams.
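The two steps above can be sketched in Python. The query shape and predicate URIs below are placeholders, not the portal's actual vocabulary (the real queries are in the linked repository), and a tiny inline CSV stands in for the endpoint's HTTP response so the parsing step runs offline:

```python
import csv
import io

# A SELECT query of the general shape one would send to the
# http://statistics.gov.scot SPARQL endpoint. The URIs are placeholders:
query = """
SELECT ?zone ?year ?population WHERE {
  ?obs <http://example.org/refArea>   ?zone ;
       <http://example.org/refPeriod> ?year ;
       <http://example.org/count>     ?population .
}
"""

# SPARQL endpoints can return results as CSV; this inline sample stands in
# for the HTTP response (the figures are illustrative, not real data):
sample_csv = """zone,year,population
Pilrig,2011,5000
Pilrig,2015,5400
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
by_year = {r["year"]: int(r["population"]) for r in rows if r["zone"] == "Pilrig"}
growth = (by_year["2015"] - by_year["2011"]) / by_year["2011"] * 100
print(f"Pilrig growth 2011-2015: {growth:.1f}%")  # prints 8.0%
```

The CSV result set written out by a query like this is what the R scripts then read to plot the city-level and zone-level graphs.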

Summary

This post uses open Scottish government data to show that between 2011 and 2015 population growth along the proposed tram route extension from York Place to Newhaven was greater than the Edinburgh average (zonal growth of up to 23%), in a city whose population is itself growing faster than that of 4 other Scottish cities.

These data support the 2006 projection that housing demand would grow in the north east of Edinburgh, although measuring any discrepancies in exact numbers is difficult because the 2006 projections were not cross-referenced to the intermediate zones that the open dataset uses.

The population data used here is only one of the 17 themes in the Scottish government’s dataset. An extension to this study would explore the other business cases for the tram, namely economic regeneration and social inclusion, using the other 16 themes, to evaluate the completeness and granularity of economic and societal data at local levels in the Scottish government dataset.

Thoughts on Support for Technology Enhanced Learning in HE


I was asked to put forward my thoughts on how I thought the use of technology to enhance teaching and learning should be supported where I work. I work in a UK University that has campuses overseas, and which is organised into Schools (Computer Science is in a School with Maths, to form one of the smaller schools). This was my first round brain dump on the matter. It looks like something might come of it, so I’m posting it here asking for comments.

Does any of this look wrong?

Do you/ have you worked in a similar or dissimilar unit and have any suggestions for how well that worked?

What would be the details that need more careful thought?

Get in touch directly by email or use the form below (if the latter let me know if you don’t want your reply publishing).

Why support Technology Enhanced Learning (TEL)?

Why would you not? This isn’t about learning technology for its own sake, it’s about enhancing learning and teaching with technology. Unless you deny that technology can in any way enhance teaching and learning, the questions remaining centre on how technology can help and how much that help is worth. Advances in technology and in our understanding of how to use it in teaching and learning create a “zone of possibility,” whose extent, and how successfully it is exploited, depend on the intersection of teachers’ understanding of the technologies on offer and the pedagogies suitable for their subject (Dirkin & Mishra, 2010 [paywalled 🙁]).

Current examples of potential enhancements that are largely unsupported (or supported only by ad hoc provision) include

  • Online exams in computer science
  • Formative assessment and other formative exercises across the school
  • Providing resources for students learning off-campus
  • Supporting the delivery of course material when students won’t attend lectures
  • Providing course information to students

Location of support: in School, by campus, or central services?

There are clearly some services that apply institution wide (VLE), or need to be supported at each campus (computer labs), however there are dangers to centralising too much. Centralisation creates a division between the support and the people who need it, a division which is reinforced by separation of funding and management lines for the service and the academic provision. This division makes it difficult for those who understand the technology and those who understand the pedagogy of the subject being taught to engage around the problems to be solved. Instead they interact but stay within the remits laid down by their management structures.

There should of course be strong links between the support in my School and others, central support and campus specific support, but an arrangement where these links are prioritised over the link between support for TEL in maths and computing and the provision of teaching and learning in maths and computer science seems wrong.

What support?

This is something of a brain dump based on current activity, in no particular order.

  • Seminar series and other regular meetings to gather and spread new ideas.
  • Developing resources for off-campus learning (currently in CS we need to provide support materials, based on existing courses, for a specific programme); these and similar materials could also be used to support students on conventional courses who don’t attend lectures.
  • Managing tools and systems for formative assessment and other formative experiences, e.g. mathematical and programming practice.
  • Developing resources and systems for working with partner institutions who deliver courses we accredit, some of which may be applicable to mainstream teaching.
  • Student course information website: maintenance and updating information, liaison with central student portal.
  • Online exams, advice on question design and managing workflow from question authoring to test delivery.
  • Evaluation of innovative teaching (where innovative is defined as something for which we are unsure enough of the benefits for it to be worth evaluating).[*]
  • Maintain links with development organisations in Learning Technology, e.g. ALT and Jisc, and with scholarship in areas such as digital pedagogy and open education which underpin technology enhanced learning.
  • Liaise with central & campus services, e.g. VLE management group
  • Advise staff in school on use of central facilities e.g. BlackBoard
  • Liaise with other schools. There is potential to provide some of these services to other schools (or vice versa), assuming financial recompense can be arranged.

[*Note: this raises the question of whether the support should be limited to technology to enhance learning, should address other innovations too.]

Who?

This needs to be provided by a core of people with substantial knowledge of learning technology, who might also contribute to other activities in the school. We have a group of three or four people who can do this. It is a little biased towards Computer Science and one campus, so thought should be given to how to bring in other subjects and locations.

We would involve project students and interns provided this was done in such a way as to contribute sustainable enhancement of a service or creation of new resources. For example, we would use tools such as git so that each student left work that could be picked up by others. As well as supervising project students within the group we could co-supervise with academic staff who had their own ideas for learning-related student projects. This would help keep tight contact with day-to-day teaching.

Funding and management

This support needs an allocated budget and well controlled project management. Funding for core staff should be long term on a par with commitment to teaching within the School. Management and reporting should be through the Director of Learning and Teaching and the Learning and Teaching Committee with information and discussion at the subject Boards of Studies as appropriate.

Reference

Dirkin, K., & Mishra, P. (2010). Values, Beliefs, and Perspectives: Teaching Online within the Zone of Possibility Created by Technology. Retrieved from https://www.learntechlib.org/p/33974/


Comments Please


Flying cars, digital literacy and the zone of possibility


Where’s my flying car? I was promised one in countless SF films from Metropolis through to Fifth Element. Well, they exist. Thirty seconds on the search engine of your choice will find you a dozen or so working prototypes (here’s a YouTube video with five).

A fine and upright gentleman flying in a small helicopter-like vehicle.
Jess Dixon’s flying automobile c. 1940. Public Domain, held by State Library and Archives of Florida, via Flickr.

They have existed for some time. Come to think of it, the driving around on the road bit isn’t really the point. I mean, why would you drive when you could fly? I guess a small helicopter and somewhere to park would do.

So it’s not lack of technology that’s stopping me from flying to work. What’s more of an issue (apart from cost and environmental damage) is that flying is difficult. The slightest problem like an engine stall or bump with another vehicle tends to be fatal. So the reason I don’t fly to work is largely down to me not having learnt how to fly.

The zone of possibility

In 2010 Kathryn Dirkin studied how three professors taught using the same online learning environment, and found that their approaches were very different. Not something that will surprise many people, but the paper (which unfortunately is still behind a paywall) is worth a read for the details of the analysis. What I liked from her conclusions was that how someone teaches online depends on the intersection of their knowledge of the content, their beliefs about how it should be taught, and their understanding of the technology. She calls this intersection the zone of possibility. As with the flying car, the online learning experience we want may already be technologically possible; we just need to learn how to fly it (and consider the cost and effect on the environment).

I have been thinking about Dirkin’s zone of possibility over the last few weeks. How can it be increased? Should it be increased? On the latter, let’s just say that if technology can enhance education, then yes it should (but let’s also be mindful about the costs and impact on the environment).

So how, as a learning technologist, to increase this intersection of content knowledge, pedagogy and understanding of technology? Teachers’ content knowledge I guess is a given; nothing a learning technologist can do will change that. Also, I have come to the conclusion that pedagogy is off limits. No technology-as-a-Trojan-horse for improving pedagogy, please; that just doesn’t work. It’s not that pedagogic approaches can’t or don’t need to be improved, but conflating that with technology seems counterproductive. So that’s left me thinking about teachers’ (and learners’) understanding of technology. Certainly, the other week when I was playing with audio and video codecs and packaging formats that would work with HTML5 (keep repeating: H264 and AAC in MPEG-4) I was aware of this. There seem to be three viable approaches: increase digital literacy, provide tools that simplify the technology, and use learning technologists as intermediaries between teachers and technology. I leave it at that because it is not a choice of which, but of how much of each can be applied.

Does technology or pedagogy lead?

In terms of defining the “zone of possibility” I think that it is pretty clear that technology leads. Content knowledge and pedagogy change slowly compared to technology. I think that rate of change is reflected in most teachers’ understanding of those three factors. I would go as far as to say that it is counterfactual to suggest that our use of technology in HE has been led by anything other than technology. Innovation in educational technology usually involves exploration of new possibilities opened up by technological advances, not other factors. But having acknowledged this, it should also be clear that having explored the possibilities, a sensible choice of what to use when teaching will be based on pedagogy (as well as cost and the effect on the environment).


Supporting Dataset Descriptions in the Life Sciences

Seminar talk given at the EBI on 5 April 2017.

Abstract: Machine processable descriptions of datasets can help make data more FAIR; that is Findable, Accessible, Interoperable, and Reusable. However, there are a variety of metadata profiles for describing datasets, some specific to the life sciences and others more generic in their focus. Each profile has its own set of properties and requirements as to which must be provided and which are more optional. Developing a dataset description for a given dataset to conform to a specific metadata profile is a challenging process.

In this talk, I will give an overview of some of the dataset description specifications that are available. I will discuss the difficulties in writing a dataset description that conforms to a profile and the tooling that I’ve developed to support dataset publishers in creating metadata description and validating them against a chosen specification.
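To illustrate the conformance problem, here is a toy sketch of checking a dataset description against a profile's required and optional properties. The property names are illustrative, loosely echoing DCAT-style terms; they do not reflect any particular profile or the tooling discussed in the talk:

```python
# Toy conformance check: does a dataset description provide the properties
# a chosen metadata profile marks as required? Property names here are
# illustrative placeholders, not any real profile's vocabulary.
PROFILE = {
    "required": {"title", "description", "publisher"},
    "optional": {"keyword", "license", "issued"},
}

description = {
    "title": "Population estimates by intermediate zone",
    "description": "Mid-year population estimates for Scotland.",
    "keyword": "population",
}

def validate(desc, profile):
    """Return (missing required properties, properties the profile doesn't know)."""
    missing = profile["required"] - desc.keys()
    unknown = desc.keys() - profile["required"] - profile["optional"]
    return sorted(missing), sorted(unknown)

missing, unknown = validate(description, PROFILE)
print("missing required:", missing)  # missing required: ['publisher']
print("unrecognised:", unknown)      # unrecognised: []
```

Real profiles add cardinality and value constraints on top of simple presence checks, which is where purpose-built validation tooling earns its keep.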

Reflections on a little bit of open education (TL;DR: it works).


We are setting up a new honours degree programme which will involve use of online resources for work based blended learning. I was asked to demonstrate some of the resources and approaches that might be useful. This is one of the quick examples that I was able to knock up(*) and some reflections on how Open Education helped me. By the way, I especially like the last bit about “open educational practice”. So if the rest bores you, just skip to the end.

(*Disclaimer: this really is a quickly-made example, it’s in no way representative of the depth of content we will aim for in the resources we use.)

Making the resource

I had decided that I wanted to show some resources that would be useful for our first year, first semester Praxis course. This course aims to introduce students to some of the skills they will need to study computer science, ranging from appreciating the range of topics they will study to being able to use our Linux systems, from applying study skills to understanding some requirements of academic writing. I was thinking that much of this would be fairly generic and must be covered by a hundred and one existing resources when I saw this tweet:

That seemed to be in roughly the right area, so I took a look at the University of Nottingham’s HELM Open site and found an Introduction to Referencing. Bingo. The content seemed appropriate, but I wasn’t keen on a couple of things. First, I fear that breaking the video into 20-second chunks would mean students spending more time ‘interacting’ with the Next-> button than thinking about the content. Second, it seems a little too delivery-oriented; I would like the students to be a little more actively engaged.

I noticed there is a little download arrow on each page which let me download the video clips. So I downloaded them all and used OpenShot to string them together into one file. I exported this and used the h5p WordPress plugin to show how it could be combined with some interactive elements and hosted on a WordPress site with the hypothes.is annotation plugin, to get this:

The remixed resource: on the top left is the video, below that some questions to prompt the students to pay attention to the most significant points, and on the right the hypothes.is pop-out for discussion.

How openness helps

So that was easy enough, a demo of the type of resource we might produce, created in less than an afternoon. How did “openness” help make it easy.

Open licensing and the 5Rs

David Wiley’s famous 5Rs define open licences as those that let you Reuse, Revise, Remix, Retain and Redistribute learning resources. The original resource was licensed as CC BY-NC and so permitted all of these actions. How did they help?

Reuse: I couldn’t have produced the video from scratch without learning some new skills or having sizeable budget, and having much more time.

Revise: I wasn’t happy with the short video / many page turns approach, but was able to revise the video to make it play all the way through in one go.

Remix: The video was then combined with some formative exercises, and a discussion facility was added.

Retain: in order for us to rely on these resources when teaching we need to be sure that the resource remains available. That means taking responsibility for keeping it available. Hence we’ll be hosting it on a site we control.

Redistribute: we will make our version available to others. This isn’t just about “paying forward”; it’s about the benefits that working in an open network brings, see the discussion about nebulous open education below.

One point to make here: the licence has a Non-Commercial restriction. I understand why some people favour this, but imagine if I were an independent consultant brought in to do this work, and charged for it. Would I then be able to use the HELM material? The recent case of a commercial company charging to duplicate CC-licensed material for schools, which a US judge ruled was within the terms of the licence, might apply, but photocopying seems different to remixing. To my mind, the NC clause just complicates things too much.

Open standards, and open source

I hadn’t heard much about David Wiley’s ALMS framework for technical choices that facilitate openness (same page as before, just scroll a bit further), but it deals directly with issues I am very familiar with. Anyone who thinks about it will realise that a copy-protected PDF is not open, no matter what the licence on it says. The ALMS framework breaks the reasoning for this down into four aspects: Access to editing tools, Level of expertise required, Meaningfully editable, Self-sourced. Hmmm. Maybe sometimes it’s clearer not to force category names into acronyms? Anyway, here’s how these helped.

Self-sourced, meaning the distribution format is the source code. This is especially relevant because the reason HELM sent the tweet that alerted me to their materials was that they are re-authoring material from Flash to HTML5. Aside from modern browser support, one big advantage of them doing this is that instead of an impenetrable SWF package I had access to the assets that made up the resource, notably the video clips.

Meaningfully editable: that access to the assets meant that I could edit the content, stringing the videos together and copying and pasting text from the transcript to use as questions.

Level of expertise required: I have found all the tools and services used (OpenShot, H5P, hypothes.is, WordPress) relatively easy to use; however, some experience is required, for example to be familiar with the various plugins available for WordPress and how to install them. Video editing in particular takes some expertise. It’s probably something that most people don’t do very often (I don’t). Maybe the general level of digital literacy we should now aim for is one where people are familiar with photo and video editing tools as well as text-oriented word processing and presentation tools. However, I’m inclined to think that the details of using the H.264 video codec and AAC audio codec, packaged in an MPEG-4 Part 14 container (compare and contrast with VP9 and Ogg Vorbis packaged in a profile of Matroska), should remain hidden from most people. Fortunately, standardisation means that the number of options is smaller than it would otherwise be, and it was possible to find many pages on the web with guidance on the browser compatibility of these options (MP4 and WebM respectively).
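For what it’s worth, the browser-compatibility juggling usually ends in the standard HTML5 pattern of offering both formats and letting the browser pick whichever it supports (the filenames here are made up):

```html
<!-- The browser plays the first source it can decode:
     MP4 (H.264/AAC) first, falling back to WebM (VP9/Vorbis). -->
<video controls width="640">
  <source src="clip.mp4" type="video/mp4">
  <source src="clip.webm" type="video/webm">
  Sorry, your browser doesn't support embedded video.
</video>
```

The fallback text inside the element is shown only by browsers that don’t understand `<video>` at all.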

Access to editing tools, where access starts with low cost. All the tools used were free, most were open source, and all ran on Ubuntu (most can also run on other platforms).

It’s notable that all these ultimately involve open source software and open standards, and work especially well when the “open” in open standards includes free to implement. That complicated bit around MP4 and WebM video formats comes about because of royalty requirements for those implementing MP4.

Open educational practice: nebulous but important

Open education includes, but is more than, open educational resources, open content, open licensing and open standards. It also means talking about what we do. It means that I found out about HELM because they were openly tweeting about their resources, and I learnt about nearly all the tools discussed here in a similar manner. Yes, “pimping your stuff” is importantly open. Open education also means asking questions and writing how-to articles that let non-experts like me deal with complexities like video encoding.

There’s a deeper open education at play here as well. See that resource from HELM that I started with? It started life in the RLO CETL, i.e. in a publicly funded initiative, now long gone. And the reason I and others in UKHE know about Creative Commons and David Wiley’s analysis of open content largely comes down to #UKOER, again a publicly funded initiative. UKOER and the work on open standards and open source were supported by Jisc, publicly funded. Alumni of these initiatives are to be found all over UKHE, through which these initiatives continue to be crucially important in building our capability and capacity to support learners in new and innovative settings.

 

The post Reflections on a little bit of open education (TL;DR: it works). appeared first on Sharing and learning.

Shared WordPress archive for different post types


In a WordPress plugin I have custom post types for different types of publication: books, chapters, papers, presentations, reports. I want one single archive of all of these publications.

I know that the theme template hierarchy allows templates with the pattern archive-$posttype.php, so I tried setting the slug for all the custom post types to ‘presentations’. WordPress doesn’t like that. So what I did was set the slug for one of the publication custom post types to ‘presentations’, which gives me a /presentations/ archive for that custom post type(1). I then edited the archive.php file to use different template parts for custom post types(2):

<?php
// Get the names of all non-built-in (i.e. custom) post types
// that are not excluded from search.
$cpargs = array(
	'_builtin'            => false,
	'exclude_from_search' => false,
);
$custom_post_types = get_post_types( $cpargs, 'names', 'and' );

// On the archive page of any of those custom post types,
// use the shared publication archive template part.
if ( is_post_type_archive( $custom_post_types ) ) {
	get_template_part( 'archive-publication' );
} else {
	get_template_part( 'archive-default' );
}
?>

See anything wrong with this approach? Any comments on how better to do this would be welcome.

Notes:
  1. I could edit the .htaccess file to redirect the /books/, /chapters/ …etc. archives to /publications/, which would be neater in some ways but would make setting up the theme a bit of a faff.
  2. Yes, the code gives all the custom post types that have an archive the same archive. That’s fixable if you build the array of post types for which you want a shared archive manually.
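For what it’s worth, the redirect mentioned in note 1 could be sketched in .htaccess roughly like this (assuming WordPress lives at the site root; the slugs are the ones from my plugin, and this is untested):

```apacheconf
# Send the per-type archives to a single shared /publications/ archive.
RedirectMatch 301 ^/books/?$    /publications/
RedirectMatch 301 ^/chapters/?$ /publications/
RedirectMatch 301 ^/papers/?$   /publications/
RedirectMatch 301 ^/reports/?$  /publications/
```

These rules use Apache’s mod_alias, so they’d need to sit before WordPress’s own rewrite block to take effect.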


Requirements for online exam system


Some time back we started looking for an online exam system for some of our computer science exams. Part of the process was to list a set of “acceptance criteria,” i.e. conditions that any system we looked at had to meet. One of my aims in writing these was to avoid chasing after some mythical ‘perfect’ system and to focus on finding one that would meet our needs. Although the headings below differ, for a system for high-stakes assessment the overarching requirements were security, reliability and scalability, which are reflected below.

Having these criteria was useful in reaching a consensus decision when there was no ‘perfect’ system.

Security:

  • Only authorised staff (+ external examiners) to have access before exam time.
  • Only authorised staff and students to have access during exams.
  • Only authorised staff (+ external examiners) to have access to results.
  • Authorised staff and external examiners to have only the level of access they need, no more.
  • Software must be kept up-to-date and patched in a timely fashion.
  • Must track and report all access attempts.
  • Must not rely on security by obscurity.
  • Secure access must not depend on location.

Audit:

  • Provide suitable access to internal checkers and external examiners.
  • Logging of changes to questions and exams would be desirable.
  • It must be possible to set a point after which exams cannot be changed (e.g. once they are passed by checkers).
  • Must be able to check marking (either by the exam setter or another individual), i.e. provide clear reports on how each question was answered by each candidate.
  • Must be possible to adjust marking/re-mark if an error is found after the exam (e.g. if a mistake was made in setting the correct option for an MCQ, or if a question was found to be ambiguous or too hard).

Pedagogy:

  • Should be possible to reproduce the content of previous CS electronic exams in a similar or better format [this one turned out not to be important].
  • Must be able to decide how many points to assign to each question.
  • Desirable to have provision for alternate answers or insignificant differences in answers (e.g. y=a*b, y=b*a).
  • Desirable to reproduce the style of standard HW CS exam papers, i.e. four potentially multi-part questions, with the student able to choose which 3 to answer.
  • Desirable to be able to provide access to past papers on a formative basis.
  • Desirable to support formative assessment with feedback to students.
  • Must be able to remove access to past papers if necessary.
  • Students should be able to practise with the same (or a very similar) system prior to the exam.
  • Desirable to be able to open up access to a controlled list of websites and tools (c.f. open book exams).
  • Should be able to use mathematical symbols in questions and answers, including student-entered text answers.

Operational:

  • Desirable to have programmatic transfer of staff information to the assessment system (i.e. to know who has what role for each exam).
  • Must be able to transfer student information from the student information system to the assessment system (who sits which exam and at which campus).
  • Desirable to be able to transfer study requirements from the student information system to the assessment system (e.g. who gets extra time in exams).
  • Programmatic transfer of student results from the assessment system to student record systems or the VLE (one of these is required).
  • Desirable to support import/export of tests via QTI.
  • Integration with the VLE for access to past papers, mock exams, and formative assessment in general (e.g. IMS LTI).
  • Hardware & software requirements for test taking must be compatible with the PCs we have (at all campuses and distance learning partners).
  • Set-up requirements for labs in which assessments are taken must be within the capabilities of available technical staff at the relevant centre (at all campuses and distance learning partners).
  • Lab infrastructure* and servers must be able to operate under the load of a full class logging in simultaneously (* at all campuses and distance learning partners).
  • Must have adequate paper backup at all stages, at all locations.
  • Must be provision for study support exam arrangements (e.g. extra time for some students).
  • Need to know whether there is secure API access to responses.
  • API documentation must be open, and response formats open and flexible.
  • Require a support helpline / forum / community.
  • Timing of release of the encryption key.
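To illustrate the QTI requirement above: a minimal multiple-choice item in QTI 2.1 looks roughly like the sketch below (the identifier, title and question content are invented, and a real item would usually also declare scoring outcomes):

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="example-q1" title="Example question"
    adaptive="false" timeDependent="false">
  <responseDeclaration identifier="RESPONSE" cardinality="single"
      baseType="identifier">
    <correctResponse><value>A</value></correctResponse>
  </responseDeclaration>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="true"
        maxChoices="1">
      <prompt>Which of these is 2 + 2?</prompt>
      <simpleChoice identifier="A">4</simpleChoice>
      <simpleChoice identifier="B">5</simpleChoice>
    </choiceInteraction>
  </itemBody>
</assessmentItem>
```

Any system claiming QTI support should be able to round-trip items of at least this shape.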

Other:

  • Costs: clarify how many students would be involved and what this would cost.

 
