Reflective learning logs in computer science

Do you have any comments, advice or other pointers on how to guide students to maintaining high quality reflective learning logs?

Context: I teach part of a first-year computer science / information systems course on Interactive Systems. We have some assessed labs where we set the students fixed tasks to work on, and there is coursework. For the coursework the students have to create an app of their own devising. They start with something simple (think of it as a minimum viable product) and then extend it to involve interaction with the environment (using their device’s sensors), with other people, or with other systems. Among the objectives of the course are that students learn to take responsibility for their own learning, to appreciate their own strengths and weaknesses, and to understand what is possible within time constraints. We also want students to gain experience in conceiving, designing and implementing an interactive app, and we want them to reflect on and provide evidence about the effectiveness of the approach they took.

Part of the assessment for this course is by way of the students keeping reflective learning logs, which I am now marking. I am trying to think how I could better guide the students to write substantial, analytic posts (including how to encourage engagement from those students who don’t see the point of keeping a log).

Guidance and marking criteria

Based on those snippets of feedback that I found myself repeating over and over, here’s what I am thinking of providing as guidance to next year’s students:

  • The learning log should be filled in whenever you work on your app, which should be more than just during the lab sessions.
  • For set labs, entries with the following structure will help bring out the analytic elements:
    • What was I asked to do?
    • What did I anticipate would be difficult?
    • What did I find to be difficult?
    • What helped me achieve the outcome? These might be resources that helped me understand how to do the task, or tactics I used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students?
    • What would I do differently if I had to do this again?
  • For coursework entries, the structure can be amended to:
    • What did I do?
    • What did I find to be difficult? How did this compare to what I anticipated would be difficult?
    • What helped me achieve the outcome? These might be resources that helped me understand how to do the task, or tactics I used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students on my work so far?
    • What would I do differently if I had to do this again?
    • What do I plan to do next?
    • What do I anticipate to be difficult?
    • How do I plan to overcome outstanding issues and expected difficulties?

These reflective learning logs are marked out of 5 in the middle of the course and again at the end (so they represent 10% of the total course mark), according to the following criteria:

  1. Contributions: no entries, or only very brief (one or two sentence) entries: no marks. Regular entries, more than once per week, with substantial content: 2 marks.
  2. Analysis: a brief account of events only, or verbatim repetition of notes: no marks. Entries which include meaningful plans with reflection on whether they worked, analysis of problems and how they were solved, and evidence of re-evaluation of plans as a result of what was learnt during the implementation and/or as a result of feedback from others: 3 marks.
  3. Note: there are other ways of doing really well or really badly than those covered above.

Questions

Am I missing anything from the guidance and marking criteria?

How can I encourage students who don’t see the point of keeping a reflective learning log? I guess some examples of where such logs are important in professional computing practice would help.

These are marked twice, using rubrics in Blackboard, in the middle of the semester and at the end. Is there any way of attaching two grading rubrics to the same assessed log in Blackboard? Or a workaround to set the same blog as two graded assignments?

Answers on a postcard… Or the comments section below. Or email.

ISWC 2016 Trip Report

It has now been almost two months since ISWC 2016, where I was the Resources Track chair with Marta Sabou. This has given me time to reflect on the conference, in between a hectic schedule of project meetings, workshops, conferences, and a PhD viva.

The most enjoyable part of the conference for me was the CoLD Workshop Debate on the State of Linked Data. The workshop organisers had arranged for six prominent proponents of Linked Data to argue that we have failed and that Linked Data will die away.

  1. Ruben Verborgh argued that Linked Data will be destroyed by the need to centralise data, poor infrastructure, and the research community. (Aside: there was certainly concern on the final point, as there were only three women in the room.)
  2. Axel Polleres took the motto “Let’s make RDF great again!” Axel’s central argument was that most open data is actually published in CSV format and that a lot can be achieved with 3-star open data.
  3. Paul Groth argued that we should be concentrating on making our data processable by machines. What we currently have is a format that is aimed at both humans and machines but satisfies neither.
  4. Chris Bizer covered cost incentives. While there is an incentive to provide some basic Schema.org markup on pages, i.e. getting picked up by search engines, there is no financial incentive to provide links to other resources. My take on this is that there is a disincentive, as it would take traffic away from your (eCommerce) site and therefore lose you revenue.
  5. Avi Bernstein then did a fantastic impression of a Wee Free minister, telling us that we had all sinned and were following the wrong path; all fire and brimstone.
  6. Juan Reutter argued that we need to provide a workable ecosystem.

So the question is: has the Linked Data community failed? I think the debate highlighted that the community has made many contributions in a short space of time, but that it is time to get these into the mainstream. Perhaps our community is not the best placed to do the required sales job, but we have had some successes, e.g. the EBI RDF platform, the Open PHACTS Drug Discovery Platform, and the BBC Olympic web site.

The main conference was underpinned by three fantastic and varied keynotes. First was Kathleen McKeown, who gave us insights into the extraction of knowledge from different forms of text. Second was Christian Bizer, whose main message was that we as a community need to take structured data in whatever form it comes, just as search engines have exploited metadata and page structure for a long time. Finally there was Hiroaki Kitano from the Sony Corporation. This has got to be the densest keynote I have ever heard, with more ideas per minute than a dance tune has beats. His challenge to the community was that we should aim to have an AI system win a scientific Nobel Prize by 2050: the system should develop a hypothesis, test it, and generate a ground-breaking conclusion worthy of the prize.

There were many great and varied talks during the conference. It really is worth looking through the programme to find those of interest to you (all the papers are linked and available). As ever, the poster and demo session, advertised in the minute madness session, demonstrated the breadth of cutting-edge work going on in the community, as did the lightning talk session.

The final day of the conference was particularly weird for me. As the chair of a session I ended up sharing a bottle of fine Italian wine with a presenter during his talk (it would have been rude not to), and experiencing an earthquake during a presentation on an ontology for modelling the soil beneath our cities, in particular the causes of damage to that soil.

The conference afforded some opportunities for fun as well. A few of the organising committee managed to get a visit to the K computer, the world’s fifth-fastest supercomputer, which is cooled with water. The computer was unveiled in a very James Bond style, with a “Now I’m going to have to kill you!” reveal of the evil enemy’s master plan. There was also a highly entertaining samurai sword fighting demonstration during the conference banquet.

During the conference, my Facebook feed was filled with exclamations about the complexity of the toilets. Following the conference, it was filled with exclamations of returning to lands of uncivilised toilets. Make of this what you will.

HCLS Tutorial at SWAT4LS 2016

On 5 December 2016 I presented a tutorial [1] on the Health Care and Life Sciences Community Profile (HCLS Datasets) at the 9th International Semantic Web Applications and Tools for the Life Sciences Conference (SWAT4LS 2016). Below you can find the slides I presented.

The 61 metadata properties from 18 vocabularies reused in the HCLS Community Profile are available in this spreadsheet (.ods).
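
As a rough illustration of the kind of description the profile enables (this sketch is mine, not part of the tutorial materials; the profile and the spreadsheet above define the actual required and recommended properties), a dataset description reusing a few of those vocabularies could be built with Python and rdflib:

from rdflib import Graph, Literal, URIRef, Namespace
from rdflib.namespace import DCTERMS, RDF, XSD

# Namespaces for two of the vocabularies reused by the HCLS Community Profile.
DCAT = Namespace("http://www.w3.org/ns/dcat#")
PAV = Namespace("http://purl.org/pav/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("dcat", DCAT)
g.bind("pav", PAV)

# A hypothetical dataset version description; the URI and all values are invented.
dataset = URIRef("http://example.org/dataset/my-dataset/version/1.0")
g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("My example dataset", lang="en")))
g.add((dataset, DCTERMS.description,
       Literal("An illustrative dataset description.", lang="en")))
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))
g.add((dataset, PAV.version, Literal("1.0")))
g.add((dataset, DCTERMS.issued, Literal("2016-12-05", datatype=XSD.date)))

# Print the description as Turtle.
print(g.serialize(format="turtle"))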

[1] M. Dumontier, A. J. G. Gray, and S. M. Marshall, “Describing Datasets with the Health Care and Life Sciences Community Profile,” in Semantic Web Applications and Tools for Life Sciences (SWAT4LS 2016), Amsterdam, The Netherlands, 2016.
[Bibtex]
@InProceedings{Gray2016SWAT4LSTutorial,
abstract = {Access to consistent, high-quality metadata is critical to finding, understanding, and reusing scientific data. However, while there are many relevant vocabularies for the annotation of a dataset, none sufficiently captures all the necessary metadata. This prevents uniform indexing and querying of dataset repositories. Towards providing a practical guide for producing a high quality description of biomedical datasets, the W3C Semantic Web for Health Care and the Life Sciences Interest Group (HCLSIG) identified Resource Description Framework (RDF) vocabularies that could be used to specify common metadata elements and their value sets. The resulting HCLS community profile covers elements of description, identification, attribution, versioning, provenance, and content summarization. The HCLS community profile reuses existing vocabularies, and is intended to meet key functional requirements including indexing, discovery, exchange, query, and retrieval of datasets, thereby enabling the publication of FAIR data. The resulting metadata profile is generic and could be used by other domains with an interest in providing machine readable descriptions of versioned datasets. The goal of this tutorial is to explain elements of the HCLS community profile and to enable users to craft and validate descriptions for datasets of interest.},
author = {Michel Dumontier and Alasdair J. G. Gray and M. Scott Marshall},
title = {Describing Datasets with the Health Care and Life Sciences Community Profile},
OPTcrossref = {},
OPTkey = {},
booktitle = {Semantic Web Applications and Tools for Life Sciences (SWAT4LS 2016)},
year = {2016},
OPTeditor = {},
OPTvolume = {},
OPTnumber = {},
OPTseries = {},
OPTpages = {},
month = dec,
address = {Amsterdam, The Netherlands},
OPTorganization = {},
OPTpublisher = {},
note = {(Tutorial)},
url = {http://www.swat4ls.org/workshops/amsterdam2016/tutorials/t2/},
OPTannote = {}
}

XKCD or OER for critical thinking

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read a research paper, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by XKCD, and that XKCD is CC licensed. Open Education Resources can be fun.

Topics covered, each illustrated in the post with an embedded XKCD comic (the “explain” links to the relevant explainxkcd pages are omitted here):

  • How scientists think
  • Hypothesis testing (“Hell, my eighth grade science class managed to conclusively reject it just based on a classroom experiment. It's pretty sad to hear about million-dollar research teams who can't even manage that.”)
  • Blind trials
  • Interpreting statistics
  • p-hacking
  • Confounding variables (“There are also a lot of global versions of this map showing traffic to English-language websites which are indistinguishable from maps of the location of internet users who are native English speakers.”)
  • Extrapolation
  • Confirmation bias in information seeking
  • Undistributed middle
  • Post hoc ergo propter hoc, or correlation =/= causation (“He holds the laptop like that on purpose, to make you cringe.”)
  • Bandwagon fallacy… and fallacy fallacy
  • Diversity and inclusion

Seminar: Developing a simple RDF graph library

Date: 11:15, 14 November

Venue: F.17. Colin Maclaurin Building, Heriot-Watt University

Title: Developing a simple RDF graph library

Speaker: Rob Stewart, Heriot-Watt University

Abstract: In this talk I shall present the design and implementation details of a simple Haskell library for working with RDF data. The library supports parsing and pretty printing for the XML, Turtle and NTriples RDF serialisation formats, and graph querying. It has multiple in-memory representations for RDF graphs, exposed as a parameter so that the programmer can meet their application-specific needs.

The presentation will cover: the API, how the various RDF graph representations are implemented internally, the W3C test suite that the library uses to ensure conformance with the W3C RDF specifications, and the library’s performance benchmarking suite.

LRMI at #DCMI16 Metadata Summit, Copenhagen

I was in Copenhagen last week, at the Dublin Core Metadata Initiative 2016 conference, where I ran a workshop entitled “Building on Schema.org to describe learning resources” (as one of my colleagues pointed out, thinking of the snappy title never quite happened). Here’s a quick overview of it.

There were three broad parts to the workshop: presentations on the background organisations and technology; presentations on how LRMI is being used; and a workshop where attendees got to think about what could be next for LRMI.

Fundamentals of Schema.org and LRMI

An introduction to Schema.org (Richard Wallis)

A brief history of Schema.org, which is fast becoming the de facto vocabulary for structured web data shared with search engines and others to understand, interpret and load into their knowledge graphs. Whilst addressing the issue of simple structured markup across the web, it is also, through its extension capabilities, facilitating the development of sector-specific enhancements that will be widely understood.

An Introduction to LRMI (Phil Barker)

A short introduction to the Learning Resource Metadata Initiative, originally a project which developed a common metadata framework for describing learning resources on the web. LRMI metadata terms have been added to Schema.org. The task group currently works to support those terms as a part of Schema.org and as a DCMI community specification.
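
For a flavour of what this looks like in practice, here is a minimal sketch (my own illustration, not taken from the workshop slides) of a Schema.org description of a learning resource using a few of the LRMI terms, generated with a short Python script. The property values are invented, and the full set of terms and their definitions are given by LRMI/Schema.org:

import json

# An illustrative Schema.org description of a learning resource using some
# LRMI terms (learningResourceType, educationalUse, typicalAgeRange,
# educationalAlignment). The values are made up for this example.
resource = {
    "@context": "http://schema.org/",
    "@type": "CreativeWork",
    "name": "Introduction to hypothesis testing",
    "learningResourceType": "lecture notes",
    "educationalUse": "self study",
    "typicalAgeRange": "17-25",
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "teaches",
        "targetName": "Experimental design",
    },
}

# Serialise as JSON-LD, ready to embed in a page in a
# <script type="application/ld+json"> block.
print(json.dumps(resource, indent=2))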

Use of LRMI

Overview of LRMI in the wild (Phil Barker)

The results of a series of case studies looking at initial implementations are summarised, showing that LRMI metadata is used in various ways, not all of which are visible to the outside world. Estimates are given of how many organisations are using LRMI properties in publicly available websites and pages, and some examples are shown.

The Learning Registry and LRMI (Steve Midgley)

The Learning Registry is a new approach to capturing, connecting and sharing data about learning resources available online, with the goal of making it easier for educators and students to access the rich content available in our ever-expanding digital universe. This presentation will explain what the Learning Registry is, how it is used and how it uses LRMI / Schema.org metadata. This will include what has been learned about structuring, validating and sharing LRMI resources, including expressing alignments to learning standards, and validation of JSON-LD and JSON Schema.
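
As a small, hedged sketch of what such validation can look like (my own illustration, not from Steve’s slides; the actual Learning Registry and LRMI schemas are far richer), a JSON Schema check over a fragment of LRMI-style JSON-LD can be run in Python with the jsonschema package:

import jsonschema  # pip install jsonschema

# A toy JSON Schema: require a type and name, and constrain
# learningResourceType to be a string. Purely illustrative.
schema = {
    "type": "object",
    "required": ["@type", "name"],
    "properties": {
        "@type": {"const": "CreativeWork"},
        "name": {"type": "string"},
        "learningResourceType": {"type": "string"},
    },
}

resource = {
    "@type": "CreativeWork",
    "name": "Introduction to hypothesis testing",
    "learningResourceType": "lecture notes",
}

# Raises jsonschema.ValidationError if the resource does not match the schema.
jsonschema.validate(instance=resource, schema=schema)
print("resource is valid against the toy schema")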

[On the day we failed to connect to Steve via Skype, but here are his slides that we missed.]

What next for LRMI?

I presented an overview of nine ideas that LRMI could prioritise for future work. These ideas were the basis for a balloon debate, which I will summarise in more detail in my next post.