Seminar: Using Interactive Visualisations to Analyse the Structure and Treatment of Topics in Learning Materials

Speaker: Tanya Howden, Heriot-Watt University

Date: 11:30 on 14 May 2018

Location: CM F.17, Heriot-Watt University

Abstract: With the amount of information available online growing, it is becoming more and more difficult to find what you are looking for, particularly when you’re in an area that you have very little background in. For example, if you were learning about neural networks for the first time, the number of responses you get from a simple Google search can be overwhelming – how do you know where to start?! This is only one of the many challenges faced when searching for appropriate learning materials.

In this talk, I will be discussing the motivations behind my research interests before introducing and demonstrating a prototype that has been created with the aim of giving learners a more engaging environment, with unified organisation of and access to different materials on one subject.

Three resources about gender bias


These are three resources that look like they might be useful in understanding and avoiding gender bias. They caught my attention because I cover some cognitive biases in the Critical Thinking course I teach. I also cover the advantages of having diverse teams working on problems (the latter based on discussion of How Diversity Makes Us Smarter in SciAm). Finally, like any responsible  teacher in information systems & computer science I am keen to see more women in my classes.

Iris Bohnet on BBC Radio 4 Today programme 3 January.  If you have access via a UK education institution with an ERA licence you can listen to the clip via the BUFVC Box of Broadcasts.  Otherwise here’s a quick summary. Bohnet stresses that much gender bias is unconscious: individuals may not be aware that they act in biased ways. Awareness of the issue and diversity training is not enough on its own to ensure fairness. She stresses that organisational practices and procedures are the easiest effective way to remove bias. One example she quotes is that, to recruit more male teachers, job adverts should not “use adjectives that in our minds stereotypically are associated with women such as compassionate, warm, supportive, caring.” This is not because teachers should not have these attributes or that men cannot be any of these, but because research shows[*] that these attributes are associated with women and may subconsciously deter male applicants.

[*I don’t like my critical thinking students saying broad and vague things like ‘research shows that…’. It’s OK for a 3-minute slot on a breakfast news show, but I’ll have to do better. I hope the details are somewhere in Iris Bohnet, (2016). What Works: Gender Equality by Design]

This raised a couple of questions in my mind. If gender bias is unconscious, how do you know you do it? And, what can you do about it? That reminded me of two other things I had seen on bias over the last year.

An Implicit Association Test (IAT) on Gender-Career associations, which  I took a while back. It’s a clever little test based on how quickly you can classify names and career attributes. You can read more information about them on the Project Implicit website  or try the same test that I did (after a few disclaimers and some other information gathering, it’s currently the first one on their list).

A gender bias calculator for recommendation letters based on the words that might be associated with stereotypically male or female attributes. I came across this via Athene Donald’s blog post Do You Want to be Described as Hard Working? which describes the issue of subconscious bias in letters of reference. I guess this is the flip side of the job advert example given by Bohnet. There is lots of other useful and actionable advice in that blog post, so if you haven’t read it yet do so now.
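For a flavour of how such a calculator might work under the hood, here is a minimal sketch. It is my own toy reconstruction of the idea, and the word lists are illustrative examples only, not the lists the actual calculator uses: count occurrences of stereotypically female-associated and male-associated words in a letter.

```python
# Toy sketch of a reference-letter gender-bias word counter.
# The word lists below are ILLUSTRATIVE ONLY, not the lists used by
# the actual calculator linked above.
import re
from collections import Counter

# Illustrative "communal" (stereotypically female-associated) and
# "agentic" (stereotypically male-associated) adjectives.
FEMALE_ASSOCIATED = {"compassionate", "warm", "supportive", "caring", "helpful"}
MALE_ASSOCIATED = {"confident", "ambitious", "independent", "decisive", "analytical"}

def gendered_word_counts(text: str) -> dict:
    """Count occurrences of stereotypically gendered words in a text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {
        "female_associated": {w: counts[w] for w in FEMALE_ASSOCIATED if counts[w]},
        "male_associated": {w: counts[w] for w in MALE_ASSOCIATED if counts[w]},
    }

letter = "She is a warm, caring and supportive colleague, and a confident presenter."
print(gendered_word_counts(letter))
```

A real tool would, of course, use validated word lists from the research literature and normalise for letter length; the point here is only that the core mechanism is simple word counting.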

The post Three resources about gender bias appeared first on Sharing and learning.

Reflective learning logs in computer science


Do you have any comments, advice or other pointers on how to guide students to maintaining high quality reflective learning logs?

Context: I teach part of a first year computer science / information systems course on Interactive Systems.  We have some assessed labs where we set the students fixed tasks to work on, and there is coursework. For the coursework the students have to create an app of their own devising. They start with something simple (think of it as a minimum viable product) but then extend it to involve interaction with the environment (using their device’s sensors), other people, or other systems. Among the objectives of the course are that students learn to take responsibility for their own learning, appreciate their own strengths and weaknesses, and learn what is possible within time constraints. We also want students to gain experience in conceiving, designing and implementing an interactive app, and we want them to reflect on and provide evidence about the effectiveness of the approach they took.

Part of the assessment for this course is by way of the students keeping reflective learning logs, which I am now marking.  I am trying to think how I could better guide the students to write substantial, analytic posts (including how to encourage engagement from those students who don’t see the point of keeping a log).

Guidance and marking criteria

Based on those snippets of feedback that I found myself repeating over and over, here’s what I am thinking of providing as guidance to next year’s students:

  • The learning log should be filled in whenever you work on your app, which should be more than just during the lab sessions.
  • For set labs, entries with the following structure will help bring out the analytic elements:
    • What was I asked to do?
    • What did I anticipate would be difficult?
    • What did I find to be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students?
    • What would I do differently if I had to do this again?
  • For coursework entries, the structure can be amended to:
    • What did I do?
    • What did I find to be difficult? How did this compare to what I anticipated would be difficult?
    • What helped me achieve the outcome? These may be resources that helped in understanding how to do the task or tactics used when stuck.
    • What problems could I not overcome?
    • What feedback did I get from teaching staff and other students on my work so far?
    • What would I do differently if I had to do this again?
    • What do I plan to do next?
    • What do I anticipate to be difficult?
    • How do I plan to overcome outstanding issues and expected difficulties?

These reflective learning logs are marked out of 5 in the middle of the course and again at the end (so they represent 10% of the total course mark), according to the following criteria:

  1. contributions: No entries, or very brief (i.e. one or two sentences) entries only: no marks. Regular entries, more than once per week, with substantial content: 2 marks.
  2. analysis: Brief account of events only or verbatim repetition of notes: no marks. Entries which include meaningful plans with reflection on whether they worked; analysis of problems and how they were solved; and evidence of re-evaluating plans as a result of what was learnt during the implementation and/or as a result of feedback from others: 3 marks.
  3. note: there are other ways of doing really well or really badly than those covered above.

Questions

Am I missing anything from the guidance and marking criteria?

How can I encourage students who don’t see the point of keeping a reflective learning log? I guess some examples of where such logs matter in professional computing practice would help.

These are marked twice, using rubrics in Blackboard, in the middle of the semester and at the end. Is there any way of attaching two grading rubrics to the same assessed log in Blackboard? Or a workaround to set the same blog as two graded assignments?

Answers on a postcard… Or the comments section below. Or email.

The post Reflective learning logs in computer science appeared first on Sharing and learning.


XKCD or OER for critical thinking

I teach half a course on Critical Thinking to 3rd year Information Systems students. A colleague takes the first half, which covers statistics. I cover how science works, including the scientific method, experimental design, how to read a research paper, how to spot dodgy media reports of science and pseudoscience, and reproducibility in science; how to argue, which is mostly how to spot logical fallacies; and a little on cognitive development. One of the better things about teaching on this course is that a lot of it is covered by XKCD, and that XKCD is CC licensed. Open Education Resources can be fun.

[Each topic below was illustrated with an embedded XKCD comic and a link to its explanation; the images are not reproduced here.]

  • how scientists think
  • hypothesis testing
  • blind trials
  • interpreting statistics
  • p-hacking
  • confounding variables
  • extrapolation
  • confirmation bias in information seeking
  • undistributed middle
  • post hoc ergo propter hoc (or correlation ≠ causation)
  • bandwagon fallacy… and fallacy fallacy
  • diversity and inclusion
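One of those topics, p-hacking, lends itself to a quick demonstration. The sketch below is my own illustration (not from the course materials): it runs the same significance test on twenty batches of pure noise, in the spirit of XKCD’s jelly-bean comic, and some comparisons come out “significant” at p < 0.05 by chance alone.

```python
# Demonstration of p-hacking / multiple comparisons: run the same test on
# twenty batches of pure noise and some results look "significant" by chance.
import random

random.seed(42)

def permutation_pvalue(a, b, n_perm=500):
    """Two-sided permutation test for a difference in means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm

# Twenty "jelly bean colours": both groups are drawn from the SAME
# distribution, so every "significant" result is a false positive.
significant = 0
for colour in range(20):
    group_a = [random.gauss(0, 1) for _ in range(30)]
    group_b = [random.gauss(0, 1) for _ in range(30)]
    if permutation_pvalue(group_a, group_b) < 0.05:
        significant += 1

print(significant, "of 20 null comparisons were 'significant' at p < 0.05")
```

With a 5% threshold you expect roughly one false positive in twenty tests, which is exactly why testing many hypotheses and reporting only the “hits” is so misleading.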

Quick notes: Ian Pirie on assessment

Ian Pirie, Assistant Principal for Learning Developments at the University of Edinburgh, came out to Heriot-Watt yesterday to talk about some assessment and feedback initiatives at UoE.  The background ideas motivating what they have been doing are not new, and Ian didn’t say that they were: they centre around the pedagogy of assessment & feedback as learning, and the generally low student satisfaction relating to feedback shown through the USS. Ian did make a very compelling argument about the focus of assessment: he asked whether we thought the point of assessment was

  1. to ensure standards are maintained [e.g. only the best will pass]
  2. to show what students have learnt,
    or
  3. to help students learn.

The responses from the room were split 2:1 between answers 2 and 3, showing progress away from the exam-as-a-hurdle model of assessment. Ian’s excellent point was that if you design your assessment to help students learn, which will mean doing things like making sure your assessments address the right objectives, that the students understand these learning objectives and criteria, and that they get feedback which is useful to them, then you will also address points 2 and 1.

Ideas I found interesting from the initiatives at UoE included:

  • Having students describe learning objectives in their own words, to check they understand them (or at least have read them).
  • Giving students verbal feedback and having them write it up themselves (for the same reason). Don’t give students their mark until they have done this: that way they won’t avoid doing it, and besides, once students know they have / have not done “well enough” their interest in the assessment wanes.
  • Peer marking with adaptive comparative judgement. Getting students to rank other students’ work leads to reliable marking (the course leader can assess which pieces of work sit on grade boundaries if that’s what you need).

In the context of that last one, Ian mentioned No More Marking, which has links with the Mathematics Learning Support Centre at Loughborough University. I would like to know more about how many comparisons need to be made before a reliable rank ordering is arrived at; this will influence how practical the approach is, given the number of students on a course and the length of the work being marked (you wouldn’t want all students to have to mark all submissions if each submission was many pages long). But given the advantages of peer marking in getting students to reflect on the objectives of a specific assessment, I am seriously considering using the approach to mark a small piece of coursework from my design for online learning course. There’s the additional rationale there that it illustrates the use of technology to manage assessment and facilitate a pedagogic approach, showing that computer aided assessment goes beyond multiple-choice objective tests, which is part of the syllabus for that course.
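For intuition about how a rank ordering can emerge from pairwise comparisons alone, here is a toy simulation. It is my own sketch, not the algorithm No More Marking uses (comparative judgement tools typically fit a Bradley-Terry-style model with adaptive pairing; this sketch just ranks by win rate over random pairs). Items have a hidden “true quality”, and a simulated judge picks the genuinely better of two items 80% of the time.

```python
# Toy simulation of comparative judgement: recover a ranking of student
# work from noisy pairwise comparisons only. All numbers are hypothetical.
import random

random.seed(7)

N_ITEMS = 10
true_quality = [random.uniform(0, 10) for _ in range(N_ITEMS)]

def judge(i, j):
    """A noisy judge: prefers the truly better item 80% of the time."""
    better, worse = (i, j) if true_quality[i] > true_quality[j] else (j, i)
    return better if random.random() < 0.8 else worse

def rank_from_comparisons(n_comparisons):
    """Rank items by win rate over randomly chosen pairs."""
    wins = [0] * N_ITEMS
    played = [0] * N_ITEMS
    for _ in range(n_comparisons):
        i, j = random.sample(range(N_ITEMS), 2)
        wins[judge(i, j)] += 1
        played[i] += 1
        played[j] += 1
    # Win rate is a crude stand-in for a fitted Bradley-Terry score.
    return sorted(range(N_ITEMS),
                  key=lambda k: wins[k] / max(played[k], 1),
                  reverse=True)

true_rank = sorted(range(N_ITEMS), key=lambda k: true_quality[k], reverse=True)
for n in (20, 100, 500):
    estimated = rank_from_comparisons(n)
    matches = sum(1 for a, b in zip(estimated, true_rank) if a == b)
    print(f"{n:4d} comparisons: {matches}/10 rank positions match the truth")
```

The general pattern, which is the practical point, is that agreement with the true ranking improves as comparisons accumulate; adaptive pairing (comparing items of similar estimated quality) is what lets real systems get away with far fewer comparisons than random pairing would need.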