David Robb
PhD MSc BSc

My Google Scholar profile.
My ResearchGate profile.
My HWU Staff page.

I am a Research Fellow at Heriot-Watt University. I completed my PhD (Sept 2011 to Feb 2015) here at HWU.

Current EPSRC/UKRI-funded post at the National Robotarium in Edinburgh

I am currently working at the National Robotarium in Edinburgh on the DeMILO project (EPSRC Developing Machine Learning-empowered Responsive Manufacture of Industrial Laser Systems), the EPSRC HUman-machine teaming for Maritime Environments project (HUME), and the UKRI Trustworthy Autonomous Systems Node in Trust.

Work for the ORCA Hub funded by EPSRC and industry

I was Experimental Lead on the Human Robot Interaction theme of the ORCA Hub project (Offshore Robotics for Certification of Assets), funded by EPSRC and industry. The aims of the hub project can be found here: https://orcahub.org. The ORCA Hub team included over 40 collaborating researchers: some here at the Edinburgh Centre for Robotics, a collaboration between Heriot-Watt University and The University of Edinburgh; others at Imperial College London, University of Oxford, and University of Liverpool.

Previously

Previously, I worked on MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), funded by dstl under the Defence and Security Accelerator theme, Revolutionise the human information relationship for Defence, with industry partners SeeByte and Tekever.

Other Research Interests

My other research areas are:

  • promoting research collaboration through CSCW
  • user experience when using visualisations of complex data
  • image-based feedback.

Development work on well-sorted.org

I created the additional tools deployed on the www.well-sorted.org website to allow live online input from breakout groups during meetings that have been organised using the original Well Sorted pre-meeting idea organisation tools. The in-meeting tools let meeting attendees enter a record of their breakout group discussions live during the meeting, while the meeting organiser monitors the progress of all the groups and can later use the discussion record to support presentations of the group discussions. One part of the tools is an interactive supported networking session visualisation tool. An example of the output from this tool can be seen here (scroll down that page). That tool featured in a CSCW demo paper (ACM Digital Library link). YouTube tutorial videos illustrating these in-meeting tools can be seen here (see the playlist of videos entitled “Well Sorted In-Meeting Tools”).

Selected Recent Publications

D.A. Robb, J. Lopes, M.I. Ahmad, P. E. McKenna, X. Liu, K. Lohan and H. Hastie, 2023. Seeing Eye to Eye: Trustworthy Embodiment for Task-based Conversational Agents. Frontiers in Robotics and AI, August 2023, Sec. Human-Robot Interaction, Volume 10 – 2023. https://doi.org/10.3389/frobt.2023.1234767.

M. Y. Lim, D. A. Robb, B. W. Wilson, H. Hastie, 2023. Feeding the Coffee Habit: A Longitudinal Study of a Robo-Barista, RO-MAN’23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. Awarded Winner IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction. (Link to Prize). (Author Accepted Manuscript version). https://doi.org/10.1109/RO-MAN57019.2023.10309621.

M. Moujahid, D.A. Robb, C. Dondrup, H. Hastie, 2023. Come closer: The Effects of Robot Personality on Human Proxemics Behaviours, RO-MAN’23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. Awarded Honourable Mention as Finalist in IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction (Link to Prize). (Author Accepted Manuscript version). https://doi.org/10.1109/RO-MAN57019.2023.10309333

I. Rakhmatulin, D. Risbridger, D. A. Robb, R. Carter, M. J. Chantler, and M. S. Erden, 2023. Addressing shortcomings in manual alignment of laser optics via automation tools. CASE’23, In IEEE International Conference on Automation Science and Engineering. https://doi.org/10.1109/CASE56687.2023.10260476.

D.A. Robb, X. Liu, H. Hastie, 2023. Explanation Styles for Trustworthy Autonomous Systems, AAMAS’23, In Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems. (pdf open access from AAMAS proceedings). https://dl.acm.org/doi/abs/10.5555/3545946.3598913.

Y. Dragostinov, D. Harðardóttir, P. E. McKenna, D. Robb, B. Nesset, M. I. Ahmad, M. Romeo, M.Y. Lim, C. Yu, Y. Jang, M. Diab, A. Cangelosi, Y. Demiris, H. Hastie, G. Rajendran, 2022. Preliminary psychometric scale development using the mixed methods Delphi technique, Methods in Psychology, p. 100103. https://doi.org/10.1016/j.metip.2022.100103.

M. Y. Lim, J. D. A. Lopes, D. A. Robb, B. W. Wilson, M. Moujahid, E. De Pellegrin, H. Hastie, 2022. We are all Individuals: The Role of Robot Personality and Human Traits in Trustworthy Interaction, RO-MAN’22, Proceedings of the 2022 IEEE International Conference on Robot and Human Interactive Communication. Awarded IEEE RO-MAN 2022 KROS Interdisciplinary Research Award in Social Human-Robot Interaction. https://doi.org/10.1109/RO-MAN53752.2022.9900772.

B. Nesset, D.A. Robb, J. D. A. Lopes, H. Hastie. Transparency in HRI: Trust and Decision Making in the Face of Robot Errors, HRI’21, Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/3434074.3447183.

M.I. Ahmad, I. Keller, D.A. Robb and K. Lohan, 2020. A framework to estimate cognitive load using physiological data. Personal and Ubiquitous Computing, 1-15. https://doi.org/10.1007/s00779-020-01455-7.

D.A. Robb, M.I. Ahmad, C. Tiseo, S. Aracri, A. C. McConnell, V. Page, C. Dondrup, F. J. Chiyah Garcia, H.-N. Nguyen, È. Pairet, P. Ardón Ramírez, T. Semwal, H.M. Taylor, L.J. Wilson, D. Lane, H. Hastie, K. Lohan. Robots in the Danger Zone: Exploring Public Perception through Engagement, HRI’20, Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/3319502.3374789.

H. Hastie, D.A. Robb, J. Lopes, M. Ahmad, P. Le Bras, X. Liu, R.P.A. Petrick, M. J. Chantler. Challenges in Collaborative HRI for Remote Robot Teams, CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems. https://arxiv.org/abs/1905.07379.

F. J. Chiyah Garcia, D.A. Robb, H. Hastie. Explainable Autonomy through Natural Language, ES4CPS2019, the Report of the GI-Dagstuhl Seminar 19023 on Explainable Software for Cyber-Physical Systems, Schloss Dagstuhl, Germany. (Workshop website). https://arxiv.org/abs/1904.11851.

D.A. Robb, J. Lopes, S. Padilla, A. Laskov, F. J. Chiyah Garcia, X. Liu, J.S. Willners, N. Valeyrie, K. S. Lohan, D. Lane, P. Patron, Y. Petillot, M. J. Chantler, H. Hastie. Exploring Interaction with Remote Autonomous Systems using Conversational Agents, DIS’19, Proceedings of the 2019 ACM Conference on Designing Interactive Systems. https://doi.org/10.1145/3322276.3322318.

D.A. Robb, J.S. Willners, N. Valeyrie, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie, Y. Petillot. A Natural Language Interface and Relayed Acoustic Communications for Improved Command and Control of AUVs, AUV2018, Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Symposium. pdf at arXiv. The definitive version is here at IEEE Xplore.

F. J. Chiyah Garcia, D.A. Robb, X. Liu, A. Laskov, P. Patron, H. Hastie. Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models, INLG’18, Proceedings of the 11th International Conference on Natural Language Generation, ACL. HWU repository page, pdf at HWU repository, pdf at ACL.

D.A. Robb, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie. Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface, ICMI’18, Proceedings of 20th ACM International Conference on Multimodal Interaction. https://doi.org/10.1145/3242969.3242974.

H. Hastie, F. J. Chiyah Garcia, D.A. Robb, P. Patron, A. Laskov. MIRIAM: a multimodal chat-based interface for autonomous systems, ICMI’17, Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 495-496. https://doi.org/10.1145/3136755.3143022.

D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. Image-based Emotion Feedback: How Does the Crowd Feel? And Why? DIS’17: Proceedings of the 2017 ACM Conference on Designing Interactive Systems. Awarded Honourable Mention ACM DIS 2017 Research Papers and Notes. https://doi.org/10.1145/3064663.3064665.
(See also dis2017.org/awards)

D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. A Picture Paints a Thousand Words but Can it Paint Just One? DIS’16: Proceedings of the 2016 ACM Conference on Designing Interactive Systems. https://doi.org/10.1145/2901790.2901791.

D. A. Robb, T. S. Methven, S. Padilla, M. J. Chantler. Well-Connected: Promoting Collaboration by Effective Networking, CSCW’16: 19th ACM Conference on Computer Supported Cooperative Work & Social Computing Proceedings Companion. https://doi.org/10.1145/2818052.2874333.
See an example of this application in action near the bottom of this page here.

D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. Crowdsourced Feedback With Imagery Rather Than Text: Would Designers Use It? CHI’15: 33rd Annual ACM Conference on Human Factors in Computing Systems Proceedings. https://doi.org/10.1145/2702123.2702470.

D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. Moodsource: Enabling Perceptual and Emotional Feedback from Crowds, CSCW’15: 18th ACM Conference on Computer Supported Cooperative Work & Social Computing Proceedings Companion. https://doi.org/10.1145/2685553.2702676.

PhD Research

My PhD work was on the CDI “Head-Crowd” project, an interdisciplinary project in the Schools of Mathematics and Computer Science (MACS) and Textile and Design (TEX). The focus was on perceptual image browsing, visual communication, visual summary, and interpretation of visual design feedback.

Visual Crowd Communication

Moodsource: Enabling Perceptual and Emotional Feedback from Crowds

Part of my work on capturing visual feedback has involved building a perceptually relevant image browser populated with a set of abstract images. The images were screen-scraped from Flickr.com (See project acknowledgements below). Human perceptions of the relative similarity of the images were captured using techniques devised by Dr. Fraser Halley (See his PhD thesis under “PhD Thesis” in the Publications tab at the top of this page).

Abstract Image Set in the SOM Browser


SOM thumb image

The project abstract image set can be viewed in the Self Organising Map Browser. Images judged by observers as being highly similar to each other are grouped together in stacks. Adjacent stacks contain images judged more similar to each other than stacks farther apart. Note how the observers’ similarity judgements and the SOM construction algorithm have resulted in apparently themed regions in the browser, e.g. architectural at the top right, and highly abstract and colourful at the top left. Click the thumbnail image above to try out an HTML version of the SOM browser. (In our experiment it was built in Flash and deployed to iOS on iPads.)
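For the curious, here is a minimal Python sketch of how images could be grouped into a grid of stacks with a self-organising map. It assumes each image is represented by its vector of crowdsourced similarity judgements and uses the third-party minisom library; the grid size, data, and library choice are illustrative assumptions, not the project’s actual (Flash) implementation.

```python
# Illustrative sketch only: organising images into a 7x5 grid of stacks
# with a self-organising map. Assumes each image is described by its row
# of a pairwise perceptual similarity matrix (placeholder random data here).
import numpy as np
from minisom import MiniSom  # third-party library: pip install minisom

n_images = 100  # placeholder; not the project's actual set size
rng = np.random.default_rng(0)
similarity_vectors = rng.random((n_images, n_images))  # stand-in data

som = MiniSom(7, 5, input_len=n_images, sigma=1.5,
              learning_rate=0.5, random_seed=0)
som.train_random(similarity_vectors, num_iteration=5000)

# Each image lands in the stack of its best-matching map unit, so images
# with similar similarity profiles end up in the same or adjacent stacks.
stacks = {}
for idx, vec in enumerate(similarity_vectors):
    stacks.setdefault(som.winner(vec), []).append(idx)

for cell in sorted(stacks):
    print(cell, len(stacks[cell]), "images")
```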

Try the browser in its 7×5 stack format, as used in some of the more recent experiments, which had a web app interface.

The SOM browser features in the recent publications and additionally in the following publications:

S. Padilla, D. Robb, F. Halley, M. J. Chantler. Browsing Abstract Art by Appearance. Predicting Perceptions: The 3rd International Conference on Appearance, 17-19 April 2012, Edinburgh, UK. Conference Proceedings Publication, ISBN 978-1-4716-6869-2, pp. 100-103. Download PDF.

S. Padilla, F. Halley, D. Robb, M. J. Chantler. Intuitive Large Image Database Browsing using Perceptual Similarity Enriched by Crowds. Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, Volume 8048, 2013, pp. 169-176. Springer Link.

Abstract Image Set in a 3D MDS Visualisation

x3D thumb image
View the image set in a 3D view as an animated GIF (8Mb)

The collective similarity judgements of human observers about the images can, perhaps, be better visualised in 3D “similarity” space. The closer an image is to another, the more similar those images were judged to be by observers. Conversely, the farther apart two images are, the less similar they were judged to be. The same similarity data was used as input to construct the SOM browser.
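As a rough illustration of how such a 3D layout can be produced, the Python sketch below embeds a precomputed dissimilarity matrix into three dimensions with multidimensional scaling (MDS). The data and parameter choices are placeholder assumptions; this is not the project’s own pipeline.

```python
# Illustrative sketch: embedding images in 3D "similarity" space with MDS,
# given a symmetric pairwise dissimilarity matrix D (larger = less similar).
import numpy as np
from sklearn.manifold import MDS

n_images = 100  # placeholder set size
rng = np.random.default_rng(0)
A = rng.random((n_images, n_images))
D = (A + A.T) / 2        # symmetrise the stand-in dissimilarities
np.fill_diagonal(D, 0)   # an image is identical to itself

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)  # one 3D point per image
print(coords[:3])
```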

Perceptually Relevant Image Summaries


collage image

If a crowd were asked to give feedback by selecting images from the browser, the gathered image selections might be so numerous as to overwhelm a designer seeking the feedback. To address this problem we developed an algorithm to generate visual summaries consisting of representative images. The algorithm uses clustering based on the image selections and the human perceptual similarity data previously gathered on the image set (the same data that is used to organise the abstract image browser). The algorithm is described in the CHI’15 paper.
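As a hedged sketch of the general clustering-plus-representatives idea (the precise algorithm is the one described in the CHI’15 paper), one could cluster the selected images on the precomputed perceptual distances and return one medoid image per cluster:

```python
# Illustrative sketch only: summarise crowd image selections by clustering
# them on precomputed perceptual distances and returning one representative
# (medoid) image per cluster. Not the exact CHI'15 algorithm.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def summarise(selected, dist, n_reps=6):
    """selected: indices of images chosen by the crowd;
    dist: symmetric pairwise perceptual distance matrix for the image set."""
    selected = np.asarray(selected)
    sub = dist[np.ix_(selected, selected)]
    condensed = squareform(sub, checks=False)  # condensed form for linkage
    labels = fcluster(linkage(condensed, method="average"),
                      t=n_reps, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        # The medoid is the member with the smallest total distance
        # to the rest of its cluster.
        medoid = members[sub[np.ix_(members, members)].sum(axis=1).argmin()]
        reps.append(int(selected[medoid]))
    return reps
```

A convenient property of this kind of approach is that duplicate selections of a popular image form a tight cluster, pulling the cluster’s medoid towards the images the crowd chose most often.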

The image summarisation also features in the CSCW’15 extended abstract and additionally in the following publications:

B. Kalkreuter and D. Robb. HeadCrowd: visual feedback for design. Nordic Textile Journal, Special edition: Sustainability & Innovation in the Fashion Field, issue 1/2012, ISSN 1404-2487, CTF Publishing, Borås, Sweden, pp. 70-81. Download PDF.

B. Kalkreuter, D. Robb, S. Padilla, M. J. Chantler. Managing Creative Conversations Between Designers and Consumers. Future Scan 2: Collective Voices, Association of Fashion and Textile Courses Conference Proceedings 2013, ISBN 978-1-907382-64-2, pp. 90-99. Download PDF.

Communication Experiment

collage image
To show that communication is possible using the crowdsourced visual feedback method (CVFM), we carried out an experiment in which a group of participants were shown terms and asked to choose images from the abstract image browser to represent those terms. Summaries were made from the gathered images. The raw term image selections and the summaries were then shown to another group of participants, who rated the degree to which they could see the meaning of the terms in the stimuli they were shown. The term weights output by the second group of participants allowed the effectiveness of the communication, and of the summarisation, to be measured.

Title screen of video. Links to video.



A video describes the experiment, which features in the DIS’16 paper in the recent publications. The terms, the raw term image selection lists, and the algorithmically generated summaries from the experiment can be viewed using this viewer web application. The Fb Viewer (V2) is designed to be tablet (and ‘phablet’) friendly. It uses the latest jQuery Mobile beta, v1.3. At the time of writing there is a slight problem with jQM v1.3 and Internet Explorer, so if you are using Internet Explorer you may wish to view the desktop version.

Emotive Image Browser

To allow more figurative communication, a second browser was built. 2000 images were categorized by tagging them with terms from an emotion model. Thus every image has a normalized emotion tag frequency profile (see the example image below) representing the judgments of 20 paid, crowdsourced participants.

Emotion image with profile

Using these profiles, the set was filtered to 204 images covering a subset of emotions suited to design conversation. The emotive images are arranged in a SOM browser defined by the emotion profiles (frequency vectors) in a similar way to the abstract browser (based on similarity vectors).
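A minimal sketch of how such a profile might be computed is shown below, assuming the raw data are lists of emotion terms applied to each image by the 20 raters; the terms here are invented placeholders, not the project’s actual emotion model.

```python
# Illustrative sketch: a normalised emotion tag frequency profile for one
# image, given the emotion terms applied to it by all raters.
from collections import Counter

EMOTION_TERMS = ["joy", "calm", "anger", "sadness"]  # placeholder terms

def profile(tags):
    """tags: emotion terms applied to one image across all raters."""
    counts = Counter(tags)
    total = sum(counts.values()) or 1  # avoid division by zero
    return [counts[t] / total for t in EMOTION_TERMS]

print(profile(["joy", "joy", "calm", "anger"]))  # [0.5, 0.25, 0.25, 0.0]
```

These profile vectors then play the same role for the emotive SOM browser as the similarity vectors do for the abstract browser.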

Try the emotion image browser in its 7×5 stack format, as used in some of the more recent experiments, which had a web app interface.

The emotion image browser features in the CHI’15 paper and the CSCW’15 extended abstract.

Evaluation

The crowdsourced visual feedback method (CVFM) was evaluated in a study with interior design students putting forward their designs for feedback and a group of student participants acting as the crowd giving feedback. The crowd were shown the designs in a random order and asked “How did the design make you feel?”. They were asked to give their feedback in the form of abstract images, emotive images, and text. In the recent publications, the CHI’15 paper reports the designer side of the evaluation and the CSCW’15 extended abstract reports the crowd side. Below is a link to a 30 second video which accompanies the CHI paper.

Further Work on Image-based Emotion Feedback and Cognitive Styles

In 2016 I had an opportunity to investigate the idea, put forward in the CSCW’15 extended abstract, that perhaps the experience of the feedback-givers in the crowd was influenced by their cognitive styles. I recruited 50 internet users, aged 19 to 77, and measured their cognitive styles with Blazhenkova & Kozhevnikov’s (2009) OSIVQ self-report questionnaire. They then did a feedback task rating three feedback formats (abstract images, emotion images, and text). They also completed a post-task survey of mainly open questions. We found that engagement with the emotion images was significantly positively correlated with the degree to which participants were more visual than verbal in cognitive style. This is reported in a DIS’17 paper.
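For illustration only, an analysis along these lines could be run as below; the variable names and data are placeholders, and the actual statistics are those reported in the DIS’17 paper.

```python
# Illustrative sketch: correlating a visual-minus-verbal OSIVQ difference
# score with engagement ratings for the emotion-image format.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 50
visual = rng.uniform(1, 5, n)      # placeholder OSIVQ object-visual means
verbal = rng.uniform(1, 5, n)      # placeholder OSIVQ verbal means
engagement = rng.uniform(1, 7, n)  # placeholder engagement ratings

r, p = pearsonr(visual - verbal, engagement)
print(f"r = {r:.2f}, p = {p:.3f}")
```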

Acknowledgments

Image sets:
The project has established two databases of images for use in visual feedback. The images in the databases have been sourced from Google and Flickr and all have Creative Commons licences. The contributors are acknowledged below.

Acknowledgement of Creative Commons images
The databases can be downloaded from here.

References

Olesya Blazhenkova and Maria Kozhevnikov. 2009. The new object-spatial-verbal cognitive style model: Theory and measurement. Applied Cognitive Psychology, 23(5), 638-663. Search for this paper.

MSc

I did my MSc here at Heriot-Watt. I made a rich web application called The Dendrogrammer as part of my dissertation. Here is a link to a demo of that app.
