I am a Research Fellow at Heriot-Watt University. I completed my PhD (Sept 2011 to Feb 2015) here at HWU.

My Google Scholar profile.
My Research Gate profile.
My HWU Staff page.

Current EPSRC/UKRI funded post at the National Robotarium in Edinburgh

I am currently working on the DeMILO project (EPSRC Developing Machine Learning-empowered Responsive Manufacture of Industrial Laser Systems), the EPSRC HUman-machine teaming for Maritime Environments project (HUME), and the UKRI Trustworthy Autonomous Systems Node in Trust at the National Robotarium in Edinburgh.
Work for the ORCA Hub funded by EPSRC and industry
I was Experimental Lead on the Human Robot Interaction theme of the ORCA Hub project (Offshore Robotics for Certification of Assets), funded by EPSRC and industry. The aims of the hub project can be found here: https://orcahub.org. The ORCA Hub team included over 40 collaborating researchers; some here at the Edinburgh Centre for Robotics, which is a collaboration between Heriot-Watt University and The University of Edinburgh; others at Imperial College London, University of Oxford, and University of Liverpool.
Previously

Previously I was working on MIRIAM (Multimodal Intelligent inteRactIon for Autonomous systeMs), funded by dstl under the Defence and Security Accelerator theme "Revolutionise the human information relationship for Defence", with industry partners SeeByte and Tekever.
Other Research Interests

My other research areas are:
Development work on well-sorted.org

I created the additional tools deployed on the www.well-sorted.org website to allow live online input from breakout groups during meetings that have been organised using the original well-sorted pre-meeting idea organisation tools. The in-meeting tools allow meeting attendees to enter a record of their breakout group discussions live during the meeting, while the meeting organiser can monitor the progress of all the groups and later use the discussion record to support presentations of the group discussions. One part of the tools is an interactive supported networking session visualisation tool. An example of the output from this tool can be seen here (scroll down that page). That tool featured in a CSCW demo paper (ACM Digital Library link). YouTube tutorial videos illustrating these in-meeting tools can be seen here (see the playlist of videos entitled “Well Sorted In-Meeting Tools”).
Selected Recent Publications

D.A. Robb, J. Lopes, M.I. Ahmad, P. E. McKenna, X. Liu, K. Lohan and H. Hastie, 2023. Seeing Eye to Eye: Trustworthy Embodiment for Task-based Conversational Agents. Frontiers in Robotics and AI, August 2023, Sec. Human-Robot Interaction, Volume 10 – 2023. https://doi.org/10.3389/frobt.2023.1234767

M. Y. Lim, D. A. Robb, B. W. Wilson, H. Hastie, 2023. Feeding the Coffee Habit: A Longitudinal Study of a Robo-Barista. RO-MAN'23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. Awarded Winner of the IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction (Link to Prize). (Author Accepted Manuscript version.) https://doi.org/10.1109/RO-MAN57019.2023.10309621

M. Moujahid, D.A. Robb, C. Dondrup, H. Hastie, 2023. Come Closer: The Effects of Robot Personality on Human Proxemics Behaviours. RO-MAN'23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. Awarded Honourable Mention as Finalist in the IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction (Link to Prize). (Author Accepted Manuscript version.) https://doi.org/10.1109/RO-MAN57019.2023.10309333

I. Rakhmatulin, D. Risbridger, D. A. Robb, R. Carter, M. J. Chantler, and M. S. Erden, 2023. Addressing shortcomings in manual alignment of laser optics via automation tools. CASE'23, In IEEE International Conference on Automation Science and Engineering. https://doi.org/10.1109/CASE56687.2023.10260476

D.A. Robb, X. Liu, H. Hastie, 2023. Explanation Styles for Trustworthy Autonomous Systems. AAMAS'23, In Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems. (pdf open access from the AAMAS proceedings.) https://dl.acm.org/doi/abs/10.5555/3545946.3598913

Y. Dragostinov, D. Harðardóttir, P. E. McKenna, D. Robb, B. Nesset, M. I. Ahmad, M. Romeo, M.Y. Lim, C. Yu, Y. Jang, M. Diab, A. Cangelosi, Y. Demiris, H. Hastie, G. Rajendran, 2022. Preliminary psychometric scale development using the mixed methods Delphi technique. Methods in Psychology, p.100103. https://doi.org/10.1016/j.metip.2022.100103

M. Y. Lim, J. D. A. Lopes, D. A. Robb, B. W. Wilson, M. Moujahid, E. De Pellegrin, H. Hastie. We are all Individuals: The Role of Robot Personality and Human Traits in Trustworthy Interaction. RO-MAN'22, Proceedings of the 2022 IEEE International Conference on Robot and Human Interactive Communication. Awarded the IEEE RO-MAN 2022 KROS Interdisciplinary Research Award in Social Human-Robot Interaction. https://doi.org/10.1109/RO-MAN53752.2022.9900772

B. Nesset, D.A. Robb, J. D. A. Lopes, H. Hastie. Transparency in HRI: Trust and Decision Making in the Face of Robot Errors. HRI'21, Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/3434074.3447183

M.I. Ahmad, I. Keller, D. A. Robb and K. Lohan, 2020. A framework to estimate cognitive load using physiological data. Personal and Ubiquitous Computing, 1-15. https://doi.org/10.1007/s00779-020-01455-7

D.A. Robb, M.I. Ahmad, C. Tiseo, S. Aracri, A. C. McConnell, V. Page, C. Dondrup, F. J. Chiyah Garcia, H.-N. Nguyen, È. Pairet, P. Ardón Ramírez, T. Semwal, H.M. Taylor, L.J. Wilson, D. Lane, H. Hastie, K. Lohan. Robots in the Danger Zone: Exploring Public Perception through Engagement. HRI'20, Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. https://doi.org/10.1145/3319502.3374789

H. Hastie, D.A. Robb, J. Lopes, M. Ahmad, P. Le Bras, X. Liu, R.P.A. Petrick, M. J. Chantler. Challenges in Collaborative HRI for Remote Robot Teams. CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems. https://arxiv.org/abs/1905.07379

F. J. Chiyah Garcia, D.A. Robb, H. Hastie. Explainable Autonomy through Natural Language. ES4CPS 2019, Report of the GI-Dagstuhl Seminar 19023 on Explainable Software for Cyber-Physical Systems, Schloss Dagstuhl, Germany. (Workshop website.) https://arxiv.org/abs/1904.11851

D.A. Robb, J. Lopes, S. Padilla, A. Laskov, F. J. Chiyah Garcia, X. Liu, J.S. Willners, N. Valeyrie, K. S. Lohan, D. Lane, P. Patron, Y. Petillot, M. J. Chantler, H. Hastie. Exploring Interaction with Remote Autonomous Systems using Conversational Agents. DIS'19, Proceedings of the 2019 ACM Conference on Designing Interactive Systems. https://doi.org/10.1145/3322276.3322318

D.A. Robb, J.S. Willners, N. Valeyrie, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie, Y. Petillot. A Natural Language Interface and Relayed Acoustic Communications for Improved Command and Control of AUVs. AUV 2018, Proceedings of the 2018 IEEE/OES Autonomous Underwater Vehicle Symposium. (pdf at arXiv; the definitive version is at IEEE Xplore.)

F. J. Chiyah Garcia, D.A. Robb, X. Liu, A. Laskov, P. Patron, H. Hastie. Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models. INLG'18, Proceedings of the 11th International Conference on Natural Language Generation, ACL. (HWU repository page; pdf at the HWU repository; pdf at ACL.)

D.A. Robb, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie. Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface. ICMI'18, Proceedings of the 20th ACM International Conference on Multimodal Interaction. https://doi.org/10.1145/3242969.3242974

H. Hastie, F. J. Chiyah Garcia, D.A. Robb, P. Patron, A. Laskov. MIRIAM: a multimodal chat-based interface for autonomous systems. ICMI'17, Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 495-496. dx.doi.org/10.1145/3136755.3143022
D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. Image-based Emotion Feedback: How Does the Crowd Feel? And Why? DIS'17: Proceedings of the 2017 ACM Conference on Designing Interactive Systems. Awarded an Honourable Mention, ACM DIS 2017 Research Papers and Notes. dx.doi.org/10.1145/3064663.3064665
(See also dis2017.org/awards.)
D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. A Picture Paints a Thousand Words but Can it Paint Just One? DIS'16: Proceedings of the 2016 ACM Conference on Designing Interactive Systems. dx.doi.org/10.1145/2901790.2901791

D. A. Robb, T. S. Methven, S. Padilla, M. J. Chantler. Well-Connected: Promoting Collaboration by Effective Networking. CSCW'16: 19th ACM Conference on Computer Supported Cooperative Work & Social Computing Proceedings Companion. dx.doi.org/10.1145/2818052.2874333

D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. Crowdsourced Feedback With Imagery Rather Than Text: Would Designers Use It? CHI'15: 33rd Annual ACM Conference on Human Factors in Computing Systems Proceedings. dx.doi.org/10.1145/2702123.2702470

D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. Moodsource: Enabling Perceptual and Emotional Feedback from Crowds. CSCW'15: 18th ACM Conference on Computer Supported Cooperative Work & Social Computing Proceedings Companion. dx.doi.org/10.1145/2685553.2702676

See an example of this application in action near the bottom of this page here.
PhD Research

My PhD work was on the CDI “Head-Crowd” project, an interdisciplinary project in the Schools of Mathematical and Computer Sciences (MACS) and Textiles and Design (TEX). The focus was on perceptual image browsing, visual communication, visual summary, and interpretation of visual design feedback.
Visual Crowd Communication
Moodsource: Enabling Perceptual and Emotional Feedback from Crowds
Part of my work on capturing visual feedback has involved building a perceptually relevant image browser populated with a set of abstract images. The images were screen-scraped from Flickr.com (see the project acknowledgements below). Human perceptions of the relative similarity of the images were captured using techniques devised by Dr Fraser Halley (see his PhD thesis under “PhD Thesis” in the Publications tab at the top of this page).

Abstract Image Set in the SOM Browser
The project abstract image set can be viewed in the Self Organising Map (SOM) browser. Images judged by observers as being highly similar to each other are grouped together in stacks. Adjacent stacks contain images judged more similar to each other than stacks farther apart. Note how the observers’ similarity judgements and the SOM construction algorithm have resulted in apparently themed regions in the browser, e.g. architectural at the top right, and highly abstract and colourful at the top left. Click the thumbnail image above to try out an HTML version of the SOM browser. (In our experiment it was built in Flash and deployed to iOS on iPads.)

Try the browser in its 7×5 stack format as used in some of the more recent experiments, which used a web app interface.
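Purely as an illustration of the idea behind the layout, the sketch below trains a small self-organising map on feature vectors derived from similarity data and assigns each image to its best-matching grid cell, i.e. its stack. The function names, grid size and placeholder data are hypothetical; the deployed browser was built in Flash, and its exact features and training schedule are not reproduced here.

```python
import numpy as np

def train_som(features, rows=5, cols=7, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a tiny self-organising map so perceptually similar images
    end up in the same or neighbouring grid cells ("stacks")."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    # Node weights start as randomly chosen image vectors; grid coords define neighbourhoods.
    weights = features[rng.choice(n, rows * cols)].astype(float)
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)        # learning-rate decay
        sigma = sigma0 * np.exp(-t / iters)  # neighbourhood-radius decay
        x = features[rng.integers(n)]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
        dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))                 # neighbourhood function
        weights += lr * h[:, None] * (x - weights)
    return weights

def assign_stacks(features, weights):
    """Map every image to its best-matching node, i.e. its stack in the browser."""
    d = np.linalg.norm(features[:, None, :] - weights[None, :, :], axis=2)
    return d.argmin(axis=1)

# Random stand-in data; real input would be vectors derived from the
# crowdsourced pairwise similarity judgements.
features = np.random.default_rng(1).random((200, 10))
weights = train_som(features)
stacks = assign_stacks(features, weights)
```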
The SOM browser features in the recent publications above and additionally in the following publications:

S. Padilla, D. Robb, F. Halley, M. J. Chantler. Browsing Abstract Art by Appearance. Predicting Perceptions: The 3rd International Conference on Appearance, 17-19 April 2012, Edinburgh, UK. Conference Proceedings, ISBN 978-1-4716-6869-2, pages 100-103. (Download PDF.)

S. Padilla, F. Halley, D. Robb, M. J. Chantler. Intuitive Large Image Database Browsing using Perceptual Similarity Enriched by Crowds. Computer Analysis of Images and Patterns, Lecture Notes in Computer Science, Volume 8048, 2013, pp 169-176. (Springer Link.)

Abstract Image Set in a 3D MDS Visualisation
The collective similarity judgements of human observers about the images can, perhaps, be better visualised in 3D “similarity” space. The closer an image is to another, the more similar those images were judged to be by observers. Conversely, the farther away two images are, the less similar they were judged to be. The same similarity data was used as input to construct the SOM browser.

View the image set in a 3D view as an animated GIF (8 MB).
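As a rough sketch of how such a 3D view could be produced, the snippet below embeds a precomputed dissimilarity matrix (assumed here to come from the aggregated human judgements, e.g. 1 minus normalised similarity) into three dimensions using metric multidimensional scaling from scikit-learn. The variable names and the placeholder matrix are illustrative, not the project’s actual pipeline.

```python
import numpy as np
from sklearn.manifold import MDS  # metric MDS on a precomputed dissimilarity matrix

# Placeholder: a symmetric dissimilarity matrix with zeros on the diagonal,
# standing in for the aggregated human similarity judgements.
rng = np.random.default_rng(0)
n_images = 50
m = rng.random((n_images, n_images))
dissim = (m + m.T) / 2
np.fill_diagonal(dissim, 0.0)

# Embed the images in 3D so that judged-similar images sit close together.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # shape (n_images, 3): one x, y, z point per image
```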
Perceptually Relevant Image Summaries
If a crowd were asked to give feedback by selecting images from the browser, the gathered image selections might be so large in number as to overwhelm a designer seeking the feedback. To address this problem we developed an algorithm to generate visual summaries consisting of representative images. The algorithm uses clustering based on the image selections and the human perceptual similarity data previously gathered on the image set (the same data that is used to organise the abstract image browser). The algorithm is described in the CHI'15 paper.
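The CHI'15 paper gives the actual algorithm; as an illustration of the general approach only, the sketch below clusters a crowd’s image selections with a simple k-medoids-style procedure over precomputed perceptual dissimilarities and returns one representative (medoid) image per cluster. All names, parameters and data here are placeholders rather than the published method.

```python
import numpy as np

def summarise_selections(selected_ids, dissim, k=5, iters=20, seed=0):
    """Pick k representative images from a crowd's selections.

    selected_ids : indices of images chosen by the crowd (with repeats,
                   so popular images appear more often).
    dissim       : precomputed perceptual dissimilarity matrix over all images.
    Returns the image ids chosen as cluster medoids.
    """
    rng = np.random.default_rng(seed)
    ids = np.asarray(selected_ids)
    d = dissim[np.ix_(ids, ids)]             # distances between the selected images
    medoids = rng.choice(len(ids), k, replace=False)
    for _ in range(iters):
        labels = d[:, medoids].argmin(axis=1)  # assign each selection to its nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                within = d[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[within.argmin()]  # most central member
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return ids[medoids]

# Placeholder data: 300 crowd selections over a 200-image set.
rng = np.random.default_rng(1)
m = rng.random((200, 200))
dissim = (m + m.T) / 2
np.fill_diagonal(dissim, 0.0)
selections = rng.integers(0, 200, size=300)
summary_images = summarise_selections(selections, dissim, k=5)
```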
The image summarisation also features in the CSCW'15 extended abstract and additionally in the following publications:

B. Kalkreuter and D. Robb. HeadCrowd: visual feedback for design. Nordic Textile Journal, Special edition: Sustainability & Innovation in the Fashion Field, issue 1/2012, ISSN 1404-2487, CTF Publishing, Borås, Sweden. Pages 70-81. (Download PDF.)

B. Kalkreuter, D. Robb, S. Padilla, M. J. Chantler. Managing Creative Conversations Between Designers and Consumers. Future Scan 2: Collective Voices, Association of Fashion and Textile Courses Conference Proceedings 2013, ISBN 978-1-907382-64-2, Pages 90-99. (Download PDF.)

Communication Experiment

To show that communication is possible using the crowdsourced visual feedback method (CVFM), we carried out an experiment in which a group of participants were shown terms and asked to choose images from the abstract image browser to represent those terms. Summaries were made from the gathered images. The raw term image selections and the summaries were shown to another group of participants, who rated the degree to which they could see the meaning of the terms in the stimuli they were shown. The term weights output by the second group of participants allowed the effectiveness of the communication, and of the summarisation, to be measured.

A video describes the experiment, which features in the DIS'16 paper in the recent publications. The terms, the raw term image selection lists, and the algorithmically generated summaries from the experiment can be viewed using this viewer web application: the Fb Viewer (V2) is designed to be tablet (and ‘fablet’) friendly. It uses the latest jQuery Mobile beta, v1.3. At the time of writing there is a slight problem with jQM v1.3 and Internet Explorer, so if you are using Internet Explorer you may wish to view the desktop version.
Emotive Image Browser

To allow more figurative communication, a second browser was built. 2,000 images were categorized by tagging them with terms from an emotion model. Thus every image has a normalized emotion tag frequency profile (see the image above) representing the judgments of 20 paid, crowdsourced participants.
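As a small illustration of what such a profile is, the sketch below turns one image’s raw tags into a normalised emotion tag frequency vector. The emotion terms listed are placeholders, not the emotion model actually used in the study.

```python
from collections import Counter

# Illustrative emotion vocabulary; the real study used terms from an
# established emotion model and 20 paid crowdsourced taggers per image.
EMOTION_TERMS = ["joy", "calm", "surprise", "sadness", "anger", "fear"]

def emotion_profile(tags):
    """Turn one image's raw tags into a normalised emotion tag
    frequency profile (values sum to 1)."""
    counts = Counter(t for t in tags if t in EMOTION_TERMS)
    total = sum(counts.values())
    return {term: (counts[term] / total if total else 0.0) for term in EMOTION_TERMS}

# Example: tags collected from 20 taggers for a single image.
tags_for_image = ["joy"] * 9 + ["calm"] * 7 + ["surprise"] * 4
profile = emotion_profile(tags_for_image)  # e.g. {"joy": 0.45, "calm": 0.35, ...}
```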
Using these profiles, the set was filtered to 204 images covering a subset of emotions suited to design conversation. The emotive images are arranged in a SOM browser defined by the emotion profiles (frequency vectors) in a similar way to the abstract browser (which is based on similarity vectors).

Try the emotion image browser in its 7×5 stack format as used in some of the more recent experiments, which used a web app interface.

The emotive image browser features in the CHI'15 paper and the CSCW'15 extended abstract.

Evaluation

The crowdsourced visual feedback method (CVFM) was evaluated in a study with interior design students putting forward their designs for feedback and a group of student participants acting as the crowd giving feedback. The crowd were shown the designs in a random order and asked “How did the design make you feel?”. They were asked to give their feedback in the form of abstract images, emotive images and text. In the recent publications, the CHI'15 paper reports the designer side of the evaluation and the CSCW'15 extended abstract reports the crowd side. Below is a link to a 30 second video which accompanies the CHI paper.