{"id":1132,"date":"2012-02-10T13:19:56","date_gmt":"2012-02-10T13:19:56","guid":{"rendered":"http:\/\/www.macs.hw.ac.uk\/texturelab\/"},"modified":"2024-03-01T16:42:14","modified_gmt":"2024-03-01T16:42:14","slug":"david-robb","status":"publish","type":"page","link":"https:\/\/www.macs.hw.ac.uk\/texturelab\/people\/david-robb\/","title":{"rendered":"Texturelab Edinburgh \u2013 People &#8211; David Robb"},"content":{"rendered":"\n<h2>David Robb<br>PhD MSc BSc<\/h2>\n\n\n\n<p><a href=\"https:\/\/scholar.google.co.uk\/citations?user=1W980L0AAAAJ\">My Google Scholar profile<\/a>.<br><a href=\"https:\/\/www.researchgate.net\/profile\/David_Robb3\">My Research Gate profile<\/a>.<br><a href=\"https:\/\/researchportal.hw.ac.uk\/en\/persons\/david-robb\">My HWU Staff page<\/a>.<\/p>\n\n\n\n<p>I am a Research Fellow at Heriot-Watt University. I completed my PhD (Sept 2011 to Feb 2015) here at HWU.<\/p>\n\n\n\n<h2>Current EPSRC\/UKRI funded post at the National Robotarium in Edinburgh<\/h2>\n\n\n\n<p>I am currently working on the DeMILO project (EPSRC Developing Machine Learning-empowered Responsive Manufacture of Industrial Laser Systems), EPSRC HUman-machine teaming for Maritime Environments (HUME), and the UKRI Trustworthy Autonomous Systems Node in Trust at the National Robotarium in Edinburgh.<\/p>\n\n\n\n<h2>Work for the ORCA Hub funded by EPSRC and industry<\/h2>\n\n\n\n<p><img src=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/files\/images\/dave\/ORCA-HUB-300xResRes.jpg\" alt=\"Logo - ORCA Hub funded by EPSRC and industry\"><br>I was Experimental Lead on the Human Robot Interaction theme of the ORCA Hub project (Offshore Robotics for Certification of Assets), funded by EPSRC and industry. The aims of the hub project can be found here:<a href=\"https:\/\/orcahub.org\/\">https:\/\/orcahub.org<\/a>. 
The ORCA Hub team included over 40 collaborating researchers: some here at the Edinburgh Centre for Robotics, which is a collaboration between Heriot-Watt University and The University of Edinburgh; others at Imperial College London, University of Oxford, and University of Liverpool.<br><img src=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/files\/images\/dave\/institutions-res.png\" alt=\"Institutions - ORCA Hub funded by EPSRC and industry\"><\/p>\n\n\n\n<h2>Previously<\/h2>\n\n\n\n<p>Previously I worked on <a href=\"https:\/\/miriamproject.github.io\/\">MIRIAM<\/a> (Multimodal Intelligent inteRactIon for Autonomous systeMs), funded by <a href=\"https:\/\/www.gov.uk\/government\/organisations\/defence-science-and-technology-laboratory\">dstl<\/a> under the <a href=\"https:\/\/www.gov.uk\/government\/organisations\/defence-and-security-accelerator\">Defence and Security Accelerator<\/a> theme, <a href=\"https:\/\/www.gov.uk\/government\/publications\/accelerator-themed-competition-revolutionise-the-human-information-relationship-for-defence\/competition-document-revolutionise-the-human-information-relationship-for-defence\">Revolutionise the human information relationship for Defence<\/a>, with industry partners <a href=\"http:\/\/www.seebyte.com\/\">SeeByte<\/a> and <a href=\"http:\/\/www.tekever.com\/\">Tekever<\/a>.<\/p>\n\n\n\n<h2>Other Research Interests<\/h2>\n\n\n\n<p>My other research areas are:<\/p>\n\n\n\n<ul><li>promoting research collaboration through CSCW<\/li><li>user experience when using visualisations of complex data<\/li><li>image-based feedback.<\/li><\/ul>\n\n\n\n<h2>Development work on well-sorted.org<\/h2>\n\n\n\n<p>I created the additional tools deployed on the www.well-sorted.org website to allow live online input from breakout groups during meetings that have been organised using the original well-sorted pre-meeting idea organisation tools. 
The in-meeting tools allow meeting attendees to enter a record of their breakout group discussions live during the meeting while the meeting organiser can monitor progress of all the groups and later use the discussion record to support presentations of the group discussions. One part of the tools is an interactive visualisation tool for supported networking sessions. An example of the output from this tool can be seen <a href=\"https:\/\/www.well-sorted.org\/explore\/UKHDANResearchChallenges\/\">here<\/a> (scroll down that page). That tool featured in a CSCW demo paper (<a href=\"http:\/\/dx.doi.org\/10.1145\/2818052.2874333\">ACM Digital Library link<\/a>). YouTube tutorial videos illustrating these in-meeting tools can be seen <a href=\"https:\/\/www.youtube.com\/channel\/UCi5QcnHJ7NPs9J_5cKd2BHg\">here<\/a> (see the playlist of videos entitled \u201cWell Sorted In-Meeting Tools\u201d).<\/p>\n\n\n\n<h2>Selected Recent Publications<\/h2>\n\n\n\n<p>D.A. Robb, J. Lopes, M.I. Ahmad, P. E. McKenna, X. Liu, K. Lohan and H. Hastie, 2023. <strong>Seeing Eye to Eye: Trustworthy Embodiment for Task-based Conversational Agents<\/strong>. Frontiers in Robotics and AI, August 2023, Sec. Human-Robot Interaction, Volume 10 &#8211; 2023, <a href=\"https:\/\/doi.org\/10.3389\/frobt.2023.1234767\">https:\/\/doi.org\/10.3389\/frobt.2023.1234767<\/a>.<\/p>\n\n\n\n<p>M. Y. Lim, D. A. Robb, B. W. Wilson, H. Hastie, 2023. <strong>Feeding the Coffee Habit: A Longitudinal Study of a Robo-Barista<\/strong><em>, RO-MAN&#8217;23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. 
<span style=\"color: #00947e; font-weight: bold;\">Awarded Winner IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction. <a href=\"https:\/\/researchportal.hw.ac.uk\/en\/prizes\/ro-man-2023-winner-kros-interdisciplinary-research-award-in-socia\">(Link to Prize)<\/a><\/span><\/em>. (<a href=\"https:\/\/arxiv.org\/abs\/2309.02942\">Author Accepted Manuscript version<\/a>) https:\/\/doi.org\/10.1109\/RO-MAN57019.2023.10309621<\/p>\n\n\n\n<p>M. Moujahid, D.A. Robb, C. Dondrup, H. Hastie, 2023. <strong>Come closer: The Effects of Robot Personality on Human Proxemics Behaviours<\/strong><em>, RO-MAN&#8217;23, In Proceedings of the 32nd IEEE International Conference on Robot and Human Interactive Communication. <span style=\"color: #00947e; font-weight: bold;\">Awarded Honourable Mention as Finalist in IEEE RO-MAN 2023 KROS Interdisciplinary Research Award in Social Human-Robot Interaction (<a href=\"https:\/\/researchportal.hw.ac.uk\/en\/prizes\/ro-man-2023-honourable-mention-as-a-finalist-in-the-kros-interdis\">Link to Prize<\/a>).<\/span><\/em> (<a href=\"https:\/\/arxiv.org\/abs\/2309.02979\">Author Accepted Manuscript version<\/a>). https:\/\/doi.org\/10.1109\/RO-MAN57019.2023.10309333<\/p>\n\n\n\n<p>I. Rakhmatulin, D. Risbridger, D. A. Robb, R. Carter, M. J. Chantler, and M. S. Erden, 2023. <strong>Addressing shortcomings in manual alignment of laser optics via automation tools<\/strong>. <em>CASE&#8217;23<\/em>, In IEEE International Conference on Automation Science and Engineering. https:\/\/doi.org\/10.1109\/CASE56687.2023.10260476<\/p>\n\n\n\n<p>D.A. Robb, X. Liu, H. Hastie, 2023. 
<strong>Explanation Styles for Trustworthy Autonomous Systems<\/strong><em>, AAMAS&#8217;23, In Proceedings of the 22nd International Conference on Autonomous Agents and Multiagent Systems.<\/em> (<a href=\"https:\/\/www.southampton.ac.uk\/~eg\/AAMAS2023\/pdfs\/p2298.pdf\">pdf open access from AAMAS proceedings<\/a>)&nbsp;<a href=\"https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3545946.3598913\">https:\/\/dl.acm.org\/doi\/abs\/10.5555\/3545946.3598913<\/a><\/p>\n\n\n\n<p>Y. Dragostinov, D. Har\u00f0ard\u00f3ttir, P. E. McKenna, D. Robb, B. Nesset, M. I. Ahmad, M. Romeo, M.Y. Lim, C. Yu, Y. Jang, M. Diab, A. Cangelosi, Y. Demiris, H. Hastie, G. Rajendran, 2022. <strong>Preliminary psychometric scale development using the mixed methods Delphi technique<\/strong><em>, Methods in Psychology. p. 100103<\/em> <a title=\"Published version\" href=\"https:\/\/doi.org\/10.1016\/j.metip.2022.100103\">https:\/\/doi.org\/10.1016\/j.metip.2022.100103<\/a>.<\/p>\n\n\n\n<p>M. Y. Lim, J. D. A. Lopes, D. A. Robb, B. W. Wilson, M. Moujahid, E. De Pellegrin, H. Hastie. <strong>We are all Individuals: The Role of Robot Personality and Human Traits in Trustworthy Interaction<\/strong><em>, RO-MAN&#8217;22, Proceedings of the 2022 IEEE International Conference on Robot and Human Interactive Communication. <span style=\"color: #00947e; font-weight: bold;\">Awarded IEEE RO-MAN 2022 KROS Interdisciplinary Research Award in Social Human-Robot Interaction. <\/span><\/em>&nbsp;<a title=\"Published version\" href=\"https:\/\/doi.org\/10.1109\/RO-MAN53752.2022.9900772\">https:\/\/doi.org\/10.1109\/RO-MAN53752.2022.9900772<\/a>.<\/p>\n\n\n\n<p>B. Nesset, D.A. Robb, J. D. A. Lopes, H. Hastie. 
<strong>Transparency in HRI: Trust and Decision Making in the Face of Robot Errors<\/strong><em>, HRI&#8217;21, Proceedings of the 2021 ACM\/IEEE International Conference on Human-Robot Interaction.<\/em> <a title=\"Published in ACM Digital Library\" href=\"https:\/\/doi.org\/10.1145\/3434074.3447183\">https:\/\/doi.org\/10.1145\/3434074.3447183<\/a>.<\/p>\n\n\n\n<p>M.I. Ahmad, I. Keller, D.A. Robb and K. Lohan, 2020. <strong>A framework to estimate cognitive load using physiological data<\/strong>. Personal and Ubiquitous Computing, pp. 1-15. <a href=\"https:\/\/doi.org\/10.1007\/s00779-020-01455-7\">https:\/\/doi.org\/10.1007\/s00779-020-01455-7<\/a><\/p>\n\n\n\n<p>D.A. Robb, M.I. Ahmad, C. Tiseo, S. Aracri, A. C. McConnell, V. Page, C. Dondrup, F. J. Chiyah Garcia, H.-N. Nguyen, \u00c8. Pairet, P. Ard\u00f3n Ram\u00edrez, T. Semwal, H.M. Taylor, L.J. Wilson, D. Lane, H. Hastie, K. Lohan. <strong>Robots in the Danger Zone: Exploring Public Perception through Engagement<\/strong><em>, HRI&#8217;20, Proceedings of the 2020 ACM\/IEEE International Conference on Human-Robot Interaction.<\/em> <a title=\"Published in ACM Digital Library\" href=\"https:\/\/doi.org\/10.1145\/3319502.3374789\">https:\/\/doi.org\/10.1145\/3319502.3374789<\/a>.<\/p>\n\n\n\n<p>H. Hastie, D.A. Robb, J. Lopes, M. Ahmad, P. Le Bras, X. Liu, R.P.A. Petrick, M. J. Chantler. <strong>Challenges in Collaborative HRI for Remote Robot Teams<\/strong><em>, CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems.<\/em> <a title=\"Published at arXiv\" href=\"https:\/\/arxiv.org\/abs\/1905.07379\">https:\/\/arxiv.org\/abs\/1905.07379<\/a>.<\/p>\n\n\n\n<p>F. J. Chiyah Garcia, D.A. Robb, H. Hastie. <strong>Explainable Autonomy through Natural Language<\/strong><em>, ES4CPS2019, the Report of the GI-Dagstuhl Seminar 19023 on Explainable Software for Cyber-Physical Systems, Schloss Dagstuhl, Germany. 
(<a title=\"Workshop website\" href=\"https:\/\/thomas-vogel.github.io\/ES4CPS\/\" target=\"_blank\" rel=\"noopener\">Workshop website<\/a>).<\/em> <a title=\"Published at arXiv\" href=\"https:\/\/arxiv.org\/abs\/1904.11851\">https:\/\/arxiv.org\/abs\/1904.11851<\/a>.<\/p>\n\n\n\n<p>D.A. Robb, J. Lopes, S. Padilla, A. Laskov, F. J. Chiyah Garcia, X. Liu, J.S. Willners, N. Valeyrie, K. S. Lohan, D. Lane, P. Patron, Y. Petillot, M. J. Chantler, H. Hastie. <strong>Exploring Interaction with Remote Autonomous Systems using Conversational Agents<\/strong><em>, DIS\u201919, Proceedings of the 2019 ACM Conference on Designing Interactive Systems.<\/em> <a title=\"Published in ACM Digital Library\" href=\"https:\/\/doi.org\/10.1145\/3322276.3322318\">https:\/\/doi.org\/10.1145\/3322276.3322318<\/a>.<\/p>\n\n\n\n<p>D.A. Robb, J.S. Willners, N. Valeyrie, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie, Y. Petillot. <strong>A Natural Language Interface and Relayed Acoustic Communications for Improved Command and Control of AUVs<\/strong><em>, AUV2018, Proceedings of the 2018 IEEE\/OES Autonomous Underwater Vehicle Symposium.<\/em> <a title=\"pdf at arXiv\" href=\"http:\/\/arxiv.org\/abs\/1811.03566\">pdf at arXiv<\/a>. The definitive version is <a title=\"Definitive version at IEEE Xplore\" href=\"https:\/\/ieeexplore.ieee.org\/document\/8729778\">here at IEEE Xplore<\/a>.<\/p>\n\n\n\n<p>F. J. Chiyah Garcia, D.A. Robb, X. Liu, A. Laskov, P. Patron, H. Hastie. <strong>Explainable Autonomy: A Study of Explanation Styles for Building Clear Mental Models<\/strong><em>, INLG\u201918, Proceedings of the 11th International Conference on Natural Language Generation<\/em>, ACL. 
<a title=\"HWU repository page\" href=\"https:\/\/researchportal.hw.ac.uk\/en\/publications\/explainable-autonomy-a-study-of-explanation-styles-for-building-c\">HWU repository page<\/a>, <a title=\"pdf at HWU repository\" href=\"https:\/\/pureapps2.hw.ac.uk\/ws\/portalfiles\/portal\/24138632\/GarciaINLG2018final.pdf\">pdf at HWU repository<\/a>, <a title=\"pdf at ACL\" href=\"https:\/\/aclweb.org\/anthology\/W18-6511\">pdf at ACL<\/a>.<\/p>\n\n\n\n<p>D.A. Robb, F. J. Chiyah Garcia, A. Laskov, X. Liu, P. Patron, H. Hastie. <strong>Keep Me in the Loop: Increasing Operator Situation Awareness through a Conversational Multimodal Interface<\/strong><em>, ICMI\u201918, Proceedings of the 20th ACM International Conference on Multimodal Interaction.<\/em> <a title=\"Published in ACM Digital Library\" href=\"https:\/\/doi.org\/10.1145\/3242969.3242974\">https:\/\/doi.org\/10.1145\/3242969.3242974<\/a>.<\/p>\n\n\n\n<p>H. Hastie, F. J. Chiyah Garcia, D.A. Robb, P. Patron, A. Laskov. <strong>MIRIAM: a multimodal chat-based interface for autonomous systems<\/strong><em>, ICMI\u201917, Proceedings of the 19th ACM International Conference on Multimodal Interaction, pp. 495-496<\/em>. <a title=\"MIRIAM: a multimodal chat-based interface for autonomous systems\" href=\"https:\/\/dl.acm.org\/authorize?N41677\"><img loading=\"lazy\" src=\"http:\/\/dl.acm.org\/images\/oa.gif\" alt=\"ACM DL Author-ize service\" width=\"25\" height=\"25\" border=\"0\">dx.doi.org\/10.1145\/3136755.3143022<\/a>.<\/p>\n\n\n\n<p>D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. <strong>Image-based Emotion Feedback: How Does the Crowd Feel? And Why?<\/strong> <em>DIS\u201917: Proceedings of the 2017 ACM Conference on Designing Interactive Systems<\/em>. <span style=\"color: #00947e; font-weight: bold;\">Awarded Honourable Mention ACM DIS 2017 Research Papers and Notes<\/span>. 
<a href=\"http:\/\/dx.doi.org\/10.1145\/3064663.3064665\" target=\"_blank\" rel=\"noopener\">dx.doi.org\/10.1145\/3064663.3064665<\/a>.<br>(See also <a href=\"http:\/\/dis2017.org\/awards\/\" target=\"_blank\" rel=\"noopener\">dis2017.org\/awards<\/a>)<\/p>\n\n\n\n<p>D. A. Robb, S. Padilla, T. S. Methven, B. Kalkreuter, M. J. Chantler. <strong>A Picture Paints a Thousand Words but Can it Paint Just One?<\/strong> <em>DIS\u201916: Proceedings of the 2016 ACM Conference on Designing Interactive Systems<\/em>. <a href=\"http:\/\/dx.doi.org\/10.1145\/2901790.2901791\" target=\"_blank\" rel=\"noopener\">dx.doi.org\/10.1145\/2901790.2901791<\/a>.<\/p>\n\n\n\n<p>D. A. Robb, T. S. Methven, S. Padilla, M. J. Chantler. <strong>Well-Connected: Promoting Collaboration by Effective Networking<\/strong> <em>CSCW\u201916: 19th ACM Conference on Computer Supported Cooperative Work &amp; Social Computing Proceedings Companion<\/em> <a title=\"CSCW'16 extended abstract\" href=\"http:\/\/dx.doi.org\/10.1145\/2818052.2874333\">dx.doi.org\/10.1145\/2818052.2874333<\/a>.<br>See an example of this application in action near the bottom of this page <a title=\"UK Health Data Analytics Workshop\" href=\"https:\/\/www.well-sorted.org\/explore\/UKHDANResearchChallenges\/#mixap\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/p>\n\n\n\n<p>D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. <strong>Crowdsourced Feedback With Imagery Rather Than Text: Would Designers Use It?<\/strong> <em>CHI\u201915: 33rd Annual ACM Conference on Human Factors in Computing Systems Proceedings<\/em> <a title=\"CHI'15 paper\" href=\"http:\/\/dx.doi.org\/10.1145\/2702123.2702470\">dx.doi.org\/10.1145\/2702123.2702470<\/a>.<\/p>\n\n\n\n<p>D. A. Robb, S. Padilla, B. Kalkreuter, M. J. Chantler. 
<strong>Moodsource: Enabling Perceptual and Emotional Feedback from Crowds<\/strong> <em>CSCW\u201915: 18th ACM Conference on Computer Supported Cooperative Work &amp; Social Computing Proceedings Companion<\/em> <a title=\"CSCW'15 extended abstract\" href=\"http:\/\/dx.doi.org\/10.1145\/2685553.2702676\">dx.doi.org\/10.1145\/2685553.2702676<\/a>.<\/p>\n\n\n\n<h2>PhD Research<\/h2>\n\n\n\n<p>My PhD work was on the <a title=\"CDI Crowds Project\" href=\"http:\/\/cdi.hw.ac.uk\/crowd-design\/\">CDI &#8220;Head-Crowd&#8221; project<\/a>, an interdisciplinary project in the School of Mathematical and Computer Sciences (MACS) and the School of Textiles and Design (TEX). The focus was on perceptual image browsing, visual communication, visual summary, and interpretation of visual design feedback.<\/p>\n\n\n\n<h2>Visual Crowd Communication<\/h2>\n\n\n\n<h3>Moodsource: Enabling Perceptual and Emotional Feedback from Crowds<\/h3>\n\n\n\n<p><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/diag_cvfm300dpi13cmWide.png\" alt=\"Moodsource: Enabling Perceptual and Emotional Feedback from Crowds\" width=\"1535\" height=\"789\"><br>Part of my work on capturing visual feedback involved building a perceptually relevant image browser populated with a set of abstract images. The images were screen-scraped from Flickr.com (see project acknowledgements below). Human perceptions of the relative similarity of the images were captured using techniques devised by Dr. 
Fraser Halley (see his PhD thesis under &#8220;PhD Thesis&#8221; in the Publications tab at the top of this page).<\/p>\n\n\n\n<h3>Abstract Image Set in the SOM Browser<\/h3>\n\n\n\n<p><a title=\"Abstract images in SOM browser\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb1_stage2\/augSOM\/\" target=\"_blank\" rel=\"noopener\"><br><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/SOMviewres25.png\" alt=\"SOM thumb image\" width=\"272\" height=\"204\"><br><\/a><br>The project abstract image set can be viewed in the Self Organising Map Browser. Images judged by observers as being highly similar to each other are grouped together in stacks. Adjacent stacks contain images judged more similar to each other than stacks farther apart. Note how the observers\u2019 similarity judgements and the SOM construction algorithm have resulted in apparently themed regions in the browser, e.g. architectural at the top right, and highly abstract, colourful at the top left. Click the thumbnail image above to try out an HTML version of the SOM browser. (In our experiment it was built in Flash and deployed to iPads running iOS.)<\/p>\n\n\n\n<p>Try the browser in its <a title=\"Abstract images in 7x5 SOM browser\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb4\/somJqmTestForDemo.php#SomAbstract\" target=\"_blank\" rel=\"noopener\">7&#215;5 stack format<\/a> as used in some of the more recent experiments, which used a web app interface.<\/p>\n\n\n\n<p>The SOM browser features in the <a title=\"latest\" href=\"#latest\">recent publications<\/a> and additionally in the following publications:<\/p>\n\n\n\n<p>S. Padilla, D. Robb, F. Halley, M. J. Chantler, <strong>Browsing Abstract Art by Appearance<\/strong><br><em>Predicting Perceptions: The 3rd International Conference on Appearance, 17-19 April, 2012, Edinburgh, UK. 
Conference Proceedings Publication ISBN: 978-1-4716-6869-2, Pages: 100-103<\/em> <a href=\"http:\/\/www.perceptions.macs.hw.ac.uk\/papers\/Browsing_Abstract_Art_by_Appearance.pdf\">Download PDF<\/a><\/p>\n\n\n\n<p>S. Padilla, F. Halley, D. Robb, M. J. Chantler, <strong>Intuitive Large Image Database Browsing using Perceptual Similarity Enriched by Crowds<\/strong> <em>Computer Analysis of Images and Patterns Lecture Notes in Computer Science Volume 8048, 2013, pp. 169-176<\/em> <a href=\"http:\/\/link.springer.com\/chapter\/10.1007%2F978-3-642-40246-3_21\">Springer Link<\/a><\/p>\n\n\n\n<h3>Abstract Image Set in a 3D MDS Visualisation<\/h3>\n\n\n\n<p><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/X3Dscreenres25.png\" alt=\"x3D thumb image\" width=\"308\" height=\"264\"><br>View the image set in a 3D view as an <a href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/hcex\/NonMetricMDS-Octagaview05.gif\" target=\"_blank\" rel=\"noopener\">animated GIF<\/a> (8 MB)<\/p>\n\n\n\n<p>The collective similarity judgements of human observers about the images can perhaps be better visualised in 3D \u201csimilarity\u201d space. The closer an image is to another, the more similar those images were judged to be by observers. 
Conversely, the farther away two images are, the less similar they were judged to be. The similarity data was used as input to construct the SOM browser.<\/p>\n\n\n\n<h3>Perceptually Relevant Image Summaries<\/h3>\n\n\n\n<p><a title=\"Expand me to see: (1) Full image set (2) images chosen to represent, SMOOTH, sized by popularity (3) one k-means cluster (4) 2D non-overlapping summary\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/summarisation_fig_combo.png\"><br><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/solidCollageThumb.png\" alt=\"collage image\" width=\"308\" height=\"264\"><br><\/a><br>If a crowd were asked to give feedback by selecting images from the browser, the gathered image selections might be so large in number as to overwhelm a designer seeking the feedback. To address this problem we developed an <a title=\"image to illustrate algorithm\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/summarisation_fig_combo.png\">algorithm<\/a> to generate visual summaries consisting of representative images. The algorithm uses clustering based on the image selections and the human perceptual similarity data previously gathered on the image set. (This is the same data used to organise the abstract image browser.) The algorithm is described in the <a title=\"CHI'15 paper\" href=\"http:\/\/dx.doi.org\/10.1145\/2702123.2702470\">CHI&#8217;15 paper<\/a>.<\/p>\n\n\n\n<p>The image summarisation also features in the <a title=\"CSCW'15 extended abstract\" href=\"http:\/\/dx.doi.org\/10.1145\/2685553.2702676\">CSCW&#8217;15 extended abstract<\/a> and additionally in the following publications:<\/p>\n\n\n\n<p>B. Kalkreuter and D. Robb. <strong>HeadCrowd: visual feedback for design<\/strong> in <em>the Nordic Textile Journal, Special edition: Sustainability &amp; Innovation in the Fashion Field, issue 1\/2012, ISSN 1404-2487, CTF Publishing, Bor\u00e5s, Sweden. 
Pages 70-81<\/em> <a href=\"http:\/\/bada.hb.se\/bitstream\/2320\/12351\/1\/NJ2012.pdf\" target=\"_blank\" rel=\"noopener\">Download PDF<\/a><\/p>\n\n\n\n<p>B. Kalkreuter, D. Robb, S. Padilla, M. J. Chantler. <strong>Managing Creative Conversations Between Designers and Consumers<\/strong> <em>Future Scan 2: Collective Voices, Association of Fashion and Textile Courses Conference Proceedings 2013, ISBN 978-1-907382-64-2, Pages 90-99<\/em> <a href=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/files\/publications\/papers\/Papers_PDF\/Kalkreuter_etal_ftc_2013.pdf\" target=\"_blank\" rel=\"noopener\">Download PDF<\/a><\/p>\n\n\n\n<h3>Communication Experiment<\/h3>\n\n\n\n<p><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/files\/images\/dave\/ComExptOverview.png\" alt=\"collage image\" width=\"558\" height=\"391\"><br>To show that communication is possible using the crowdsourced visual feedback method (CVFM), we carried out an experiment in which a group of participants were shown terms and asked to choose images from the abstract image browser to represent those terms. Summaries were made from the gathered images. The raw term image selections and the summaries were shown to another group of participants, who rated the degree to which they could see the meaning of the terms in the stimuli they were shown. The term weights output by the second group of participants allowed the effectiveness of the communication and of the summarisation to be measured.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img src=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/files\/images\/dave\/DIS16paperVideoScreenieRes.png\" alt=\"Title screen of video. 
Links to video.\"\/><\/figure>\n\n\n\n<p>A <a title=\"Video (Duration 3 mins)\" href=\"http:\/\/dl.acm.org\/ft_gateway.cfm?id=2901791&amp;type=mp4&amp;path=%2F2910000%2F2901791%2Fsupp%2Fpn102%2Emp4&amp;supp=1&amp;dwn=1&amp;CFID=634949787&amp;CFTOKEN=92245714\" target=\"_blank\" rel=\"noopener\">video<\/a> describes the experiment, which features in the DIS\u201916 paper in the <a title=\"latest\" href=\"#latest\">recent publications<\/a>. The terms, the raw term image selection lists, and the algorithmically generated summaries from the experiment can be viewed using a viewer web application: <a title=\"Feedback Viewer app\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb1_stage2\/fb1_s2Mobile01NmMDS.php\" target=\"_blank\" rel=\"noopener\">The Fb Viewer (V2)<\/a>, which is designed to be tablet (and &#8216;phablet&#8217;) friendly. It uses the jQuery Mobile v1.3 beta. At the time of writing there is a slight problem with jQM v1.3 and Internet Explorer, so if you are using Internet Explorer you may wish to view the <a title=\"Feedback Viewer app desktop version\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb1_stage2\/fb1_s2New03NonmetricMDS.php\" target=\"_blank\" rel=\"noopener\">desktop version<\/a>.<\/p>\n\n\n\n<h3>Emotive Image Browser<\/h3>\n\n\n\n<p>To allow more figurative communication, a second browser was built, for which 2000 images were categorized by tagging them with terms from an emotion model. 
Thus every image has a normalized emotion tag frequency profile (see image above) representing the judgments of 20 paid crowdsourced participants.<br><a title=\"Emotive image set\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb3\/eciRes\/Som-balByTermTop13fltrV3At35Pop204-ClstrLocn7x5-eciQc3pt1FullE2kMTTOFXDExRjctd\/\"><br><img loading=\"lazy\" src=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/tlabDR\/imageID103EmotionProfile.png\" alt=\"Emotion image with profile\" width=\"275\" height=\"286\"><br><\/a><br>Using these profiles, the set was filtered to 204 images covering a subset of emotions suited to design conversation. The emotive images are arranged in a SOM browser defined by the emotion profiles (frequency vectors) in a similar way to the abstract browser (based on similarity vectors).<\/p>\n\n\n\n<p>Try the emotion image browser in its <a title=\"Emotive images in 7x5 SOM browser\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/fb4\/somJqmTestForDemo.php#SomEmotive\" target=\"_blank\" rel=\"noopener\">7&#215;5 stack format<\/a> as used in some of the more recent experiments, which used a web app interface.<\/p>\n\n\n\n<p>The emotion image browser features in the <a title=\"CHI'15 paper\" href=\"http:\/\/dx.doi.org\/10.1145\/2702123.2702470\">CHI&#8217;15 paper<\/a> and the <a title=\"CSCW'15 extended abstract\" href=\"http:\/\/dx.doi.org\/10.1145\/2685553.2702676\">CSCW&#8217;15 extended abstract<\/a>.<\/p>\n\n\n\n<h3>Evaluation<\/h3>\n\n\n\n<p>The crowdsourced visual feedback method (CVFM) was evaluated in a study with interior design students putting forward their designs for feedback and a group of student participants acting as the crowd giving feedback. The crowd were shown the designs in a random order and asked &#8220;How did the design make you feel?&#8221;. They were asked to give their feedback in the form of abstract images, emotive images and text. 
In the <strong><a title=\"latest\" href=\"#latest\">recent publications<\/a><\/strong>, the CHI&#8217;15 paper reports the designer side of the evaluation and the CSCW&#8217;15 extended abstract reports the crowd side. Below is a 30-second video which accompanies the CHI paper.<br><iframe loading=\"lazy\" title=\"Video. Duration 30 secs.\" src=\"https:\/\/www.youtube.com\/embed\/8PIfMwpx2Qc?\" width=\"420\" height=\"345\"><br \/><\/iframe><\/p>\n\n\n\n<h2>Further Work on Image-based Emotion Feedback and Cognitive Styles<\/h2>\n\n\n\n<p>In 2016 I had an opportunity to investigate the idea put forward in the <a title=\"CSCW'15 extended abstract\" href=\"http:\/\/dx.doi.org\/10.1145\/2685553.2702676\">CSCW&#8217;15 extended abstract<\/a> that perhaps the experience of the feedback-givers in the crowd was influenced by their cognitive styles. I recruited 50 internet users aged 19 to 77 and measured their cognitive styles with <a href=\"#Blazhenkova &amp; Kozhevnikov\">Blazhenkova &amp; Kozhevnikov&#8217;s (2009) OSIVQ<\/a> self-report questionnaire. They then did a feedback task rating three feedback formats (abstract images, emotion images and text). They also completed a post-task survey of mainly open questions. We found that engagement with the emotion images was significantly positively correlated with the degree to which participants were more visual than verbal in cognitive style. This is reported in a <a title=\"DIS'17 paper\" href=\"http:\/\/dx.doi.org\/10.1145\/3064663.3064665\">DIS&#8217;17 paper<\/a>.<\/p>\n\n\n\n<h2>Acknowledgments<\/h2>\n\n\n\n<p>Image sets:<br>The project has established two databases of images for use in visual feedback. The images in the databases have been sourced from Google and Flickr and all have Creative Commons licences. 
These contributors are acknowledged below.<\/p>\n\n\n\n<p><a title=\"Acknowledgement of Creative Commons images\" href=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/ack\/\">Acknowledgement of Creative Commons images<\/a><br><a title=\"Databases page\" href=\"http:\/\/www.macs.hw.ac.uk\/texturelab\/resources\/databases\/\">The databases can be downloaded from here<\/a><\/p>\n\n\n\n<h2>References<\/h2>\n\n\n\n<p><a title=\"Blazhenkova &amp; Kozhevnikov 2009 ref.\" name=\"Blazhenkova &amp; Kozhevnikov\"><\/a>Olesya Blazhenkova and Maria Kozhevnikov. 2009. The new object-spatial-verbal cognitive style model: Theory and measurement. Applied Cognitive Psychology, 23(5), 638-663. <a title=\"google search\" href=\"https:\/\/scholar.google.co.uk\/scholar?q=Olesya+Blazhenkova+and+Maria+Kozhevnikov.+2009.The+new+object-spatial-verbal+cognitive+style+model%3A+Theory+and+measurement&amp;btnG=&amp;as_sdt=1%2C5&amp;as_sdtp=\" target=\"_blank\" rel=\"noopener\">Search for this paper<\/a><\/p>\n\n\n\n<h2>MSc<\/h2>\n\n\n\n<p>I did my MSc here at Heriot-Watt. I made a rich web application called <em>The Dendrogrammer<\/em> as part of my dissertation. Here is a <a title=\"Dendrogrammer demo\" href=\"http:\/\/www.macs.hw.ac.uk\/~dar14\/project\/dendrogrammer\/version1_0_5\/dendrogrammer.php?read=goodDemo.dat\" target=\"_blank\" rel=\"noopener\">link to a demo<\/a> of that app.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>David RobbPhD MSc BSc My Google Scholar profile.My Research Gate profile.My HWU Staff page. I am a Research Fellow at Heriot-Watt University. I completed my PhD (Sept 2011 to Feb 2015) here at HWU. 
Current EPSRC\/UKRI funded post at the &hellip; <a href=\"https:\/\/www.macs.hw.ac.uk\/texturelab\/people\/david-robb\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":12,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/pages\/1132"}],"collection":[{"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/comments?post=1132"}],"version-history":[{"count":347,"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/pages\/1132\/revisions"}],"predecessor-version":[{"id":2385,"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/pages\/1132\/revisions\/2385"}],"up":[{"embeddable":true,"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/pages\/12"}],"wp:attachment":[{"href":"https:\/\/www.macs.hw.ac.uk\/texturelab\/wp-json\/wp\/v2\/media?parent=1132"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}