
Unique Emotion Design for Autonomous Delivery Robots

  • Writer: melek akın
  • Jan 28, 2022
  • 11 min read

Updated: Feb 14


This Speculative Design project aimed to explore autonomous delivery robots’ specific perception of the world through various factors: their purpose, intent, state, mood, personality, attention, responsiveness, intelligence, and capabilities, and, through this understanding, to design unique robot emotions that enable them to communicate their unique needs.


Brief Explanation


Delivery robots and other similar systems will have to find their way among us as they navigate the streets to fulfil their function. And yet, such systems still struggle to fit into these social spaces. For this Speculative Design project, I argue that emotions, given their role as a fundamental communicative tool, could be imbued into robots, and that it would be easier to be considerate of robots that are intrinsically driven by emotions. More specifically, since robots have their own use-case-specific concerns and their own mode of understanding the world, I argue that robots should be imbued with their own robot-specific emotions.


To explore these ideas, I apply multiple design exploration techniques and methodologies to the design of emotions for a delivery robot.


I aligned the communicative role of emotions with the current communicative struggles faced by robots in the context of social navigation. Based on this, I applied Imaginary Introspection and Experience Prototyping to identify the specific concerns and needs of autonomous delivery robots on sidewalks. I then prototyped four emotions from this perspective in terms of their appraisal structure, with a focus on their appraisal and action tendency. Lastly, I created videos of a robot platform driven by these emotions in a range of scenarios; a small online survey (40 respondents) and reflexive thematic analysis suggest that people can indeed distinguish the appraisal structure of the designed emotions from that of similar human emotions.


These design explorations demonstrate that it is possible to design new robot-specific emotions into a robot. In addition, they provide one pathway towards designing robot-specific emotions in context, so that, perhaps, one day, robot concerns can be felt through their own set of emotions.





 


Emotional Response


Understanding concerns is of considerable value when studying emotions. Emotions are the prime material in the exploration of an individual’s concerns: what matters to individuals shapes their emotions, their internal sensations, and their behavioural reactions.





Robotics


Robots are increasingly moving from the laboratory out to the field. Technological developments open up new horizons for vastly extended robotics and automation applications in production and beyond the factory, where robots will need various kinds of interaction to communicate with people. Many different types of robots are now deployed across a variety of fields. More specifically, they are entering further into the social field of humans, inhabiting offices, hospitals, museums, houses and, increasingly, streets.


The robotic future takes shape as something that is at the same time inescapable and yet somewhat intangible, revolving around two opposing visions: robots as socially meaningful machines that will integrate into society, and robots as a threat that disrupts foundational beliefs about machines as ‘human tools’, shaking society to its core.


Hence, the complex nature of the coexistence scenarios emerging from the diffusion of these artefacts points out the need to systematically envision how this near future might look. In this regard, the design discipline can play a proactive role by providing methods and tools for supporting speculation about possible futures, fostering reflections on potential structures that might support these futures, and enabling a more conscious shaping of intelligent and autonomous artefacts.



Designing Human-Robot Interactions


Any robot inherently displays an interplay between its surface appearance and its physical motion. The way a robot looks sets the context for the interaction, framing expectations, triggering emotional responses, and evoking interaction affordances. Accompanying the robot’s appearance, its movement is also critical for conveying more dynamic information: the robot’s movement in space can support action coordination, communicate intentions, display internal states, and have an emotional impact.


Additionally, it is equally essential to consider unintended forms of interactions, which involve whoever happens to share the environment with the robot, such as passers-by. This may help prevent negative interactions typical of the urban environment, such as vandalism, which can compromise robot safety, especially in autonomous robots. Acts of vandalism are often targeted at street furniture (e.g. benches, street lamps, flower beds) or other objects present in the urban environment, such as cars and buildings.





A role-playing online card game to understand delivery robot concerns


The goal of this game is to interpret the autonomous delivery robot’s experience and its context. In order to uncover the values of delivery robots, it is essential to build upon participants’ personal interpretations of imaginary experiences from a first-person perspective. Therefore, an interactive role-playing game was initiated. The questions posed during the interviews mainly concentrated on exploring the “feeling” and “thinking” of people within their experience, aiming to sensitise the participants and to gain a shared understanding of what they perceive being an autonomous delivery robot is like.


A semi-structured interviewing method was chosen for conducting the study, as it allows for asking open-ended questions, gathering in-depth accounts of people’s experiences, and freely exploring emerging topics of interest during the process.


A set of card decks covering three different steps of the delivery was prepared to help and stimulate participants during the process. The design of the tools was partly based on previous research. The three steps were as follows: loading the packages into the delivery robot, the journey on the sidewalks, and finally delivering the package to the end customer.



Participants:

Seventeen participants took part; a differentiated educational and cultural background was a critical aspect in inviting them for the final interview. The selected participants could articulate, verbally or in more creative forms, the experience as it is lived, and felt comfortable sharing personal and sometimes even intimate information about their lives and inner worlds. The participants were briefly pre-trained to perform imaginary introspection effectively.


Procedure:

Each interview consisted of two to three participants and a facilitator. The workshop was organised for a maximum of two hours, including an introductory presentation on the theme and practical activities. Each session involved approximately two hours of discussion across ten different topics. Throughout the game, the participants were asked to imagine what the experience of the robot might be like, and thereby how the embedded values may change their meaning or importance in the context of delivery.


The workshop sessions resulted in three emerging topics, from which we formed three hypothetical what-if questions. In this phase, the designer no longer attempts to generate answers but instead aims to formulate good questions. These what-if questions were intended to foster the subsequent speculative design phases. Moreover, all of the emerging topics pointed out that the most crucial concern of autonomous delivery robots is to deliver the packages they carry on time and safely. This specific concern is what makes delivery robots different from humans or other robots. Therefore, during the next design phases, we used this need as the basis for the unique robot emotions and continued designing them accordingly.




Speculative Design Approach for Designing Unique Emotions


Speculative design is a discipline that invites designers to imagine how things could be. Design speculations can act as a catalyst for collectively redefining our relationship to reality. The act of speculating is based on imagination, the ability to imagine other worlds and alternatives, to create provocations, to raise questions, and to pursue innovations and explorations.




Our speculative work gave us key insights into the necessary appraisal structures, but still left open which action tendencies might be appropriate for this. But how can we investigate the emphatic meaning (or 'load') of a robot's action in context?



As our starting point to answer this question, we took a small remote-controlled robot to the streets and explored movements and behaviours while driving on sidewalks, with the researcher acting as operator. This approach was much inspired by the ‘dreamer’ step of the Disney Design method. It enabled us to directly observe how different actions ‘felt’ in context, which was fundamental for finding suitable action tendencies. In addition, the reactions of other pedestrians, ranging from shock, to worries about privacy, to kids trying to play with the robot, further grounded these insights.




While many other action tendencies would have been possible, these observations informed the action tendencies we chose.




After each walk, I wrote a robot diary, a fictional story from the point of view of the robot to highlight and examine circumstances relevant from the robot's perspective.

It is crucial to consider that navigating the streets as a robot is different from doing so with a human body. As humans, we have legs, arms, and other crucial parts of the body that support our movement and balance, allowing us to stand up, walk, run, jump, reach or lean.


Emotions are very individualistic, and their expressions are variable. Since emotions are not always expressed in the same way, these unique robot emotions can also be modelled in various ways. For this reason, we began to develop the behaviour of these emotions, using different tools and methods to generate ideas on how the unique emotions we created could be experienced from different angles: the third-person perspective, which represents how people from the outside see the emotion, and the first-person perspective, which represents how the emotion makes the robot feel internally.
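To make the structure of these emotions more concrete, the minimal sketch below shows one possible way to encode an emotion in terms of its appraisal and action tendency, the two components this project focuses on. The situation attributes, thresholds and behaviour descriptions are illustrative assumptions rather than the project's actual implementation; Loniformi is modelled here as arising when pedestrians crowd or block the delivery goal, and Puffalope when the delivery is progressing smoothly.

```python
# Hypothetical sketch: a robot-specific emotion as an appraisal (a check of the
# situation against the robot's core concern of delivering on time and safely)
# plus an action tendency (what the emotion pushes the robot to do).
# Names, attributes and thresholds are illustrative, not from the project.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Situation:
    pedestrian_density: float   # assumed metric: people per metre of sidewalk ahead
    path_blocked: bool          # is someone directly blocking the robot?
    delivery_on_schedule: bool  # is the package still on time?


@dataclass
class RobotEmotion:
    name: str
    appraisal: Callable[[Situation], bool]  # when does the emotion arise?
    action_tendency: str                    # what behaviour does it push towards?


# Loniformi: crowding or blocking threatens the timely, safe delivery.
loniformi = RobotEmotion(
    name="Loniformi",
    appraisal=lambda s: s.path_blocked or s.pedestrian_density > 0.5,
    action_tendency="slow down and seek a safe, uncrowded spot to wait or reroute",
)

# Puffalope: the delivery is progressing smoothly and nothing interferes.
puffalope = RobotEmotion(
    name="Puffalope",
    appraisal=lambda s: s.delivery_on_schedule and not s.path_blocked,
    action_tendency="keep a steady, smooth pace towards the drop-off point",
)


def active_emotions(situation: Situation) -> list[str]:
    """Return the names of the emotions whose appraisal matches the situation."""
    return [e.name for e in (loniformi, puffalope) if e.appraisal(situation)]


if __name__ == "__main__":
    crowded = Situation(pedestrian_density=0.8, path_blocked=True,
                        delivery_on_schedule=False)
    print(active_emotions(crowded))  # ['Loniformi']
```

The same underlying structure can then be expressed differently from the third-person perspective (outward movement and sound) and the first-person perspective (how the emotion feels to the robot internally).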




Evaluating and Reflecting on Sketched Robot Emotions


I was interested in shining light on the following questions:


1. Can people differentiate these robot emotions from other similar human emotions?


2. Do people pick up on the qualities that together constitute the designed robot emotions?




First, the participants were invited to watch a video of approximately 30 seconds. Afterwards, for qualitative feedback, an open-ended question was presented to collect the individual opinions of the participants. This question was intended to let the participants describe what they saw in the video and to find out which specific behaviours of the robot caught their attention.


After that, the participants, having described the robot’s behaviour in their own words, were asked whether they thought the robot had experienced an emotion in the video they watched. As a final step, they were asked which emotion they thought was conveyed in the video, choosing among pre-prepared definitions of three different emotions.


For this last question, the feeling of Loniformi was compared to the feelings of Boredom and Tiredness, and the feeling of Puffalope was compared with the feelings of Calmness and Satisfaction. The reason for choosing these emotions is that they fall into the same arousal and valence classification and are similar to each other in many ways (Figure 7.1).


For the options to the last question, descriptions of each emotion were given instead of their names. In this way, the participants would not know which emotion they chose and could only differentiate the emotions by the definition of their action tendency. Interviews were conducted online using Google Survey. Each session took approximately half an hour.
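To illustrate how these forced-choice answers translate into the percentages reported below, the sketch that follows tallies a set of hypothetical response records for one video. The record structure and the exact split of answers are assumptions made for illustration (only the aggregate percentages are known from the survey), not the actual response data.

```python
# Hypothetical sketch of tallying forced-choice answers for one video into
# percentages. The response records and their split are illustrative; only
# the aggregate percentages come from the reported survey results.
from collections import Counter

# Each record: (emotion chosen from the three descriptions,
#               did the respondent think the robot experienced an emotion at all?)
responses = (
    [("Loniformi", True)] * 21     # 21 of 40 say "felt an emotion" -> 52.5%
    + [("Loniformi", False)] * 14  # 35 of 40 choose Loniformi overall -> 87.5%
    + [("Boredom", False)] * 3
    + [("Tiredness", False)] * 2
)

n = len(responses)
felt_emotion = sum(1 for _, felt in responses if felt)
choice_counts = Counter(choice for choice, _ in responses)

print(f"robot experienced an emotion: {100 * felt_emotion / n:.1f}%")
for emotion, count in choice_counts.most_common():
    print(f"{emotion}: {100 * count / n:.1f}%")
```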



Participants


Forty participants, all 18 years of age or older, took part in the study. For the survey, a differentiated educational and cultural background was a critical aspect in inviting participants for the final validation study. The selected participants could articulate, verbally or in more creative forms, the experience as it is lived, and felt comfortable sharing personal and sometimes even intimate information about their lives and inner worlds. Before the session, participants were asked to fill in a consent form.


Findings: Loniformi

In one of the third-person perspective videos, pedestrians walk leisurely and unsteadily just ahead of the robot; however, they are not in direct contact with it. 52.5% of the survey participants determined that the robot experienced an emotion in this first video, and 87.5% reported it as Loniformi.


In the other video, pedestrians interact with the robot and prevent the robot from fulfilling its mission. 70% of the participants claimed that the robot would feel an emotion in this situation, and 75% estimated that this emotion would be Loniformi.


The outcome I drew from this data is that, according to the participants, in a situation with unnecessary and direct human-robot interaction, the probability of the robot experiencing an emotion is higher than in an indirect human-robot interaction.


One scenario from a first-person perspective was performed while the robot was experiencing Loniformi: a crowded group of people walking on the sidewalk triggers the robot to feel Loniformi.


The respondents mentioned that the density of the people stressed the robot. In addition, they stated that what the robot experienced in these videos was distress, fear and anxiety.


Further, in this video, pedestrians walk around the robot, some fast, some slower. The video included sound effects in order to raise the intensity of the emotion. However, these people are not in direct contact with the robot. 60% of the survey participants determined that the robot experienced an emotion in this video, and 87.5% reported it as Loniformi.


The outcome I gained from this data is that, according to the participants, a robot passing through a crowded street might feel an emotion, and specifically a negative one, because the crowd might block the robot from achieving its mission.


Findings: Puffalope

In one of the videos, two pedestrians pass by calmly and do not initiate any interaction with the robot. 70% of the survey participants mentioned that the robot does not experience an emotion in this first video (Figure 7.13); nevertheless, 62.5% of them reported that the robot is experiencing Puffalope, and 30% of the respondents reported that the robot is experiencing Calm.


In the other video, the robot delivers a package to the end customer. After the delivery, the customer shows their satisfaction with the service; the video represents a positive human-robot interaction. 55% of the participants claimed that the robot would not feel an emotion in this situation, while 55% estimated that the robot is experiencing Puffalope. Furthermore, 40% of the participants thought the robot was experiencing Satisfaction.


From the survey results, it can be concluded that people believe the robot does not experience an emotion when it can carry out its task without issue. At the same time, Calm and Satisfaction received votes very close to Puffalope, which means Puffalope can be confused with Calm and Satisfaction. Another critical point is that when the robot interacts with a person, a substantial number of people switch their opinion and think the robot is experiencing an emotion. From this, it can be concluded that human-robot interactions can trigger an emotion for the robot, and that this is more believable in people’s perception.


In order to answer the second research question, transcripts were analysed using Reflexive Thematic Analysis, which allows for identifying meaningful patterns in qualitative data. To facilitate the process, Quirkos software was used, which allows for the coding and grouping of the qualitative results into topics.
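As a toy illustration of the organising step only, the sketch below groups manually assigned codes into candidate themes and counts how often each theme occurs. The excerpts, codes and code-to-theme mapping are invented for illustration; they do not reproduce the coding done in Quirkos, and counting alone does not replace the interpretive work of reflexive thematic analysis.

```python
# Toy illustration of organising manually assigned codes into candidate themes
# and counting their occurrences. The excerpts, codes and theme mapping are
# invented examples; the real analysis was interpretive and done in Quirkos.
from collections import Counter

# (excerpt, code assigned by the researcher) pairs -- hypothetical examples
coded_excerpts = [
    ("the robot seems stressed by the crowd", "stress"),
    ("it looks like it wants to get away", "avoidance"),
    ("it hesitates before moving", "hesitation"),
    ("it moves smoothly and calmly", "calm movement"),
    ("it just keeps going towards its goal", "task focus"),
]

# Candidate theme for each code (assumed mapping, refined iteratively in practice)
code_to_theme = {
    "stress": "Distress",
    "avoidance": "Need for safety and security",
    "hesitation": "Cautious",
    "calm movement": "Smooth Movement",
    "task focus": "Focused",
}

theme_counts = Counter(code_to_theme[code] for _, code in coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} excerpt(s)")
```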




The resulting clusters of statements and responses can be broadly categorised into five themes for each emotion. The categories for Loniformi are “Displeasure”, “Distress”, “Cautious”, “Need for safety and security”, and “Confused”; the categories for Puffalope are “Pleasure”, “Carefree”, “Calm”, “Smooth Movement”, and “Focused”.



Conclusions


The survey results show that people mostly think the robot is experiencing an emotion if there is an obstacle preventing the delivery robot from executing its task, or, in other words, when the robot has a negative experience, a sense of displeasure.


Moreover, when there is a negative human-robot interaction, the percentage of people who think the robot is experiencing an emotion is at its highest.


Two third-person perspective videos, one first-person perspective video and one virtual reality video were shown to participants while the robot expressed Loniformi. For each video, more than 50% of participants chose the robot as experiencing an emotion. From the Loniformi videos, people could easily differentiate Loniformi from the other two emotions, Tired and Bored: Loniformi received more than 75% of the votes. To describe the state of Loniformi, participants used words akin to hesitation, frustration, worry, and stress.


On the other hand, one of the leading behaviours Loniformi aims to elicit in the robot is seeking safety and security. However, this behaviour was not sharply perceived by the research participants.


Two third-person perspective videos and one first-person perspective video were shown to participants while the robot expressed Puffalope. More than 55% of the participants chose that the robot was not experiencing an emotion. Even though a higher percentage of participants voted for Puffalope, a considerable number of people also voted for Calm and Satisfied. The conclusion drawn from this is that it was difficult for participants to distinguish Puffalope from two other resembling emotions, Calm and Satisfied. Respondents of the survey described Puffalope as being calm, carefree, happy and focused on executing its task.


Additionally, there was another third-person perspective video in which the robot is not experiencing any emotion; it contained no human-robot interaction and no obstacle preventing the robot from executing its task. For this video, 67.5% of participants thought that the robot was not experiencing an emotion, and 52.5% of the participants chose that the robot would feel Calm if it were experiencing an emotion.





