- June 24th
- 9:00 - 11:00 AM
UNESCO’s 2019 report, entitled “I’d Blush If I Could,” claims that voice assistants propagate harmful gender biases, such as reinforcing the notion that women belong in subservient roles, while media coverage and research continue to argue that tech companies need to do better. As a reflection of the brand, the agent’s personality is a critical component of the design of conversational systems. But how do the personalities we design for our voice assistants propagate biases, and how do we avoid doing so? This 5-hour hands-on workshop explores the components that make up personality, the role that each component — including gender — plays, and ways to avoid unintended biases. We’ll share our work on Q, the non-binary voice, and our research on gender and personality in voice assistants. We’ll then break out into teams to go through design activities and rapid prototyping for a conversational assistant, thinking through the implications of our decisions.
This workshop is beneficial for designers, developers, product managers, and creatives who are responsible for creating conversational assistants or bringing their company’s brand to life.
This workshop is split into two days; the first day is June 22 at 9:00 AM.
9:00 – 9:15 AM Day 2 Welcome, Agenda
9:15 – 9:45 AM (Interactive) Group discussion / revising the persona from part 1:
– What potential perceived biases are introduced by the personality you identified in breakout 1?
9:45 – 10:15 AM (Lecture 3) Persona, gender, and aural voice
– Intro to non-binary TTS voices
– Audio examples of how voice and persona are linked
– Audio examples from TTS tools with differently gendered voices (male / female / non-binary)
10:15 – 10:45 AM (Interactive) Group discussion:
– What voice would this AI interface have?
– How would the perception of the voice and interaction change if it had different genders, including non-binary?
10:45 – 11:00 AM Final thoughts and wrap up