The BLiSS team at the University of Michigan responded to the X-HAB request for proposal (RFP) solicited by the NASA Stennis team. NASA Stennis tasked BLiSS to design a modern, efficient, and intuitive voice user interface (VUI) for crew interactions on the Lunar Gateway. This project aims to advance the knowledge and technology needed to successfully design voice interfaces for life support systems in space.
Fall ’20 semester
Bioastronautics and Life Support Systems (BLiSS) Team, University of Michigan Engineering
- HCI Lead
- Conversation Designer
My role involved briefing the team on human-centered design principles, introducing and leading them through design thinking activities, and maintaining an aggressive, agile pace.
I introduced the conversation design process and provisional personas for Artemis-generation astronauts, then led my team through designing and developing the system persona, sample dialogue flows, and Wizard of Oz testing.
Autonomous Systems Research
As we came to understand from the researchers at the Stennis Autonomous Systems Lab, our goal wasn’t to perfect the interface but rather to develop an experience strategy to support trust-building and usability for further development of NPAS (NASA Platform for Autonomous Systems).
Evangelizing UX to space engineers
The human-centered design (HCD) process is crucial to the goal of NASA’s Space Human Factors team: making spaceflight and future exploration safe and productive. NASA’s HCD process follows ISO 9241-210:2019 and involves three primary phases of activities (Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Houston, TX: NASA):
- Understanding the user and their domain
- Visualizing the design
- Evaluating the design
This process supports nonlinear, agile development, allowing for iterative, data-driven design through testing and evaluation. Personas refer to both the user and the system; anthropomorphic traits support adoption and trust (Dasgupta, R. (2018). Voice User Interface Design: Moving from GUI to Mixed Modal Interaction. Apress).
Adapting a human-centered design process to our project, we first wanted to consider whether a voice interaction is appropriate for the task. Some tasks are ideal for voice interaction, others require a combination of voice and visuals, and still others are better left solely to graphical interactions.
Where conversation design is appropriate, our process looked like the flowchart above. We began by identifying users and use cases. We focused on astronauts, specifically Artemis astronauts, and their historical and technological frame of reference. We also considered commercial astronauts as well as astronauts and cosmonauts from international space agencies.
An interesting difference between designing for conversation and designing for a visual display is that we create both a user persona and a system persona.
Mapping anthropomorphic traits onto a voice assistant supports adoption and trust in ways that aren’t necessary for typical GUI design. This is why Alexa, Siri, and Cortana all have names, and while Google Assistant is notable for not having a name, it still has an anthropomorphic personality.
From there, we began storyboarding “happy paths” for users to interact with the system and achieve their goals. Then we drafted sample dialogues and started internal testing to identify glaring usability issues.
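The sample dialogues we drafted can be captured as ordered lists of turns before committing anything to a tool. A minimal sketch of this format follows; the utterances and intent labels are illustrative placeholders, not the project’s actual dialogue content:

```python
# A "happy path" sample dialogue captured as an ordered list of turns.
# Speakers, utterances, and intent labels here are illustrative
# placeholders, not the project's actual dialogue content.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Turn:
    speaker: str                  # "user" or "system"
    utterance: str
    intent: Optional[str] = None  # only user turns carry an intent label

sample_logging_dialogue = [
    Turn("user", "Take a note on sample one.", intent="start_sample_note"),
    Turn("system", "Ready. Go ahead with your note on sample one."),
    Turn("user", "Glint suggests a high-titanium basalt.", intent="dictate_note"),
    Turn("system", "Logged to sample one with time and EVA metadata."),
]

def transcript(dialogue):
    """Render a dialogue as a readable script for team review."""
    return "\n".join(f"{t.speaker.upper()}: {t.utterance}" for t in dialogue)

print(transcript(sample_logging_dialogue))
```

Keeping dialogues in a structured format like this makes it easy to review them as plain scripts and later wire them into a prototyping tool.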
Human information processing
Conversation design should feel reflexive, freeing up attentional resources to support multitasking. Where an interaction instead demands the user’s focused attention, a GUI may be more appropriate.
The goal of designing information systems technology is to mitigate sensory stimuli and feedback to extend the user’s capabilities.
The primary consideration in deciding whether a voice interaction is appropriate is human information processing; in particular, we wanted our tool to support the user’s ability to multitask. Visualized through this model, we wanted to mitigate sensory stimuli and free up the areas of central processing. Freeing attentional resources improves thought and decision-making, and if we do it right, we should be able to artificially expand working memory’s channel capacity.
Working Memory Channel Capacity
What causes cognitive load?
- Too many choices
- Too much thought required
- Lack of clarity
Methods for reducing cognitive load
- Avoid unnecessary interactions
- Leverage common design patterns
- Minimize choices
- Use acronyms
There is some debate over the channel capacity of working memory. Miller’s commonly referenced model is 7±2; Broadbent revised this to 4±1, a model that better accounts for information chunking, a concept we leveraged in our design. Causes of cognitive load include too many choices, too much thought required, and a lack of clarity. Methods we employed to reduce cognitive load include avoiding unnecessary interactions, leveraging related research on common conversation design patterns, and minimizing choices.
We also sought to take advantage of the bevy of acronyms NASA already employs in operations and mission design, as acronyms convey complex information in a single, meaningful chunk (Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2016). Engineering Psychology and Human Performance. New York: Routledge).
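The effect of acronym chunking on working-memory load can be illustrated with a toy example. The phrase-to-acronym table below is a small illustrative sample, not an official NASA glossary:

```python
# Chunking demo: acronyms compress multi-word phrases into single
# meaningful chunks, helping a prompt fit within working-memory limits.
# The phrase-to-acronym table is illustrative, not an official glossary.

ACRONYMS = {
    "environmental control and life support system": "ECLSS",
    "extravehicular activity": "EVA",
}

def to_chunks(prompt, table=ACRONYMS):
    """Replace known phrases with acronyms, then treat each remaining
    token as one chunk (a crude proxy for working-memory load)."""
    text = prompt.lower()
    for phrase, acronym in table.items():
        text = text.replace(phrase, acronym)
    return text.split()

prompt = ("check environmental control and life support system "
          "before extravehicular activity")
raw_chunks = prompt.split()      # 10 word-level chunks
acro_chunks = to_chunks(prompt)  # 4 chunks: check ECLSS before EVA
print(len(raw_chunks), len(acro_chunks))
```

The acronym version lands at four chunks, right at Broadbent’s 4±1 capacity, where the expanded phrasing would blow well past it.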
Conversation Style Guidance
- Focus on the user.
For any given interaction, our intent should be to make the user and their task the center of attention.
- Keep it short and sweet.
Astronauts are humans too: commercial astronauts will not have the same background and training as NASA astronauts, and international astronauts and cosmonauts will not all have the same English proficiency, so keep prompts plain and brief.
- Lead with benefits.
In cases where the VUI provides instruction for task completion, follow this formula:
“To get what you want, do this.”
- Avoid UI-specific directions.
For display prompts, reference actions like “Pick” or “Search” rather than “Tap” or “Click.” This helps signal that an action can be spoken.
- Shorter responses are better.
Interactions with the VUI should be informative and concise. Where more detail and technical information are required, VUI + GUI should be used as a multimodal experience to reduce the user’s cognitive load.
- The computer does the work, and the person does the thinking.
- Use contractions.
This makes the voice assistant sound more human and less robotic.
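Several of these rules are mechanically checkable while drafting responses. A hedged sketch of such a response linter follows; the word lists and length threshold are my illustrative assumptions, not project constants:

```python
# Rough style-guide checks for drafted VUI responses. The word lists
# and length threshold are illustrative assumptions, not project rules.

UI_SPECIFIC = {"tap", "click", "swipe"}  # avoid: implies touch or mouse
CONTRACTIONS = {"do not": "don't", "can not": "can't",
                "it is": "it's", "we will": "we'll"}
MAX_WORDS = 20                           # keep it short and sweet

def lint_response(text):
    """Return a list of style-guide violations for one system response."""
    issues = []
    words = text.lower().split()
    if len(words) > MAX_WORDS:
        issues.append(f"too long: {len(words)} words (max {MAX_WORDS})")
    for w in UI_SPECIFIC & set(words):
        issues.append(f"UI-specific verb '{w}': prefer 'pick' or 'search'")
    for expanded, contraction in CONTRACTIONS.items():
        if expanded in text.lower():
            issues.append(f"use contraction '{contraction}' for '{expanded}'")
    return issues

print(lint_response("Tap the screen if you do not want to continue."))
```

A check like this would flag drafts for review, not rewrite them; the judgment calls (tone, benefit-first phrasing) still need a human reviewer.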
The right user persona
Provisional personas were developed from profiles of Artemis astronauts synthesized with prior research described in the case study: Building an AR interface experience for the Artemis Mission.
Happy Path Scenario
Winnie gets a haul of lunar samples back from the EVA. She opens up the first box and picks up a sample.
As she’s inspecting the moon rock, she notices a glint of light indicating high-Ti basalts. Suddenly she remembers she had left her notebook out of reach.
With her hands full, she’s about to call out for one of the other crew members to ask if they can bring it to her. Then she remembers that she can simply dictate the notes to the voice assistant, which will automatically add metadata to the log.
Now Winnie can get absorbed in her analysis and forget about incidental administrative issues.
System persona: personality selection
Even more than a good user persona, this project demanded the right system persona.
I challenged my team to research personality traits that NASA looks for in astronauts and personality traits of good team players, then brainstorm traits that we would like to see in the system. We took some 65 traits and deployed a survey (n=15) to vote on crucial system personality traits. Our goal was to develop a personality ideal for representing a veteran astronaut who could behave like a crew member.
With key traits (knowledgeable, warm, friendly, calm, concise, funny) in hand, I then challenged my team to use them to design a character to audition for the role of the system persona. We reviewed each candidate persona as a team and determined the best one. Meet Diego.
- Name: Diego Sanchez
- Age: 38
- Occupation: Directorate Chief Scientist
- Hometown: Santander, Spain
- Education: BS in Physics; MS in Applied Physics, Systems Engineering; PhD Planetary Science
- Hobbies: Skiing, surfing, astronomy, volunteering
Diego is from the bustling city of Santander, right on Spain’s north coast. He grew up loving the water and developed a strong passion for astronomy during star-gazing nights with his family. Diego’s family owned farms and specialty coffee shops, where he learned how to interact well with customers and the value of hard work. After high school, he attended the University of Bordeaux in France, where he met a few colleagues planning to work with NASA after graduation; one of them offered to set him up with a hiring manager. After earning his second MS degree, Diego returned home for a year to help with the family business, then set out for Pasadena, CA, where he accepted his first position as a Research Space Scientist. Since then, he has worked his way up to his current position of Directorate Chief Scientist at NASA JPL. In his free time, Diego enjoys vacations with his family, teaching his youngest daughter French, and surfing on the LA coast.
During one of our meetings, I described that in UX, we often reframe problems or ideas for the design space as “how might we” questions and solicited many responses on a spreadsheet. This approach allowed the team to document ideas as they came and add them later. The highlighted lines indicate the particular use cases we converged on for further development, moving the rest to the backlog for future work.
After settling on some use cases to develop further, my team and I got to work writing sample dialogues. At first, we simply used a list format in Google Docs for team review.
After we fixed any general errors that didn’t comply with our style guide, I began to wire the dialogues up using Voiceflow. This application allowed us to conduct remote Wizard of Oz (usability) testing and make adjustments prior to committing anything to code.
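The kind of dialogue flow we wired up in Voiceflow can also be sketched as a tiny state machine that a human “wizard” drives by hand during Wizard of Oz testing, with the participant hearing only the system prompts. The state names and prompts below are illustrative placeholders, not the actual NPAS dialogues:

```python
# Minimal dialogue flow for Wizard of Oz testing: a human "wizard"
# selects each transition while the participant hears only the system
# prompts. State names and prompts are illustrative placeholders.

FLOW = {
    "start":   {"prompt": "How can I help?",
                "next": {"log_note": "note", "status": "status"}},
    "note":    {"prompt": "Go ahead, I'm listening.",
                "next": {"done": "confirm"}},
    "status":  {"prompt": "All life support systems nominal.",
                "next": {"done": "confirm"}},
    "confirm": {"prompt": "Anything else?",
                "next": {"no": "end", "yes": "start"}},
    "end":     {"prompt": "Okay, I'm here if you need me.", "next": {}},
}

def run(choices, flow=FLOW, state="start"):
    """Replay a wizard's choices; collect the prompts a participant hears."""
    prompts = [flow[state]["prompt"]]
    for choice in choices:
        state = flow[state]["next"][choice]
        prompts.append(flow[state]["prompt"])
    return prompts

print(run(["log_note", "done", "no"]))
```

Keeping the flow in a plain data structure like this lets the team adjust prompts between test sessions without committing anything to production code, which is the same advantage Voiceflow gave us.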
At this point, I moved to an advisory role as I had taken on an internship at NASA’s Johnson Space Center as a UI Architect. While we did not formally test the interface, the strategy and process that I outline above are serving as the baseline approach for the development of the actual system at Stennis Autonomous Systems Lab. Please click below to view the full report, where my contributions can be found on pages 24 – 28 and 44 – 49.
Matt’s perspective on how he looks at implementation is just so refreshing. He knows his material, comes well-prepared, and gives great presentations. The work he and his team did for us will serve as an important baseline for developing voice interactions for NPAS (NASA Platform for Autonomous Systems) going forward.
Dr. Lauren Underwood & Dr. Fernando Figueroa | Project Manager & Discipline Lead (respectively) at NASA Stennis Space Center Autonomous Systems Lab | November 23, 2020