UX Case Study: Disney+ Interaction Design


Although I’m not the biggest fan of Disney’s media monopoly for a variety of reasons, I’m a huge Marvel geek. As such, I was excited about Disney+ when it arrived, serving as the home for Marvel Studios streaming. I personally think Disney+ is a great service that provides plenty of value for the customer. However, in dealing with the app itself, I’ve always found the UX to be somewhat lacking. Since its launch less than a year ago, I’ve noted little pain points here and there in the interaction design.

I’ve been spending the summer as a UX researcher, essentially specializing in research over design. That said, I’m also agile, cross-functional, and skilled at rapid prototyping. Yesterday, while aimlessly browsing streaming platforms on my Roku looking for something to watch, I was reminded of an issue with Disney+, and I think I can offer a very simple solution.


There are 22 rows of content to browse, each with up to 25 individual video selections, but there is no easy way to navigate back to the top of the screen. The top of the screen is the only place to access and navigate between channels (Disney, Pixar, Marvel, Star Wars, National Geographic), which means that if the user has browsed to the bottom of the main screen, they have to click the up button on their remote 22 times in order to filter content to one of the channels.

How might we improve the navigation experience via TV remote for the end-user?


Borrowing from some of the other streaming services I subscribe to, I offer two potential solutions:

  1. The Home icon on the sidebar menu will take users to the top of the screen
  2. The sidebar menu gives users the ability to filter content by channel


I created an interactive prototype with Adobe XD. It may take a moment to load, as I improved the fidelity of the prototype by incorporating 250+ images of content currently featured on the main page.

One of my favorite things about this prototype was all the interactions I was able to incorporate in a single frame, taking advantage of XD’s components and the ability to create a variety of states for them. Initially, I made the sidebar menu using an overlay, but I went back and nested the menu components in another component with the background so that I could delete the overlay. This provides a smoother look and feel when the menu expands.

This prototype includes the 10 current (as of 7.26.20) banner images, a static and hover state for each channel, and the first 10 video selections of all 22 rows.

General walkthrough demonstrating various interactions in the prototype.

In the prototype, you can scroll up and down as though it’s a web page, but imagine instead using a Roku remote and having to click down each time you want to go to the next row. Then imagine having to click up 22 times from the bottom row to get back to the channels.

To improve this user flow, the home icon on the sidebar will take the user back to the top of the screen from wherever they currently are. On the remote, when you click left to open the sidebar, Home is automatically selected. So if a user is scrolling down the rows from the first position, they can easily get back to the top in two clicks.
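To put numbers on the difference, here’s a rough back-of-envelope press-count model (my own illustration; the press counts are assumptions based on how the current UI and my prototype behave):

```python
def presses_current(row: int) -> int:
    """Current flow: climbing back to the channel bar takes one Up press per row."""
    return row

def presses_proposed(row: int) -> int:
    """Proposed flow: Left opens the sidebar with Home pre-selected, then OK.

    The cost is constant no matter how far down the user has scrolled.
    """
    return 2

# Worst case: the user is on the bottom of the 22-row main screen.
print(presses_current(22))   # 22 presses today
print(presses_proposed(22))  # 2 presses with the reimagined Home icon
```

The constant two-press cost is the whole point of the redesign: the effort to reach the channels no longer grows with how deep the user has browsed.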

Demonstration of how I re-imagine the functionality of the home button.

Next, I’ve introduced a means of filtering channels from the sidebar, so that from anywhere on the screen, the user can open the sidebar menu and navigate between channels. In the current version of the app, there is already a Disney+ icon in the sidebar. Building on this, I simply introduce navigation arrows and give users the ability to select channels from the sidebar.

Demonstration of how I re-imagine sidebar menu channel navigation.


This is an informal case study, intended primarily to build a prototype through which to visualize my solutions. I have not recruited participants for usability testing, nor do I intend to. I offer the link to the prototype below. I encourage you to play around with it and leave me a comment telling me what you think.


Disney+ offers a plethora of curated streaming video content. However, the app on smart TVs presents some issues in regard to interaction design and navigation. I identified an opportunity to improve users’ ability to quickly navigate back to the top of the screen, along with an improved user flow for switching between channels. With no usability testing to back up my claims, I hypothesize that my solutions will improve the user experience by speeding up the user’s ability to navigate the main screen as well as to select between channels.

HCI, the democratization of space exploration, and bioastronautics

University of Michigan

School of Information

Master’s Thesis Letter of Intent

Project Title

Ethnographic Encounters of the HCI kind in Bioastronautics


Matthew Garvin

Faculty Advisor

Kentaro Toyama



Research Plan

Problem Statement

Bioastronautics is a branch of aerospace engineering that specializes in the study and support of life in space. Bioastronautics researchers are interested in the biological, behavioral, medical, and material domains of organisms in spaceflight. Technological advances have increasingly led to a deepened interest and urgency in the domain of space habitats. The goal of NASA’s Artemis Program is to establish a sustainable lunar colony in order to learn how to establish a sustainable colony on Mars. One of the primary objectives in the design and development of new technology to support life in space is the need for software that can support astronaut autonomy. This means that, for the first time, astronauts themselves have to be able to use these tools to carry out missions safely and effectively, without assistance from Ground Control.

As humans seek to expand out into the solar system, the tools, technologies, and habitats needed to support life in space have to incorporate good HCI principles. How do bioastronautics researchers conceive of user needs, preferences, and comforts when designing interfaces and habitats for future spaceflight and habitation? Most bioastronautics researchers will never experience the environment they are designing for, and according to NASA’s 2013 evidence report, “Risk of Inadequate HCI,” “HCI has rarely been studied in operational spaceflight, and detailed performance data that would support evaluation of HCI have not been collected” (Holden, Ezer, & Vos, 2013). The report goes on to note the additional concern that potential or real HCI issues in past missions have been masked by constant contact with Ground Control (Holden et al., 2013).

Literature Review


Because life as we know it cannot exist on its own in space, everything used to put humans in spaceflight and habitation is a concern of bioastronautics. Due to the relatively short distance and duration of missions to date, researchers and engineers in bioastronautics have primarily been concerned with the human factors of hardware and industrial design, ensuring those designs were considerate of human physiological capabilities. As technology advances and we push the boundaries of what is possible, a shift in focus to issues of human-computer interaction is an increasing necessity. While previous space shuttles were typified by hard switches and buttons, astronauts using exploration vehicles will primarily interact with glass-based interfaces, software displays, and controls (Ezer, 2011).

According to Holden et al. (2013), inadequate HCI presents a risk that could lead to a wide range of consequences. While the amount of information that must be displayed is increasing, the real estate in which to display it remains limited. Furthermore, as mission distance and length increase, immediate access to ground support will continue to decrease, meaning there won’t be a team of experts on the ground prepared to answer questions, solve challenges, and provide workarounds on the fly. As a result, the design of computing and information systems needs to take this into account, providing support and just-in-time training for the autonomous astronaut when a mission isn’t going according to plan. In terms of HCI, this means that interfaces must account for environmental and contextual challenges, imposing low cognitive load and remaining usable with pressurized gloves, in microgravity, and under persistent vibration (Holden et al., 2013).


The term bioastronautics first appears in the literature in a 1962 survey published by Cornell Aeronautical Laboratory, which defines it as the study of life in space, with the author noting that the discipline was so new there had hardly been time to come up with a name (White, 1962). For context, bioastronautics was born during both the Cold War (1947-1991) and the Space Race (1955-1975) between the United States and the Soviet Union. The primary intent behind the discipline remains today what it was then: to produce systems and technology capable of supporting and sustaining life in microgravity, and to understand the effects of microgravity on the human body. In this regard, much of the research has centered on medical concerns.


“Bioastronautics encompasses biological, behavioral and medical aspects governing humans and other living organisms in a space flight environment; and includes design of payloads, spacecraft habitats, and life support systems. In short, this focus area spans the study and support of life in space” (UC Boulder Aerospace Engineering Sciences, 2020).

Main Body

When space human factors researchers consider mission design and work practices, they are especially considerate of the roles of the various crew members, their physical and mental capabilities, and the requirements for life support, space, and training (Woolford & Bond, 1999). For twelve days in 2002, computer/cognitive scientist William Clancey led an ethnographic study of a closed simulation at the Mars Desert Research Station for NASA-Ames Research Center and the Institute for Human and Machine Cognition. The study was a methodological experiment in participant observation and work practice analysis. It gathered qualitative data on productivity, habitat design, schedules, roles, etc., and sought to learn whether ethnography could be applied to a closed simulation. Serving as the crew commander, could one also conduct ethnography through participant observation? According to Clancey, one can (Clancey, 2004). In addition to Clancey’s study, there are a number of other simulations for space habitat research, such as Stuster’s Bold Endeavors (1996) in a polar environment, the Lunar-Mars Life Support Test Project in a closed chamber, the NASA Extreme Environment Mission Operations Project (NEEMO) in an underwater habitat (2004), and BASALT (Biologic Analog Science Associated with Lava Terrains). Analog projects like these are designed to simulate on Earth certain environmental variables, testing concepts of operations in regard to hardware, software, and data systems, as well as communication protocols. For these projects, the primary focus is centered on the EVA, or extravehicular activity (Beaton et al., 2019). An EVA astronaut is the one who dons the spacesuit and exits the living quarters to explore, conduct research, or engage in repair tasks. When an astronaut exits the International Space Station to change a battery or make some other upgrade or repair, that’s an EVA.

With Olson (2010), we get a glimpse into the ecologies and human cosmologies of American astronautics. Through ethnographic fieldwork conducted primarily at NASA’s Johnson Space Center and submitted for her Ph.D. in medical anthropology, Olson argues that ecology and cosmology are co-constituting. Combining participant observation with archival data, Olson evaluates how astronautics practitioners come to know and work with the “human environment.” This work highlights how astronautics is connected to a broader array of environmental science and technology (Olson, 2010). What does it mean to be sociopolitical, technoscientific, symbolic, and transcendental? With this, Olson is asking what role astronautics has in making ecological knowledge, and how it can inform and make scalable concepts like adaptation and evolution.

In an article published the same year, Olson (2010) argues that in extreme environments such as outer space, “the concept of environment cannot be bracketed out from life processes; as a result, investments of power and knowledge shift from life itself to the sites of interface among living things, technologies, and environments” (Olson, 2010).


While there have been a few attempts to conduct ethnography in mission and environmental simulations, none of them focused on human-computer interaction. Similarly, while Olson’s ethnography focused on NASA researchers, the purpose of that work was to inform medical anthropology. Like Olson, I contend that with advancing technology, it becomes clearer how life, technology, and the environment are interrelated. As a result, human-computer interaction is a central facet of successful mission planning and execution for the autonomous astronaut. It is therefore crucial to understand how researchers interested in the bioastronautics of spaceflight and habitation conceive of human-computer interaction and of user needs, preferences, and comforts.

Research Design

Embedded in the student research group CLAWS (Collaborative Lab for Advancing Work in Space) as the UX lead and human-in-the-loop testing coordinator, I will conduct participant observation with my teammates and our faculty sponsors. I will conduct ethnographic interviews with certain members who meet my criteria (actually pursuing a degree/career in astronautics/bioastronautics). I will combine this ethnographic research with a more extensive literature review, and analysis of archival data from NASA as well as past CLAWS projects.

For the past two years, CLAWS has competed in the NASA SUITS Challenge, which has granted the team personal access to a NASA mentor and monthly recorded virtual sessions with NASA astronauts, scientists, and engineers who met with us to provide useful information for our research. CLAWS works alongside several other student research teams as subgroups of SEDS@UM (Students for the Exploration and Development of Space), the University of Michigan chapter of the larger international SEDS organization. This group is primarily composed of computer science, mechanical engineering, and aerospace engineering undergraduate and graduate students who participate in a variety of technical projects and community outreach events. In an effort to bolster the UX of the technical projects, there has been a push to attract more students from the School of Information with backgrounds in UX design and human-computer interaction. I’ve taken advantage of this opportunity and have worked my way up to a leadership role on the team since joining in early December 2019.

The purpose of my research is to understand how researchers in this field think about human-computer interaction and user needs, comforts, and preferences. How do bioastronautics researchers conceive of designing for humans and testing through simulated environments? I am currently involved in four projects with members of CLAWS. Two of these projects are related to ATLAS (Augmented Tools for Lunar Astronauts and Scientists), the augmented reality system we invented for NASA SUITS and X-Hab. The two others involve designing the first one-million-person Mars city-state and optimizing food production for a nine-person crew on Mars, both for SpaceX. Working with my team on these projects and continuing to work with them on similar projects throughout my research, I will gather qualitative data to better understand how to implement human-centered design strategies and evaluative processes in the field of bioastronautics.


Beaton, K., Chappell, S., Abercromby, A., Miller, M., Nawotniak, S. K., Brady, A., . . . Lim, D. (2019). Assessing the Acceptability of Science Operations Concepts and the Level of Mission Enhancement of Capabilities for Human Mars Exploration Extravehicular Activity. Astrobiology, 19(3), 321-346.

Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.

Ezer, N. (2011). Human interaction within the “Glass cockpit”: Human Engineering of Orion display formats. Proceedings from the 18th IAA Human in Space Symposium (#2324). Houston, TX.: International Academy of Astronautics.

Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1-46.

Olson, V. A. (2010). American Extreme: An Ethnography of Astronautical Visions and Ecologies. Ann Arbor, MI: UMI Dissertation Publishing.

Olson, V. A. (2010). The Ecobiopolitics of Space Biomedicine. Medical Anthropology, 170-193.

UC Boulder Aerospace Engineering Sciences. (2020, April 13). Bioastronautics. Retrieved from University of Colorado Boulder: https://www.colorado.edu/bioastronautics/

White, W. J. (1962). A Survey of Bioastronautics. Buffalo, NY: Cornell Aeronautical Laboratory.

Woolford, B., & Bond, R. (1999). Human factors of crewed spaceflight. In W. Larson, & L. Pranke, Human Spaceflight: Mission Analysis and Design (pp. 133-153). New York: McGraw-Hill.

Need Assessment and Usability | IBM Developer

image of the IBM Developer landing page


IBM Developer is the rebranded platform (previously known as developerWorks) where developers can explore various resources, from coding patterns to live-streamed tutorials and articles from experts. IBM Developer also offers community outlets for developers to meet and engage via live streams, blogs, podcasts, and newsletters.


The primary goals of this project are to make recommendations to attract and retain more active users on the website and understand the aspects of community that are important to developers. These goals drove our research.

Methodological Overview

Interaction Map

Shortly after meeting with our client, we got to work generating an interaction map. The purpose was to produce a static representation of the system we were beginning to research. For this task, we used Figma and the Arrow Auto plugin. To browse the complete interaction map, click here. We collected screenshots cataloging Technology Topics, Community, Account Creation, and IBM Cloud, then arranged and connected them with the plugin.

With the interaction map in hand, we had a reference artifact that made salient all the possible actions and the respective error and non-error states (e.g., pages, screens) those actions can lead to. We didn’t need to map the entire website, only a narrower representation of the primary workflows and navigation avenues with which we were concerned.

image of the interaction map
Semi-Structured Interviews
Target Population

IBM Developer is an expansive resource for developers, both seasoned and those interested in entering the field. For our purposes, we specifically targeted professional developers already using, or interested in using, cloud computing for development.

Recruiting Methods

Our client supplied the team with the names and email addresses of 36 users who had opted in to be contacted for user research on IBM Developer. We sent a screener survey to these individuals to get a brief understanding of their backgrounds in development.

After reviewing the screener results, we selected three individuals who indicated they were available for up to 1.5 hours at a time mutually agreeable to at least two members of our team. Of the candidates who fit this profile, we selected a range of user segments.

  • Female, “gray hair”, Java Developer
  • Female, early 20’s, recent college graduate, career in tech and information
  • Male, mid-30’s, self-employed developer in custom software solutions and web development
Tools and Instruments

Each team member led a semi-structured remote interview using a protocol designed to:

  • probe the participant for values, attitudes, goals and behaviors related to software development
  • gather concrete experiences using developer platforms and resources
  • learn how their experiences varied across that spectrum.

Each interviewer was accompanied by another member of the team filling the role of the note-taker. Interviews were conducted virtually via Google Meet, and the interviews were audio recorded using our mobile devices.


The audio recordings were transcribed first by Otter, then manually edited for accuracy. Using virtual sticky notes on Miro, a digital whiteboard, we applied a hybrid of thematic analysis and affinity diagramming, coding significant insights and quotes into clustered themes and sub-themes related to values, mental models, goals, behaviors, pain points, and tasks. We also noted a few other themes that emerged, such as roles, skill level, preferred tools, and demographics.

image of our hybridized thematic affinity diagram

From this data we were able to craft three provisional, or “ad-hoc” personas and design a scenario for each based on the themes that emerged from this analysis.

Findings & Recommendations
  • IBMid is a big pain point
    • Recommend simplifying
  • Participating in a positive developer community is a value
    • Recommend increasing engagement through targeting and courting underrepresented communities
  • Searching for answers can be confusing
    • Recommend research for improved information architecture
Comparative Evaluation

For the purpose of comparison, seven products were selected as representative competitors and sorted into a taxonomy of direct, indirect, partial, parallel, and analogous categories. With categories established, our team compared these products across a number of dimensions, analyzing this information to understand how the products are similar, where they differ, and how the data could refine our initial recommendations for improving growth and retention on the site.

Through this evaluation, our team was able to investigate in more detail the general landscape in which IBM Developer and its public cloud platform compete. We found that all direct competitors except IBM Developer surface some variation of “Getting Started” on the homepage for new users. While a lot of comparative research exists on public cloud offerings, there was little comparing the actual websites, particularly since the launch of the new website last fall. According to our scoring of common usability issues, IBM Developer stacks up quite nicely against the competition.

Lastly, we compared cross platform engagement within the cloud developer community and uncovered relevant insights as to the importance of a healthy and stimulating developer community.

table outlining the seven competitors — including Google Developer, SAP Developer, RamNode, Alibaba Cloud, and Women Who Code — with notes such as online storefront, cloud hosting, complex pricing, software deployment, similar service and market presence, targeting a different region, active and engaged, and supportive atmosphere
image of comparative usability score table
Findings & Recommendations
  • The overall usability score average was 12.25; IBM Developer scored 11
    • Recommend investing in innovative information architecture
  • IBM does not have a “Getting Started”
    • Recommend prioritizing “Getting Started” from the landing page
  • A lack of engaged community support
    • Recommend bringing back some community features to re-establish engagement

Survey

For this leg of our research, we composed and deployed a survey through Qualtrics, seeking an ideal of 50 respondents and a minimum of 30, a common rule of thumb for meaningful quantitative analysis.
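As context for those sample-size targets (my own illustration, not part of the original study): assuming a simple random sample and the worst-case proportion p = 0.5, the 95% margin of error for a survey proportion shrinks noticeably between 30 and 50 respondents:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% confidence margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(30), 3))  # 0.179 -> roughly +/-18 percentage points
print(round(margin_of_error(50), 3))  # 0.139 -> roughly +/-14 percentage points
```

In other words, more responses were preferable because they tighten the error bound; 30 is a floor for usable estimates rather than a significance threshold in itself.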

Target Population

IBM Developer is an expansive resource for developers, both seasoned and those interested in entering the field. We targeted professional developers already using, or interested in using, cloud computing for the storage and deployment of software.


Recruiting Methods

Our client indicated that we should first reach out to the 36 users whom we had contacted previously for user interviews. These are professional developers with varying focus areas and skill sets. IBM granted us two $50 and five $20 gift cards to raffle off as an incentive to take the survey. After the first round of respondents, the client provided us with a list of 141 additional emails to recruit from. Due to the low response rate, the client re-sent the survey invitation to the same 141. Two days later, our client sent the invitation to an additional 90+ potential respondents.


Tools and Instruments

We used a Google Doc for remote brainstorming, shared with the client to solicit feedback on survey question design. We then transferred the survey into Qualtrics, a powerful platform for designing and deploying surveys on the web. Each member of the team, as well as the client, piloted the survey to check for errors and ensure a good question flow for participants.


Through our analysis of survey data, we learned that respondents primarily use IBM Developer to take advantage of learning materials in the Technology Topics section.

While content interactivity isn’t a high priority for many of our respondents, companies like AWS and Adobe have shown how it can be leveraged to improve community and user engagement, spurring retention and brand loyalty: AWS holds interactive coding sessions on Twitch, and Adobe has built an entire community around its Daily Creative Challenges for XD, Photoshop, and Illustrator.

Of the five features we asked respondents to rank in terms of importance, website navigation ranked highest. After Technology Topics, respondents said they were most satisfied with navigation; this stands in contrast to earlier research, where navigation was often noted as an area for improvement.

Findings & Recommendations
  • Exploring “Technology Topics” is generally a positive experience
  • Content interactivity can undergo improvements
    • Recommend creating Daily Cloud Developer challenges
  • Website navigation is the most important feature that users look for
    • Recommend a card sorting study to better understand end users’ mental models
Heuristic Evaluation

For the heuristic evaluation, our goal was to evaluate information architecture through menu structure and to gain insights into how users experience “Technology Topics”. To this end, we utilized Jakob Nielsen’s 10 fundamental usability heuristics.

After conducting our individual evaluations, we came together to align and synthesize violations and severity ratings, from which we derived a few key findings:

  • Information architecture presents opportunities for revision
  • Navigation cues are lacking
  • Search functions miss opportunities to pin relevant information
  • Help and support is static and simple
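The synthesis step can be sketched in a few lines. This is purely illustrative (the violations and ratings below are hypothetical, not our actual data), but it shows the common approach of averaging each evaluator’s severity rating on Nielsen’s 0-4 scale and ranking the results worst-first:

```python
from statistics import mean

# Hypothetical severity ratings (0 = not a problem ... 4 = usability catastrophe)
# from three evaluators, keyed by a short description of the violation.
ratings = {
    "search results not pinned to relevant information": [2, 3, 2],
    "missing navigation cues in the header menu": [3, 2, 3],
    "help and support content is static": [1, 2, 1],
}

# Consensus severity: average the evaluators' scores, then rank worst-first.
consensus = sorted(ratings.items(), key=lambda kv: mean(kv[1]), reverse=True)
for violation, scores in consensus:
    print(f"{mean(scores):.2f}  {violation}")
```

Ranking by consensus severity is what lets a team prioritize which violations to turn into recommendations first.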

As a result, we make the following recommendations:

  • Revamp header menu navigation
  • Introduce cues to support system status visibility
  • Optimize and tailor search results
  • Create a site map and more dynamic, specific Q&A features

These recommendations are intended to support the overarching goals of the project: to bring fresh ideas to the new site, improve conversion rates for turning site visitors into paying customers and brand loyalists, and build a more engaging community.

Usability Testing

We used Calendly to recruit usability test participants who considered themselves professional developers and had relative familiarity with usability testing. These individuals came from diverse levels of development experience, ages, and genders. We then scheduled remote meetings with our five participants via Google Meet, which let us record each session for analysis and allowed participants to share their screen and webcam during the test, so that we could correlate facial expressions with what they were looking at on screen. During each session, we asked participants to complete six tasks, compelling them to explore Cloud, Linux, Blockchain, AI & Watson, and Node.js.

We completed each usability test with a post-test questionnaire to help us develop findings and a survey that the client designed for their parallel testing.

These usability tests yielded the following key findings:

  • Visited links don’t provide feedback that they’ve been visited
  • Internal names are confusing to users
  • Tutorials, series, and courses are great content, well liked by users, but lack a clear learning pathway

And the following recommendations were made in response to our findings:

  • Change the color of visited links to provide feedback to the user as to where they’ve been
  • Simplify naming conventions, supply users with an acronym decoder
  • Implement gamification on learning pathways to improve user retention


There were a number of limitations that impacted our research. The methods we employed, the order in which we employed them, and the time we had to conduct the research and analysis were all decided by our instructional team, as this project was part of a client-based course, “Need Assessment and Usability Evaluation,” in the University of Michigan School of Information’s master’s program.

Each of the methods we employed made up a separate assignment over the course of the semester, with the intention of culminating in a final video presentation for the client as a wrap-up. For each assignment, we reported and presented our findings in greater detail. This case study is a summary overview of four months’ worth of work.

Next, in the middle of the project, COVID-19 restrictions forced the project completely virtual. While the client work was intended to be fully remote from the start, campus closures meant our team’s own collaboration and meetings also had to move entirely online. Additionally, one member of our team contracted COVID-19 following an international trip for Spring Break, which further hindered our ability to formally wrap up the project.

The client, IBM Developer, was very supportive and understanding throughout this ordeal and expressed a desire to let us off the hook, so to speak, in terms of a final deliverable.


In conclusion, our recommendations address our key findings.

We produced an interaction map as a static representation of the site’s information architecture and interaction patterns. We created three provisional personas based on semi-structured user interviews. Following the interviews, we conducted a comparative evaluation of multiple tiers of competitors, spanning an array of dimensions involving features, interaction patterns, and business models that impact user experience. Next, we deployed a survey to gather demographic data and uncover relevant information in the context of IBM Developer’s “Getting Started,” community engagement, and customer retention. To evaluate the information architecture and gain insight into how users experience “Technology Topics,” we conducted a heuristic evaluation. Finally, we facilitated five usability tests to get feedback and derive insights into how usable the live site is for real users.