Tag: UX Research

  • Ethnographic Encounters of the HCI kind in Bioastronautics

    Bioastronautics is a branch of aerospace engineering that specializes in the study and support of life in space. Bioastronautics researchers are interested in the biological, behavioral, medical, and material domains of organisms in spaceflight. Technological advances have led to a deepened interest, and a new urgency, in the domain of space habitation. The goal of NASA’s Artemis Program is to establish a sustainable presence on the Moon in order to learn how to establish a sustainable colony on Mars. A primary objective in the design and development of new technology to support life in space is software that can support astronaut autonomy. This means that, for the first time, astronauts themselves have to be able to use these tools to carry out missions safely and effectively, without assistance from Ground Control.

    Photo by Adam Miller on Unsplash

    As humans seek to expand out into the solar system, the tools, technologies, and habitats needed to support life in space have to incorporate good HCI principles. How do bioastronautics researchers conceive of user needs, preferences, and comforts when designing interfaces and habitats for future spaceflight and habitation? Most bioastronautics researchers will never experience the environment they are designing for, and according to the 2013 evidence report titled “Risk of Inadequate HCI” issued by NASA, “HCI has rarely been studied in operational spaceflight, and detailed performance data that would support evaluation of HCI have not been collected” (Holden, Ezer, & Vos, 2013). The report goes on to note the additional concern that potential or real HCI issues in past missions have been masked by constant contact with Ground Control (Holden et al., 2013).

    Because life as we know it cannot survive unaided in space, everything used to put humans in spaceflight and habitation is a concern of bioastronautics. Due to the relatively short distance and duration of missions to date, researchers and engineers in bioastronautics have primarily been concerned with the human factors of hardware and industrial design, ensuring those designs were considerate of human physiological capabilities. As technology advances and we push the boundaries of what is possible, a shift in focus to issues of human-computer interaction is increasingly necessary. While the Space Shuttle was typified by physical switches and buttons, astronauts using exploration vehicles will primarily interact with glass-based interfaces: software displays and controls (Ezer, 2011).

    According to Holden et al. (2013), inadequate HCI presents a risk that could lead to a wide range of consequences. While the amount of information that must be displayed is increasing, the real estate in which to display it remains limited. Furthermore, as mission distance and length increase, immediate access to ground support will continue to decrease, meaning there won’t be a team of experts on the ground prepared to answer questions, solve challenges, and provide workarounds on the fly. As a result, the design of computing and information systems needs to take this into account, providing support and just-in-time training for the autonomous astronaut when a mission isn’t going according to plan. In terms of HCI, this means accounting for environmental and contextual challenges to ensure that interfaces impose low cognitive load and remain usable with pressurized gloves, in microgravity, and amid persistent vibration (Holden et al., 2013).

    Background

    The term bioastronautics first appears in the literature in a 1962 survey published by Cornell Aeronautical Laboratory, which defines the term as the study of life in space; the author notes that the discipline was so new there had hardly been time to come up with a name (White, 1962). For context, bioastronautics was born during both the Cold War (1947–1991) and the Space Race (1955–1975) between the United States and the Soviet Union. The primary intent behind the discipline remains today what it was then: to produce systems and technology capable of supporting and sustaining life in microgravity, and to understand the effects of microgravity on the human body. In this regard, much of the research has centered on medical concerns.

    Definition

    “Bioastronautics encompasses biological, behavioral and medical aspects governing humans and other living organisms in a space flight environment; and includes design of payloads, spacecraft habitats, and life support systems. In short, this focus area spans the study and support of life in space” (UC Boulder Aerospace Engineering Sciences, 2020).

    Main Body

    When space human factors researchers consider mission design and work practices, they are especially considerate of the roles of the various crew members, their physical and mental capabilities, and the requirements for life support, space, and training (Woolford & Bond, 1999). For twelve days in 2002, computer/cognitive scientist William Clancey led an ethnographic research study as a closed simulation in the Mars Desert Research Station for NASA-Ames Research Center and the Institute for Human and Machine Cognition. The study was a methodological experiment in participant observation and work practice analysis. It gathered qualitative data on productivity, habitat design, schedules, roles, and so on, and sought to learn whether ethnography could be applied to a closed simulation. Serving as the crew commander, could one also conduct ethnography through participant observation? According to Clancey, one can (Clancey, 2004).

    In addition to Clancey’s study, there are a number of other simulations for space habitat research, such as Stuster’s Bold Endeavors (1996) in a polar environment, the Lunar-Mars Life Support Test Project in a closed chamber, the NASA Extreme Environment Mission Operations Project (NEEMO) in an underwater habitat (2004), and BASALT (Biologic Analog Science Associated with Lava Terrains). Analog projects like these are designed to simulate on Earth certain environmental variables to test concepts of operations in regard to hardware, software, and data systems, as well as communication protocols. For these projects, the primary focus is the EVA, or extravehicular activity (Beaton et al., 2019). The EVA astronaut is the one who dons the spacesuit and exits the living quarters to explore, conduct research, or engage in repair tasks. When an astronaut exits the International Space Station to change a battery or make some other upgrade or repair, that’s an EVA.

    With Olson (2010), we get a glimpse into the ecologies and human cosmologies of American astronautics. Through her ethnographic fieldwork, conducted primarily at NASA’s Johnson Space Center and submitted for her Ph.D. in Medical Anthropology, Olson argues that ecology and cosmology are co-constituting. Combining participant observation with archival data, Olson evaluates how astronautics practitioners come to know and work with the “human environment”. This work served to highlight how astronautics was connected to a broader array of environmental science and technology (Olson, 2010). What does it mean for an environment to be at once sociopolitical, technoscientific, symbolic, and transcendental? With this, Olson asks what role astronautics has in making ecological knowledge, and how it can inform concepts like adaptation and evolution and make them scalable.

    In an article published the same year, Olson (2010) argues that in extreme environments such as outer space, “the concept of environment cannot be bracketed out from life processes; as a result, investments of power and knowledge shift from life itself to the sites of interface among living things, technologies, and environments” (Olson, 2010).

    Gaps

    While there have been a few attempts to conduct ethnography in mission and environmental simulation, none of these attempts focused on human-computer interaction. Similarly, while Olson’s ethnography focused on NASA researchers, its purpose was to inform medical anthropology. Like Olson, I contend that as technology advances, it becomes clearer how life, technology, and the environment are interrelated. As a result, human-computer interaction is a central facet of successful mission planning and execution for the autonomous astronaut. It is, therefore, crucial to understand how researchers interested in the bioastronautics of spaceflight and habitation conceive of human-computer interaction and of user needs, preferences, and comforts.

    Bibliography

    Beaton, K., Chappell, S., Abercromby, A., Miller, M., Nawotniak, S. K., Brady, A., . . . Lim, D. (2019). Assessing the Acceptability of Science Operations Concepts and the Level of Mission Enhancement of Capabilities for Human Mars Exploration Extravehicular Activity. Astrobiology, 19(3), 321–346.

    Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.

    Ezer, N. (2011). Human interaction within the “Glass cockpit”: Human Engineering of Orion display formats. Proceedings from the 18th IAA Human in Space Symposium (#2324). Houston, TX.: International Academy of Astronautics.

    Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1–46.

    Olson, V. A. (2010). American Extreme: An Ethnography of Astronautical Visions and Ecologies. Ann Arbor, MI: UMI Dissertation Publishing.

    Olson, V. A. (2010). The Ecobiopolitics of Space Biomedicine. Medical Anthropology, 170–193.

    UC Boulder Aerospace Engineering Sciences. (2020, 04 13). Bioastronautics. Retrieved from University of Colorado Boulder: https://www.colorado.edu/bioastronautics/

    White, W. J. (1962). A Survey of Bioastronautics. Buffalo, NY: Cornell Aeronautical Laboratory.

    Woolford, B., & Bond, R. (1999). Human factors of crewed spaceflight. In W. Larson, & L. Pranke, Human Spaceflight: Mission Analysis and Design (pp. 133–153). New York: McGraw-Hill.

  • Use heuristic evaluations prior to usability testing to improve ROI

    Catch low-hanging fruit with heuristics so that users can reveal deeper insights in usability tests

    Photo by Scott Graham on Unsplash

    User experience research tends to break down into two broad categories: field studies and usability testing. Or we might refer to these as needs assessment and usability evaluation. Either way, heuristic evaluations fall under the umbrella of usability methods. The method was introduced by Nielsen and Molich (1990) and popularized as a means of discount usability evaluation, aimed at software startups that didn’t have the budget for full-scale user research. Today, user research is more common, and usability testing is the gold standard. If you want to maximize your return on investment (ROI) for usability testing, you’ll want to perform a heuristic evaluation first. This article will explain what a heuristic evaluation is, how to do one, the pros and cons of the method, and why you should do it before usability testing to maximize the return on investment for both.

    In Nielsen’s own words:


    “Heuristic evaluation is a usability engineering method for finding the usability problems in a user interface design so that they can be attended to as part of an iterative design process. Heuristic evaluation involves having a small set of evaluators examine the interface and judge its compliance with recognized usability principles (the ‘heuristics’).” ~ Jakob Nielsen

    Defining ‘heuristic’

    With that, let us simply define a heuristic as a usability principle, or “rule of thumb”. Although when we refer to heuristics in UX (rather than in AI) we are talking about usability, a designer could theoretically employ the same process to judge a product’s compliance with a design system.

    As an example, let us say you have an app that was designed without a system in place. Now your company is using a system based on Material Design. You go to the Material website and create a list of their guidelines with which to judge your UI’s compliance. Those guidelines can serve as your “heuristics”, at least in terms of the design.

    Remember, the heuristics we are talking about in this article are for usability engineering.

    Nielsen developed his heuristics in the early ’90s, distilling a list of nearly 300 known usability issues down to 10 overarching principles. And although they are still widely used today, many user researchers are beginning to develop their own heuristics, focused on modern technology and the issues that come with it. We didn’t have the powerful mobile and smart technology back then that we take for granted today, and the computing technology we did have wasn’t widespread enough for accessibility to be a mainstream concern for software companies.

    Nowadays, we have a variety of heuristic sets to choose from. For information on some of the more popular sets, refer to Norbi Gaal’s article, “Heuristic Analysis in the design process”.

    In addition to the sets referenced by Norbi, there are a few other specialized sets worth noting here:

    Developing heuristics

    While developing your own heuristics may be encouraged, care must be taken when selecting appropriate principles. This is where prior user research can inform which heuristics are selected: what are your users’ needs, preferences, and pain points that you are trying to support and solve for? Furthermore, and perhaps most importantly, you will want to pilot your heuristics in the same fashion as you would pilot your interviews, surveys, and usability tests.

    Quiñones et al. (2018) describe a methodology for developing heuristics. This is an eight-step process through which researchers will:

    1. Explore: Perform a literature review.
    2. Experiment: Analyze data from different experiments to collect additional information.
    3. Describe: Select and prioritize the most important topics revealed from 1–2.
    4. Correlate: Match the features of the specific domain with the usability/UX attributes and existing heuristics.
    5. Select: Keep, adapt, create, and eliminate heuristics obtained from 1–4.
    6. Specify: Formally specify the new set of heuristics.
    7. Validate: Validate the heuristics through experimentation in terms of effectiveness and efficiency in evaluating the specific application.
    8. Refine: Refine and improve the new heuristics based on feedback from 7.

    As you can imagine, this process isn’t a quick and dirty means of getting feedback; rather, it’s an entire project in itself.

    The Evaluation Process

    A heuristic evaluation is what is referred to as an expert review. As with other expert reviews, a heuristic evaluation is intended to be a quick and dirty method to uncover issues more cheaply than usability testing in terms of both time and money. If you’re not going through the process of developing a new set of heuristics as outlined above, the entire process should take only about a week, with the actual evaluation taking no more than a day or two. Instead of putting your design in front of recruited users, you recruit 3–5 evaluators to review your design against the chosen heuristics.

    The heuristic evaluation process
    • Familiarize — If you have multiple evaluators (as you should!) then you are going to want them to devote some time familiarizing themselves with the heuristics you plan to use to conduct the evaluation. This is particularly crucial if you are also expecting them to validate a new set of heuristics.
    • Evaluate — There are a few parts to this stage.
    1. First, and let’s be clear: your evaluators should not have intimate knowledge of your product. Do not recruit people who make design or implementation decisions on this product.
    2. Your evaluators are now familiar with the heuristics; next, let them familiarize themselves with the product. They should spend an hour or two navigating, clicking/tapping buttons, and understanding the basic patterns and flows the user experiences.
    3. Heuristic evaluations are typically conducted in two passes, each anywhere from one to three hours. In the first pass, evaluators interact with the product holistically and note any heuristic violations. In the second pass, they do it all over again, retracing their steps and considering whether any violations from the first pass are false alarms.
    • Rate Severity — This step doesn’t have to be done on its own; evaluators often rate severity at the same time they note a violation, and may change ratings of previously noted violations on the second pass. A standard rating scale comes from Jakob Nielsen:
    0: I don’t agree that this is a usability problem at all
    1: Cosmetic problem — quick fix or ignore unless there’s time
    2: Minor usability problem — low priority
    3: Major usability problem — high priority
    4: Usability catastrophe — must be fixed before release
    • Synthesize and Prioritize Findings — At this stage, the evaluation is complete, and the analysis can begin. The evaluators come together and discuss their findings. Evaluators will create an aggregate list of all noted violations, discuss and identify potential false alarms, and agree upon severity scoring. If they are validating new heuristics, this is also the point at which they will be doing so.
    • Converge on Design Recommendations — Based on a review of the prioritized findings, the evaluators will then brainstorm and converge on recommendations to solve the usability issues uncovered in the heuristic evaluation.
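    To make the synthesis step concrete, here is a minimal sketch of how aggregated findings might be ranked. The violation names, data, and `synthesize` helper are all hypothetical, invented for illustration; real teams typically do this in a spreadsheet, but the logic is the same: pool every evaluator’s notes, track how many evaluators flagged each violation, and sort by average severity.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical findings: each evaluator lists (violation id, heuristic, severity 0-4).
findings = {
    "evaluator_a": [("search-no-feedback", "Visibility of system status", 3),
                    ("jargon-labels", "Match between system and real world", 2)],
    "evaluator_b": [("search-no-feedback", "Visibility of system status", 4),
                    ("jargon-labels", "Match between system and real world", 0)],
    "evaluator_c": [("search-no-feedback", "Visibility of system status", 3)],
}

def synthesize(findings):
    """Aggregate violations across evaluators and rank by mean severity."""
    ratings = defaultdict(list)
    heuristic = {}
    for notes in findings.values():
        for vid, h, sev in notes:
            ratings[vid].append(sev)
            heuristic[vid] = h
    report = [
        {"violation": vid,
         "heuristic": heuristic[vid],
         "mean_severity": round(mean(sevs), 1),
         "noted_by": len(sevs)}
        for vid, sevs in ratings.items()
    ]
    # Highest mean severity first; ties broken by how many evaluators noted it.
    return sorted(report, key=lambda r: (-r["mean_severity"], -r["noted_by"]))

for row in synthesize(findings):
    print(row)
```

    A low mean paired with a low `noted_by` count (as with “jargon-labels” above, which one evaluator rated 0) is exactly the pattern the group should discuss as a possible false alarm.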

    Why 3–5 evaluators

    Depending on your particular circumstances and the experience of the evaluators at your disposal, it may be possible to produce meaningful findings with a single evaluator. However, there are a few reasons for having multiple evaluators. Nielsen found through his own research on the method that a single evaluator will only uncover about 35% of the issues present in a system (Nielsen, 1994). Furthermore, different evaluators tend to find different problems. From the curve shown below, Nielsen demonstrates that the optimal number of evaluators is 3–5. While you may uncover some additional issues by adding more than five evaluators, depending on how critical and complex the system is, each additional evaluator is increasingly likely to find issues that overlap with those already found. In other words, there are diminishing returns in a cost-benefit analysis, as shown below.

    Source: Nielsen (1994) Curve showing the proportion of usability problems in an interface found by heuristic evaluation using various numbers of evaluators. The curve represents the average of six case studies of heuristic evaluation.
    Source: Nielsen (1994) Curve showing how many times the benefits are greater than the costs for heuristic evaluation of a sample project using the assumptions discussed in the text. The optimal number of evaluators in this example is four, with benefits that are 62 times greater than the costs.
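    The shape of that curve follows a simple model: Nielsen (1994) describes the proportion of problems found by i evaluators as 1 − (1 − λ)^i, where λ is the proportion a single evaluator finds. A quick sketch, assuming the ~35% single-evaluator figure cited above (actual values vary by study and evaluator expertise):

```python
def proportion_found(i, lam=0.35):
    # Nielsen's model: each evaluator independently finds a share lam of
    # the problems, so i evaluators together find 1 - (1 - lam)**i.
    return 1 - (1 - lam) ** i

for i in (1, 3, 5, 10):
    print(f"{i:2d} evaluators -> {proportion_found(i):.0%} of problems found")
```

    Under these assumptions, three evaluators already find roughly three-quarters of the problems, which is why the cost-benefit curve peaks in the 3–5 range rather than climbing with every added evaluator.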

    Pros and cons

    As with any method, there are of course advantages and disadvantages. This list is derived from the Interaction Design Foundation’s (IDF) article, What is Heuristic Evaluation?

    Pros:

    • Evaluators can focus on specific issues.
    • Evaluators can pinpoint issues early on and determine the impact on overall UX.
    • You can get feedback without the ethical and practical dimensions and subsequent costs associated with usability testing.
    • You can combine it with usability testing.
    • With the appropriate heuristics, evaluators can flag specific issues and help determine optimal solutions.

    Cons:

    • Depending on the evaluator, false alarms (noted issues that aren’t really problems) can diminish the value of the evaluation (Use multiple evaluators!).
    • Standard heuristics may not be appropriate for your system/product — validating new heuristics can be expensive.
    • It can be difficult/expensive to find evaluators who are experts in usability and your system’s domain.
    • The need for multiple evaluators may make it easier and cheaper to stick with usability testing.
    • It’s ultimately a subjective exercise: findings can be biased to the evaluator and lack proof, recommendations may not be actionable.

    Note the pro: “You can combine it with usability testing”. When you’re conducting a usability test, your prototype is your hypothesis. If you implement a heuristic evaluation correctly, you can catch and fix low-hanging fruit in terms of usability issues, thereby refining your hypothesis before you take it to users. Fixing these before testing allows your participants to identify usability issues from the first-person perspective of the persona, rather than recruiting users to find the kinds of issues that you should have caught yourself.

    But let’s not forget the cons. False alarms can be problematic and diminish the overarching value of the evaluation. This is yet another reason why multiple evaluators are crucial to making your heuristic evaluation worthwhile: false alarms can often be identified and disregarded when evaluators come together to synthesize and prioritize findings.

    Conclusion

    Heuristic evaluations are a mainstay of usability engineering and user experience research. Though considered a ‘discount’ method, they require a fair amount of upfront consideration to get the most out of them. Using heuristic evaluations as a precursor to usability testing can help improve the return on investment for both, as every issue uncovered and solved with heuristics frees your users to note other issues from their perspective. In sum, you are not your user, and neither are your evaluators. Using heuristic evaluations in conjunction with usability testing will iron out a lot of the kinks before you show your design to users. With those issues already solved, feedback from usability testing can generate deeper insights to really dial in the design, improving the ROI of both the heuristic evaluation and the usability test.

    Sources

    Bertini, E., Catarci, T., Dix, A., Gabrielli, S., Kimani, S., & Santucci, G. (2009). Appropriating Heuristic Evaluation for Mobile Computing. International Journal of Mobile Human Computer Interaction, 20–41.

    Gaal, N. (2017, 06 19). Heuristic Analysis in the design process. Retrieved from UX Collective: https://uxdesign.cc/heuristic-analysis-in-the-design-process-usability-inspection-methods-d200768eb38d

    Nielsen, J. (1994, 1 1). Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/guerrilla-hci/

    Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI’90 Conf. (Seattle, WA, 1–5 April), 249–256.

    Nielsen, J. (1994, 11 1). How to Conduct a Heuristic Evaluation. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/

    Quiñones, D., Rusu, C., & Rusu, V. (2018). A methodology to develop usability/user experience heuristics. Computer Standards & Interfaces, 109–129.

    Soegaard, M. (2020, 07 19). What is Heuristic Evaluation? Retrieved from Interaction Design Foundation: https://www.interaction-design.org/literature/topics/heuristic-evaluation
