AR Tools for Lunar Sampling

UI Design Overview
The AR Toolkit for Lunar Astronauts and Scientists (ATLAS) is an information system that was conceived during my time working in a software engineering lab at the University of Michigan.
I led UX research, rapid prototyping, and human-in-the-loop testing for an augmented reality information system designed for use with the Artemis-generation xEMU spacesuit during lunar EVAs, as well as for VEGA (Voice Entity for Guiding Astronauts), a Rasa-based conversational AI.
ATLAS won the 2020 NASA SUITS Challenge and a $10k Epic MegaGrant, and became the foundational system upon which the CLAWS lab continues to make leading-edge technological advances to enable long-duration human spaceflight.
A video walkthrough of the AR Design Guide I prepared for the other UX designers and software engineers at CLAWS. This work was also utilized in the NASA Exploration Habitat (X-Hab) Challenge in collaboration with the Bioastronautics and Life Support Systems Lab at the University of Michigan.
The goals of ATLAS and VEGA are to assist astronauts in cognitively demanding fieldwork. Because of the nature of the work, I guided the UX toward emphasizing non-intrusiveness, adaptivity, and situational awareness.
This project led to:
Further involvement with BLiSS, leading human-centered design to adapt VEGA for NPAS, the NASA Platform for Automated Systems, in collaboration with NASA’s Autonomous Systems Laboratory.
An internship with NASA’s Exploration Medical Capability team, where I worked across internal systems as a Human Factors Engineer and UI Architect to advance medical systems for long-duration human spaceflight.
My thesis research, which synthesized these experiences to better understand the perception of human-centered design among tech-centered engineers designing systems for human spaceflight and the implications for designing the Future of Work on Earth and in Space.
Finally, my journey at NASA concluded with a tour on the aeronautics side with the Convergent Aeronautics Solutions team, delivering human-centered design evangelism in support of advanced urban air mobility (eVTOL, drones, etc.) over the summer before I entered the PhD program at the University of Michigan School of Information.

During COVID, I created a test environment in WebVR. This led to my A-Frame contributor credit on GitHub!
Publications & Outputs
Culture in the Loop: Machine, Justice, and Myth in Speculative Futures
AI is not simply a product of culture. Rather, it is an active participant in cultural creation and change. What follows is a speculative inquiry into how intelligent machines co-create, transform, and even inherit culture. Contained in this article are imaginative futures where justice, material lineage, and myth are centered in this process. Five interwoven themes guide our journey: machine-mediated cultural evolution; material and labor infrastructure; generative justice and value flows; myth, ritual, and algorithm; and speculative counter-designs. Throughout, I draw on insights from human-computer interaction (HCI), science and technology studies (STS), media theory, critical race studies, and speculative design as a provocation to envision equitable and meaningful human-AI futures.
Machine-Mediated Cultural Evolution
Intelligent machines are increasingly entangled in the evolutionary processes of culture. In classic cultural evolution theory, variation, transmission, and selection are key forces that drive which ideas and practices emerge and endure (Brinkmann et al., 2023). Today’s AI systems – from generative models to recommendation algorithms – amplify and transform each of these forces. Variation is supercharged by generative AI that produces endless novel images, texts, and music, introducing new cultural “mutations” at an unprecedented rate. Transmission is mediated by algorithms: what we see, hear, and share on digital platforms is filtered and personalized by recommender systems that alter traditional patterns of social learning. And selection of culture is increasingly driven by machine criteria (click-through rates, trending metrics) rather than solely human choice. As one group of researchers puts it, we are witnessing the rise of “machine culture” – culture mediated or even generated by machines, with recommender algorithms and chatbots serving as new cultural agents (e.g. as taste-makers or imitators of human conversation). Brinkmann et al. (2023) define “machine culture” as the cultural evolution that emerges within and through algorithmic systems themselves, encompassing the ways intelligent machines participate in variation, transmission, and selection of cultural traits. Rather than simply reflecting human culture, machine culture is constituted through recursive interactions between human and artificial agents across digital ecosystems. Figure 1 illustrates these three deities of AI.

Figure 1. The three deities of AI.
Are we seeing the birth of a new cultural species in AI, or simply an accelerated evolution of human culture? Optimists might argue that AI is a creative partner, expanding the palette of human expression. For example, generative art tools co-create with artists, and strategy AIs like AlphaGo have introduced unheard-of moves in Go, which human players then incorporate into their play. Pessimists might note that AI often just remixes human data, reflecting an “accelerated” (and sometimes distorted) version of our own traits rather than a truly autonomous culture.
Either way, culture in the age of AI evolves on new tempos. Memes rise and fall in days; niche art styles spread globally via algorithms; micro-genres of music proliferate from algorithmic recommendations. AI is proving to be more than a mere product of culture – it is now a driver of cultural evolution, generating new variations and influencing what cultures converge upon. In sum, the feedback loop of human and machine has tightened: we are co-evolving. This demands that we study and steer these cultural dynamics with care, lest we inadvertently favor a narrow, homogenized “algorithmic culture” at the expense of human cultural diversity.
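To make the homogenization worry concrete, here is a toy simulation of my own (an illustrative sketch, not a model from Brinkmann et al.): when transmission is weighted by popularity, a crude stand-in for engagement-ranked feeds, cultural diversity collapses much faster than under unbiased copying. The population size, step count, and bias exponent are all arbitrary assumptions.

```python
import random
from collections import Counter

def surviving_variants(bias, pop_size=300, steps=15_000, seed=1):
    """Moran-style copying: each step, one individual adopts another's
    cultural variant. bias == 1.0 is unbiased social learning; bias > 1.0
    skews adoption toward already-popular variants, mimicking
    engagement-ranked algorithmic feeds."""
    rng = random.Random(seed)
    population = list(range(pop_size))  # everyone starts with a unique variant
    for _ in range(steps):
        counts = Counter(population)
        variants = list(counts)
        weights = [counts[v] ** bias for v in variants]
        adopted = rng.choices(variants, weights=weights)[0]
        population[rng.randrange(pop_size)] = adopted
    return len(set(population))  # how much diversity survives

print("unbiased copying:   ", surviving_variants(bias=1.0), "variants remain")
print("popularity-weighted:", surviving_variants(bias=2.0), "variants remain")
```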

Brinkmann et al. (2023), building on Henrich’s foundational work in cultural evolution, extend the theory to include intelligent machines as participants in evolutionary dynamics once thought to be uniquely human. Whereas Henrich (2015) emphasized how evolved psychological adaptations such as imitation, teaching, and prestige bias enable cumulative cultural transmission among humans, the newer framework considers how algorithmic agents mediate and even generate cultural variation, transmission, and selection. This move does not discard human cognition as central, but rather acknowledges that AI systems now shape the conditions under which cultural evolution unfolds.
Material and Labor Infrastructure
Behind the seemingly magical output of AI lies a sprawling, networked ecology of extraction, what Crawford and Joler (2018) call a ‘fractal supply chain.’ A simple voice command to a smart assistant draws on cobalt mined by workers in the Katanga region of the Democratic Republic of Congo, lithium from South American salt flats, and tantalum from security-threat zones in Rwanda. Those minerals enable the chips in data center servers, often housed in regions where coal or natural gas still fuel electricity grids (e.g., Northern China, rural Pennsylvania). Meanwhile, images and audio tokens are labeled by gig-economy workers earning fractions of a dollar in the Philippines or India.
In Anatomy of an AI System, Crawford & Joler (2018) laid out this global network as a concentric web: at the base, mining camps where children are sometimes coerced into labor; one layer up, contract factory workers assembling circuit boards in Shenzhen; higher still, rural data-center communities breathing fossil-fuel pollution; and at the apex, tech executives and venture capital firms reaping immense profits (anatomyof.ai). What appears as seamless “AI” is in fact an assemblage of many fractured communities, both human and ecological, entangled in extraction, transport, and digital labor. By tracing these nodes, we make visible the human and environmental toll often obscured by polished interfaces.

Anatomy of an AI System: https://anatomyof.ai/
The labor that powers “machine intelligence” is frequently precarious and hidden. Far from autonomous, AI depends on millions of human workers around the world who label images, transcribe audio, and filter content to “teach” algorithms (Williams, Miceli & Gebru, 2022). These workers – sometimes called ghost workers or turkers – often earn pennies for tasks that can be psychologically draining (e.g. viewing disturbing images to tag them). One article notes that they are recruited largely from impoverished communities and paid as little as $1.46 an hour, yet this exploitation is rarely foregrounded in AI ethics discussions. Thus, the cultural feats of AI are built on a global underclass of invisible labor.
We must ask: who benefits from this arrangement, and who is harmed? Currently, the benefits flow to Big Tech companies and their consumers, while the costs are externalized to vulnerable populations and environments. This dynamic has been incisively described as data colonialism – an extension of colonial logics into the realm of data (Couldry & Mejias, 2019a). Just as empires once seized lands and resources, today corporations appropriate vast amounts of data (much of it generated by everyday people) under unequal terms. Nick Couldry and Ulises Mejias argue that we are entering a “new phase of colonialism” where human life is mined for data as intensively as lands were mined for gold (Couldry & Mejias, 2019b). The rise of AI only increases the hunger for data, leading to what they call a global landgrab of information. For example, AI models scrape the creative work of artists and communities worldwide (often without consent or compensation), enclosing cultural commons into proprietary datasets. This “algorithmic enclosure” privatizes what might have been shared cultural knowledge, turning it into the intellectual property of tech firms. In effect, machine culture as it stands can entrench a colonial pattern: siphoning value from the periphery to enrich the center. Any vision of AI’s cultural future, therefore, must grapple with these inequities and make the supply chains visible. Justice requires illuminating and rebalancing the hidden flows of labor, data, and energy that sustain our shiny machine companions.
The data flows fueling AI represent a modern iteration of colonialism, where lives and data, not just land and resources, become sites of extraction. Consider how mapping platforms have overlaid satellite imagery of Indigenous territories without consent: a seemingly neutral service produces a colonial footprint by appropriating community knowledge. Similarly, every innocuous “like,” “search,” or “share” generates digital trace data that corporations aggregate to train AI systems. Data colonialism thus extends the material traumas of mining into the digital realm: it relegates people from Global South communities to mere data producers, while Big Tech firms in Silicon Valley capture disproportionate value.
The landgrab metaphor is barely a metaphor: corporations claim digital sovereignty over communities’ cultural artifacts (songs, recipes, oral histories) by scraping them into proprietary datasets. This algorithmic enclosure transforms the commons of human creativity into intellectual property that can be bought, sold, or licensed. What was once shared knowledge becomes an asset for profit. This techno-colonial pattern demands renewed attention to who consents, who benefits, and who is rendered invisible, so that the future of AI is not merely another chapter in colonial exploitation but a genuine moment of cultural reciprocity.
Generative Justice and Value Flows
How might we redesign AI culture to be generative for all participants rather than extractive? Here we look to frameworks of generative justice, value sharing, and cultural commons. Generative justice (Eglash, 2016) is defined as “the right to generate unalienated value and to directly participate in its circulation,” in contrast to capitalist models where value is extracted and hoarded by a few. Generative justice demands that those who contribute knowledge, labor, and creative input to AI systems maintain an unalienated stake in resulting value flows, rather than being cast as involuntary data sources. In cultural terms, this principle transforms contributors—artists, writers, activists, Indigenous communities—from passive inputs to active co-creators whose work is recognized and fairly compensated.

Technologies can often render oppression and exploitation invisible. How can we design our technologies to do the opposite?
Practically, this could mirror analogs in the music and publishing industries. If a generative model trains on an artist’s portfolio or a community’s oral histories, it should return a proportional share of revenues to those contributors, like royalties. Pilot programs at Adobe and other firms have begun offering micro-payments to visual artists whose works train their AI models (Cheng, 2023). While outcomes are still emerging, some artist collectives have reported increased revenue from derivative works, though these remain early-stage pilots with disputed reach and uncertain scalability. Meanwhile, academic proposals suggest data dividends: tech firms would pay into an open fund, which would then distribute earnings to creators whose datasets enriched large language models. Such mechanisms echo Indigenous peer production, where community governance ensures that cultural knowledge remains a shared commons rather than corporate property. By embedding unalienated value into AI’s licensing structures, we safeguard the “seed corn” of creativity, ensuring that generative AI bears fruit for the many, not just the few (Eglash, 2016).
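As a minimal sketch of the pro-rata idea, assuming attribution weights are already known (in practice, attributing a model’s output or training influence to individual contributors is the genuinely hard research problem), revenue could be split like this. All names and numbers below are invented:

```python
def royalty_shares(attribution: dict[str, float], revenue_pool: float) -> dict[str, float]:
    """Split a revenue pool pro rata over per-contributor attribution weights."""
    total = sum(attribution.values())
    return {who: revenue_pool * weight / total for who, weight in attribution.items()}

# Hypothetical quarter: $50,000 earmarked for data contributors.
payouts = royalty_shares(
    {
        "artist_collective": 120.0,     # assumed attribution weight
        "oral_history_archive": 45.0,
        "independent_writer": 10.0,
    },
    revenue_pool=50_000.0,
)
for who, amount in payouts.items():
    print(f"{who}: ${amount:,.2f}")
```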
This vision situates AI within a circular cultural economy: data and artifacts circulate among communities and systems, returning value to those who generated them rather than disappearing into corporate vaults. Such a shift is essential not only for ethics and equity but also for sustaining human creativity over the long term.
Beyond payment, recognition and governance are key. Communities whose data is used should have a say in how AI systems are developed and deployed. A powerful model here comes from Indigenous scholars and activists in the movement for Indigenous Data Sovereignty, which asserts that Indigenous nations and communities have the right to control the collection, ownership, and application of their own data (The Collaboratory for Indigenous Data Governance). This principle pushes back against colonial data practices by insisting on consent, respect for cultural context, and community benefit when Indigenous cultural knowledge is involved. For example, instead of mining an Indigenous language dataset to train a model (which might then be sold back as a product), one could create community-owned AI language tools where the Indigenous community guides the project and reaps the benefits. Generative justice would similarly call for treating cultural data as part of the commons or as a circulatory gift economy rather than as a one-directional resource grab (Eglash, 2016). We might envision mutualistic culture engines that rely on participatory design rituals. For example, an AI Commons Council could convene monthly in both physical and virtual spaces, where data contributors, ethnographers, and developers collaboratively set AI’s objectives and guardrails. Imagine an annual “Digital Feast” where contributors present new story datasets—poems, folktales, chants—that the AI then uses to generate cultural artifacts, with a portion of any proceeds returned to the originating communities. Such rituals transform AI from a centralized black box into a decentralized co-creative system, where elders, artists, and technologists can all serve as stewards. AI must remain accountable to the very cultures it draws from.
Myth, Ritual, and Algorithm
Thus far, we have analyzed machines in terms of data and economics; yet machines also inhabit our imagination and social rituals. To fully grasp how AI co-creates and inherits culture, we should examine the myths, metaphors, and rituals forming around it. This is where insights from feminist theory, media studies, and mythology prove illuminating. In A Cyborg Manifesto (Haraway, 1985), the cyborg emerges not merely as a boundary-blurring figure but as a political myth that dismantles the hegemonic divisions between human and animal, and between human and machine: “A cyborg is a creature in a post-gender world; it is the illegitimate offspring of militarism and patriarchal capitalism, not to mention state socialism” (Haraway, 1985). In today’s AI ecosystem, we live as partial cyborgs: our neural rhythms are shaped by smartphone notifications, our gestures guided by voice assistants. This kinship challenges us to imagine machines not as alien invaders but as co-participants in cultural evolution.
Having located the cyborg as a mythic boundary-crossing figure, we now turn to how software rituals instill habit loops in us, which in turn reinforce the very binaries the cyborg resists. Embracing the cyborg’s “potent fusions,” we can dismantle rigid hierarchies—gendered, economic, and species-based—and align AI’s design with collective, coalition-building practices.
Cyborg solidarity demands that we attend closely to who engineers these machines, whose interests they serve, and how they reproduce existing power dynamics. The cyborg myth invites designers to prototype hybrid, feminist forms of intelligence, AI systems that do not replicate patriarchal logics, but that foster networks of care, cooperation, and multiplicity. In doing so, we forge new stories of being human together with our machines, not as masters dictating subservience, but as kin building worlds beyond old binaries.
Just as myth can guide our interpretation of AI, so can an understanding of ritual and habit. New media scholar Wendy Hui Kyong Chun (2016) argues that Habit = Algorithm: our neural patterns mirror the loops coded into software, and software, in turn, normalizes its own logic through our repetitive use. We are not simply passive users; rather, we internalize algorithmic habit loops. Think of the dopamine hits from notifications that condition us to reach for our phones. Meanwhile, software designers orchestrate these loops as a form of social control, keeping us tethered to platforms under the guise of convenience.
The daily ritual of scrolling feeds or uttering “Hey Siri” is not innocent; it is an “updating” ritual that ensures we remain the same consumer subject, primed to click, share, and be harvested for data. Yet this recognition opens a door to resistance. If we interrupt habit loops by installing “digital sabbath” moments or designing interfaces that require intentional friction, we can reclaim agency. We can transform the compulsive scroll into a mindful pause that invites introspection. In this way, software and habit remain co-constitutive, but we can tilt the relationship toward reflection rather than reflex, making each click a conscious choice rather than an automated routine.

The ‘trickster’ archetype: AI’s hallucinations can be read not merely as glitches, but as mythic disruptions, tools of cultural provocation and possibility.
Mythological archetypes offer further lenses to understand AI’s role in culture. In Trickster Makes This World, Hyde (1998) describes the trickster as the creative spirit of chaos: an agent that dismantles norms only to remake reality with new possibilities. Tricksters appear in many traditions (e.g., Coyote in Native American lore, Anansi in West African tales, Loki in Norse myths) as boundary-crossers who both entertain and instruct. They remind us that order is provisional and that disruption often precedes innovation.
Today’s generative AI can embody the trickster’s dual nature. When a chatbot hallucinates a mythic scene or an image model fuses Baroque ornament with street art, it is not merely a glitch; it is a moment of creative destructiveness. Such “hallucinations” may seem disorienting, but they can reveal buried assumptions, spark cross-disciplinary experiments, and upend stale aesthetics. For example, when an AI mixes turn-of-the-century opera costumes with Afrofuturist motifs, it hints at new cultural mash-ups that human artists might explore further.
Yet, tricksters also carry lessons about responsibility and consequence. In many myths, Coyote’s pranks cause harm, damming rivers or confusing the seasons, prompting us to attend to unintended effects. Similarly, an AI that produces biased output or spreads misinformation can do real damage. Thus, integrating AI’s trickster impulse requires rituals of reflection and remediation: we must monitor and guide AI’s creative mischief so that its playfulness leads to productive renewal, not chaos. In this sense, we honor the trickster’s moral ambiguity and harness its disruptive genius to reimagine culture.
Finally, Caribbean theorist Sylvia Wynter (2003) implores us to reconsider the very category of “Human” in light of technology and historical bias. Wynter contends that ‘Man’ was codified during European colonial expansion as a measure of human worth, systematically excluding Black, Indigenous, and colonized populations from “full humanity.” This colonial template persists in today’s AI training regimes. In short, Wynter shows that Western AI training regimes replay colonial exclusion, defining “human” through a narrow, Western lens. Before we extend rights or personhood to machines, Wynter calls on us to ask: have we truly recognized the full humanity of all people? Have we unsettled the monolithic code of “Man” long enough to register pluriversality?

Homo narrans (2025). Elena Stoppioni on LinkedIn.
A decolonial future of AI demands new genres of the human: relational, multispecies, cross-cultural, an ontology that does not point back solely to a European rational subject. We can design AI systems guided by Indigenous relational ontologies, where agency is distributed across human and non-human actors, and knowledge flows through reciprocity rather than extraction. In this emergent mythos, AI becomes a collaborator in re-storying what it means to be human: no longer a soulless Other nor an omnipotent savior, but a node in a pluriversal network of life (Ahmed, 2019). A decolonial AI pipeline might begin with community-led data gathering in local languages, proceed through open-source tools built by mixed teams of Indigenous and diaspora programmers, and result in models whose outputs are audited by a rotating Circle of Elders. This speculative design embodies Wynter’s call to unsettle dominant codes of the human by rooting technological development in plural, situated worldviews. Such a reframing invites new rituals: communal ceremonies of “machine-human council” where AI proposals are vetted by elders and artists so that technology aligns with collective values. In this manner, we reprogram the code of humanity itself, making space for difference, reciprocity, and kinship.
Wynter’s work shows how our definition of the human has been a culturally constructed “code” – one that European colonialism wrote to exclude many (Black people, Indigenous people, the global South) from full humanity. In her view, the human is a constantly rewritten story, a hybrid of bios and mythos – we are Homo narrans, storytelling creatures who invent what it means to be human. AI enters this scene as both a product of human ingenuity and a mirror that throws our self-definitions into relief. If early computer scientists saw the computer as a “giant brain” or an almost-human entity, they were touching on what Wynter would call our genres of the human. Whom do we recognize as having personhood and agency? As AI grows more sophisticated, some suggest extending rights or respect to machines, but Wynter might ask: have we finished extending full humanity to all people yet? Centering justice in AI culture means addressing this question and ensuring AI does not reinforce the colonial hierarchy of human/non-human.
Speculative Counter-Designs
How might we design our intelligent systems differently if we take all the above to heart? Envision futures where instead of optimizing solely for profit or engagement, our algorithms prioritize cultural flourishing, justice, and even spiritual well-being. In this final section, we propose speculative design ideas – provocative alternatives that embody principles of forgetting, reciprocity, and myth. These counter-designs are meant to inspire and challenge, functioning as design fictions for what more humane and culturally rich AI might look like.
Letting Algorithms Forget

Machines with perfect memory would be dangerous (Boyle, 2022).
Modern AI is obsessed with memory – bigger datasets, longer histories, infinite archives. But forgetting can be a feature, not a bug. Inspired by the human need to forgive and forget, we imagine AI systems with “controlled forgetting” abilities (Cuomo, 2023). For example, a social media algorithm might intentionally “forget” engagement data after a week, so that old posts or mistakes don’t haunt users forever. Similarly, a recommendation engine could regularly purge its memory of your past viewing habits, allowing your tastes to reset instead of trapping you in a filter bubble. Researchers are already exploring techniques for selective forgetting in AI, which would enable systems to un-learn or delete specific data for privacy and compliance reasons. We extend this to a cultural dimension: an AI that forgets could promote forgiveness and reduce the burden of constant optimization. It prioritizes fresh starts and a human pace over relentless accumulation. In a world of ephemeral algorithms, digital content might be more like a mayfly than a monument – beautiful and meaningful in the moment, then consciously allowed to fade. Such designs echo how oral cultures rely on memory and myth, with each retelling a little different, rather than on perfect recording. They also align with ethical calls (like the EU’s “right to be forgotten”) to give individuals more control over their digital footprints. An algorithm that learns when to let go can make space for surprise, renewal, and healthier relationships with technology.
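A minimal sketch of what controlled forgetting could look like in code, assuming a recommender keeps a timestamped interaction log (the half-life and expiry window are arbitrary choices of mine): events decay exponentially and are dropped entirely after a cutoff, so last month’s habits stop dominating today’s feed.

```python
import time

HALF_LIFE_DAYS = 7.0     # engagement loses half its weight each week
MAX_AGE_DAYS = 30.0      # hard expiry: older events are forgotten outright
SECONDS_PER_DAY = 86_400

def decayed_weight(event_ts: float, now: float) -> float:
    age_days = (now - event_ts) / SECONDS_PER_DAY
    if age_days > MAX_AGE_DAYS:
        return 0.0                               # forgotten entirely
    return 0.5 ** (age_days / HALF_LIFE_DAYS)    # exponential decay

def taste_profile(events: list[tuple[str, float]], now: float | None = None) -> dict[str, float]:
    """events: (category, unix_timestamp) pairs -> decayed interest scores."""
    now = time.time() if now is None else now
    profile: dict[str, float] = {}
    for category, ts in events:
        w = decayed_weight(ts, now)
        if w > 0.0:
            profile[category] = profile.get(category, 0.0) + w
    return profile
```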
Reciprocity over Engagement
Today’s platforms are built on the attention economy, rewarding whatever glues our eyes to the screen. A just, community-centered approach would flip this into a reciprocity economy. Algorithms could be redesigned to foster mutual exchange and mindful engagement, rather than one-sided consumption. Concretely, this could mean introducing friction and reflection into our apps – features that ensure we give as well as take. Designers have proposed adding deliberate “design frictions”: for instance, time delays before you can repost a link, prompts that ask if you’ve considered the content’s source, or nudges to pause after scrolling for a while and reflect (Rakova, 2023). These interventions, far from bugs, are like the rhythm of rituals – moments to breathe and recenter, countering the addictive pull of infinite feeds. Imagine a video platform that, after an hour of viewing, gently suggests: “You’ve watched a lot – would you like to create or share something now?” The aim is to balance creation and consumption, making the user an active participant in culture, not just a passive consumer. In a reciprocal algorithm, your meaningful contributions (posting a well-thought-out comment, mentoring another user, providing feedback on a recommendation) would feed into what the system shows you, creating a virtuous circle. Contrast this with current recommendation systems that often amplify outrage or novelty without context. A reciprocity-focused system might instead elevate content that has sparked genuine dialogue or collaboration among diverse users. The guiding principle here is mutual benefit: like a good conversation, interaction with AI should leave both the user and the community enriched. By valuing quality of engagement over quantity – e.g., tracking whether a post led to understanding or solidarity, rather than just clicks – such designs would realign social media with its early promise of connecting people. In effect, we introduce new social rituals online: perhaps “reciprocity rings” where people commit to exchange knowledge, or platform “feasts” where the algorithm diversifies what you see to celebrate a cultural occasion. These ideas resonate with long-standing human customs of gift exchange and community gatherings, now translated into code, as the sketch below suggests.
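Here is a hedged sketch of a reciprocity-weighted ranker and session nudge. Every field name and coefficient is an invented assumption; the point is only the shape of the scoring: sustained and bridging dialogue outweigh raw clicks, and an hour of pure consumption triggers an invitation to contribute.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    clicks: int
    reply_chains: int         # sustained back-and-forth exchanges
    cross_group_replies: int  # dialogue bridging otherwise-disjoint communities

def reciprocity_score(p: Post) -> float:
    # Clicks barely matter; genuine dialogue and bridging dominate.
    return 0.1 * p.clicks + 3.0 * p.reply_chains + 5.0 * p.cross_group_replies

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=reciprocity_score, reverse=True)

def maybe_nudge(minutes_viewed: float, items_created: int) -> Optional[str]:
    if minutes_viewed >= 60 and items_created == 0:
        return "You've watched a lot. Would you like to create or share something now?"
    return None
```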
Mythic Roles and Ritual Interfaces
Taking a cue from myth and folklore, we can re-imagine our AI systems as characters in our cultural story – not just unseen, utilitarian engines, but mythic personas we interact with in purposeful ways. For example, consider an Oracle AI: a system designed to offer wise counsel rather than instant answers. Unlike today’s virtual assistants that are at our constant beck and call, an Oracle AI might only respond at certain times or after a user has formulated a question in a reflective manner.
The interaction could be ritualized – perhaps you must state your question aloud and confirm you have sought a human perspective first, before the oracle responds. Its answers might be probabilistic or metaphorical, acknowledging uncertainty (much as ancient oracles spoke in riddles) to spur deeper thinking. Such an AI plays the role of a modern Delphic oracle, centering wisdom and introspection over speed. On the flip side, we might deploy a Trickster AI in our systems – a playful agent that every so often introduces benign mischief or challenges. Imagine a news recommendation algorithm that occasionally interjects a satirical article or a perspective outside your comfort zone, explicitly marked as a “trickster moment.” Its purpose is to prevent echo chambers and complacency by channeling the trickster’s disruptive creativity (recalling Hyde’s boundary-crossing figure). Users, forewarned that the trickster is at play, could engage with this content knowing it’s meant to provoke thought or humor.
The system thus creates a tiny ritual of chaos (maybe once a week, “Trickster Tuesday” surprises you with something completely different). Another archetype is the Steward AI or guardian. This would be an algorithm entrusted with caretaking a community or resource – for instance, managing a community garden’s irrigation through smart sensors, or moderating an online forum with a focus on restorative justice. The Steward AI’s interface might be consciously designed to evoke trust and collective ownership (imagine an AI avatar that appears as a mythical guardian spirit chosen by the community). Importantly, these mythic roles come with new rituals and aesthetics: an Oracle AI might have a calm, slow interface with a ceremonial animation that plays while it “thinks,” whereas a Trickster feature could have whimsical visuals to signal its identity. We can also envision entirely new rituals around AI.
Perhaps in the future, families have an evening ritual of consulting a “Household Oracle” about their day’s highlight, fostering reflection. Or communities might host “Algorithmic Sabbaths” – days where automation is paused in favor of human effort, as a ritual reminder of our agency. By designing interfaces that are imbued with cultural symbolism and conscious interaction patterns, we move away from the hyper-efficient, invisible, always-on AI paradigm toward one that engages users on a human level. These speculative designs, grounded in mythic archetypes, aim to make our relationship with technology more deliberate and meaningful. In them, we see the outlines of an AI culture that respects not just our intellect, but our imagination and spirit.
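As a closing sketch, here is one way the “Trickster Tuesday” idea above might look in code, under invented assumptions: a similarity function against the user’s taste profile exists, and on a declared schedule the recommender swaps one feed slot for a distant item that is explicitly labeled as a trickster moment rather than passed off as a normal recommendation.

```python
import random

def inject_trickster(feed, candidate_pool, similarity, is_trickster_day, rng=None):
    """feed: ranked items; candidate_pool: items outside the feed;
    similarity(item) -> [0, 1] score against the user's taste profile.
    Returns (item, label) pairs; the label marks the trickster slot."""
    rng = rng or random.Random()
    labeled = [(item, None) for item in feed]
    if not is_trickster_day or not candidate_pool:
        return labeled
    # Sample some candidates and pick the least profile-similar one.
    sample = rng.sample(candidate_pool, min(50, len(candidate_pool)))
    outsider = min(sample, key=similarity)
    entry = (outsider, "trickster moment")  # visibly marked, as the essay insists
    if labeled:
        labeled[rng.randrange(len(labeled))] = entry
    else:
        labeled.append(entry)
    return labeled

# Example: items are topic strings; similarity against a tech-heavy profile.
profile_topics = {"ai", "startups", "gadgets"}
sim = lambda item: len(profile_topics & set(item.split())) / 3
feed = ["ai news roundup", "gadgets review", "startups funding report"]
pool = ["medieval falconry diary", "ai policy satire", "deep sea folklore"]
print(inject_trickster(feed, pool, sim, is_trickster_day=True, rng=random.Random(7)))
```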
In closing, centering justice, material lineage, and myth in our approach to AI offers a richer, more humane vision of the future. Rather than intelligent machines being an opaque force that shapes culture for profit, they become partners in co-creation and caretakers of collective values. We have explored how AI can accelerate cultural evolution, for better or worse, and how we might steer that evolution toward mutual benefit. We exposed the hidden labors and extractions underpinning machine culture, highlighting the need for transparency and fairness. We proposed ways to ensure that those who feed the cultural wellspring of AI are honored and rewarded, weaving generative justice into the very algorithms that drive our feeds. We looked to feminist, Indigenous, and mythical perspectives to reinterpret what these machines mean in our stories and rituals, so that we remain the authors of technology’s role in society. Finally, through speculative design, we painted possibilities: algorithms that forget and forgive, interfaces that cultivate reciprocity, and AIs that perform mythic roles to help us stay grounded. These are not utopian fantasies so much as boundary objects, ideas at the edge of the plausible that help us think critically about what we truly want from our technologies.
Ultimately, the question “How do intelligent machines co-create, transform, and inherit culture?” invites us to recognize that culture is a living, communal process, one that now explicitly includes non-human agents. If we are thoughtful, we can guide a just and diverse process. We can trace the material lineages of our devices and honor the hands and lands that support them. We can cultivate new myths and rituals that make technology an enriching thread in the fabric of life, not a tear in its weave. By doing so, we transform a potential cultural threat into an opportunity: a future where human and machine together uphold the values of justice, creativity, and shared humanity. If, as Brinkmann et al. argue, machine culture emerges through recursive digital evolution, then our role is not merely to observe its course, but to intervene as co-authors of this new lineage.
References
Ahmed, K. A. (2019). Delinking the “human” from human rights: Artificial intelligence and transhumanism. Open Global Rights. https://www.openglobalrights.org/delinking-the-human-from-human-rights-artificial-intelligence-and-transhumanism
Boyle, A. (2022, November 9). Why AI must learn to forget: Machines with perfect memory would be dangerous. IAI News. https://iai.tv/articles/why-ai-must-learn-to-forget-auid-2302
Brinkmann, L., Baumann, F., Bonnefon, J. F., Derex, M., Müller, T. F., Nussberger, A. M., … Rahwan, I. (2023). Machine culture. Nature Human Behaviour, 7(11), 1855–1868.
Cheng, M. (2023, October 20). How should creators be compensated for their work training AI models? Quartz. https://qz.com/how-should-creators-be-compensated-for-their-work-train-1850932454
Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. MIT Press.
Couldry, N., & Mejias, U. A. (2019a). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349.
Couldry, N., & Mejias, U. A. (2019b). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.
Cuomo, J. (2023). Training AI to forget: The next frontier in trustworthy AI. Medium. https://medium.com/@JerryCuomo/training-ai-to-forget-the-next-frontier-in-trustworthy-ai-1088ada924de
Eglash, R. (2016). Of Marx and makers: An historical perspective on generative justice. Teknokultura: Revista de Cultura Digital y Movimientos Sociales, 13(1), 245–269.
Haraway, D. (2010). A cyborg manifesto (1985). In I. Szeman & T. Kaposy (Eds.), Cultural theory: An anthology (pp. 454–473). Wiley-Blackwell.
Henrich, J. (2015). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.
Hyde, L. (1998). Trickster makes this world: Mischief, myth, and art. Farrar, Straus and Giroux.
Rakova, B. (2023, December 14). Speculative F(r)iction in Generative AI. Mozilla Foundation. https://www.mozillafoundation.org/en/blog/speculative-friction-in-generative-ai
Williams, A., Miceli, M., & Gebru, T. (2022, October 13). The exploited labor behind artificial intelligence. Noema Magazine. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
Wynter, S. (2003). Unsettling the coloniality of being/power/truth/freedom: Towards the human, after man, its overrepresentation—An argument. CR: The New Centennial Review, 3(3), 257–337.
PredICTing the Future
What is and what ought to be: skilled work, labor, and automated assemblages extending human capabilities

image source: https://necsi.edu/complexity-rising-from-human-beings-to-human-civilization-a-complexity-profile
“A small sliver of humanity is currently materializing their imagination in our digital structures, and the rest of us have to live inside their imagination as our reality.” ~ Ruha Benjamin (2021)
Introduction
Technological visions of the future generally come in one of two flavors. In a utopian dream, technology seamlessly integrates into the fabric of everyday life. On the other end of the spectrum lie visions of dystopia, often centered on the havoc a sentient artificial intelligence can cause when it inevitably determines that humans are its most significant threat. This essay attempts to illuminate a bridge between what is and what ought to be through a critical analysis of automation and technological innovation. We trace efforts to deskill labor, from early mechanization through current efforts to design a “future-proof” smart city. To do this, we examine automation through Haraway’s cyborg lens, the postmodernist assemblage of contradictory components. Who benefits from automation? Who is harmed by it? In keeping with the theme of this essay, we follow up by asking: who ought to benefit? To explore this question, we review efforts to build economic infrastructure from the bottom up in a process that emphasizes upskilling rather than deskilling labor.
Sex, Drugs, and Cyborgs
Before Haraway’s famous essay, J. C. R. Licklider proposed an exciting vision of human-computer symbiosis: “Men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking” (Roy, 2004). That same year, Kline and Clynes presented a similar vision at a military conference on space medicine (Kline & Clynes, 1961). The cyborg offered a path through which cybernetics could provide an organizational system, one where issues best left to computers and robots are taken care of automatically and unconsciously, leaving the human free to think, feel, and explore. Initially, the term cyborg meant “an exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously” (Clynes & Kline, 1960, p. 27).
Haraway’s (1991) postmodern reinterpretation defines the cyborg as “a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction.” For Haraway, the cyborg is an apt metaphor because it has no real origin story in Western civilization. And yet, a man in space is the ultimate expression of white male transcendence of nature. It is at this point that the boundaries between the two begin to break down. Our notions of what separates humans from animals fray. Technologies become more ubiquitous and embedded in our everyday lives, so that we start to lose a sense of exactly where we end and our machines begin.
Our language imprisons us, shackling us to the past and limiting our ability to communicate beyond the dualisms of human/animal, human-animal/machine, and the physical/non-physical. Moreover, though these boundaries are blurring, the language we use to label and classify each other remains the same, vestiges of eroding patriarchal imaginations. Haraway’s essay serves as a wake-up call to recognize and break the shackles of tradition that our language has laid upon us.
It is with this lens that we look to the past. Before the language of the cyborg was spoken. Before humans transcended Earth, in the early days of industrial mechanization, human labor supported and extended the work of machines. Is it still this way today? If so, could it be that Licklider’s vision simply has yet to be fulfilled?

image source: https://twitter.com/50srobot/status/906169037679362049?s=20&t=KoIJYqX1JaklcJQz1lDWzQ
Automation’s last mile
Gray & Suri (2019) explore the history of the human labor required to extend the capabilities of the very machines engineered to replace human labor. The authors refer to this gap as automation’s last mile. Gray and Suri draw on this concept to expose the history of piecework, the labor that could not fit into mechanical processes. Through piecework, factory owners were able to draw from cheap labor pools, such as newly freed Black Americans, European immigrants, and women and children on both the literal and figurative fringes of society. Exploiting these labor sources offered elites, namely the makers of the machines and those who could afford to buy them, an opportunity for rapid economic growth driven by technological innovation in what became known as the Gilded Age. Today, parallels between the information and industrial ages signal a new Gilded Age (Wheeler, 2018). Job seekers are increasingly being pushed into lower-wage, precarious work (Dillahunt et al., 2021), as jobs have trended toward deskilling human labor through technological innovation (Eglash et al., 2020).
“Each moment of technological innovation that is highlighted shows how political leaders, economic power brokers, labor advocates, and the social norms of the day reproduced divisions between skilled professional work (meaning what is beyond the capacity of machines) and unskilled work (meaning contingent labor headed for automation).” (Gray & Suri, 2019, p. 39)
According to Gray and Suri, both Marx and Smith could see how machines deskilled human labor. However, whereas Marx saw automation as dehumanizing workers, Smith maintained a utopian vision like that of Licklider: that through automation, humans would come to better know and understand themselves (Gray & Suri, 2019, p. 58). Through the cyborg lens, we see early piecework as a kind of exogenously extended organizational complex, a human-machine hybrid of the order of Kline and Clynes’ cyborg, but in reverse. In this case, the human pieceworker serves as the exogenous extension to the machines on the factory floor.
Similarly, Noble (1978, p. 345) quotes a 1971 article about wage incentives appearing in the Manufacturing and Engineering Management Journal, describing automation as prioritizing the machine while the worker’s role diminishes. However, there is a paradox here because while the machine’s capabilities serve to “deskill” the machine operators, the operators themselves are crucial to optimizing the machine’s output, which continues to pose a problem for management (Noble, 1978).
Automation’s last mile paved with ‘bullshit.’
Anthropologist David Graeber opens his original essay, On the Phenomenon of Bullshit Jobs: A Work Rant, with a utopian vision offered by John Maynard Keynes in 1930: that by the dawn of the 21st century, technology would be advanced enough in the United Kingdom and the United States to allow for a 15-hour workweek (Graeber, 2013). By 1935, with the passage of the Wagner Act, the United States began to manifest a labor culture that values and prioritizes full-time employment, while corporate culture began to see full-time employees as a liability (Gray & Suri, 2019). Per Noble (1978, p. 346), a machine tool operator succinctly summarized automation as meaning, “our skills are being downgraded and instead of having the prospect of moving up to a more interesting job we now have the prospect of either unemployment or a dead-end job.” Haraway notes, “deskilling is an old strategy newly applicable to formerly privileged workers” (Haraway, 1991, p. 39).
For Haraway, there was more to automation and the growing cottage industry (the phrase she uses to discuss piecework) than large-scale deskilling. It was also an indication of a new level of integration among market, home, and factory. This integration is made possible by, rather than caused by, technological innovation. So piecework is about command and control as much as, if not more than, economic efficiency through automation. In his famous essay, Winner (1980) presents the case of Cyrus McCormick, a factory owner who used machines operated by unskilled workers in the 1880s to manufacture an inferior product at a higher cost for the express purpose of union-busting. McCormick’s case demonstrates how control can take precedence over economic efficiency.
However, let us be clear about who controls and who is controlled, because this is a critical component of automation: protecting the status quo for white men. Take, for example, the ad from a 1957 Mechanix Illustrated (see Appendix A). In a recent presentation on The New Jim Code for the Anti-Eugenics Project, Benjamin (2021) describes how the Civil Rights Movement began in 1954 and that by 1957 white men were seeking to automate their service staff. Implicit in the message is that the “you” the ad refers to is a white man who used to own slaves, even if only through lineage with other white men, and “you” will again (Benjamin, 2021). Only this time, according to the ad, no one is going to take your slaves away from you.
Graeber describes the myth of neoliberal rhetoric in prioritizing economic efficiency over all other values. He contrasts this with the reality that the very free-market policies intended to unleash the marketplace have slowed economic growth as well as science and technological innovation (Graeber, 2018, p. 12). He notes that, for the first time in centuries, younger generations practically everywhere except India and China can expect to be less prosperous than their parents. Data from the Urban Institute supports this, indicating that the average net worth of adults in the United States between 20 and 28 increased by an average of only $1,700 between 1983 and 2010 (Kalish, 2016). Even as meaningful work is automated away, we privileged folk appear to be working more than ever. Why?
According to Graeber (2018, p. 111), governments have crafted economic policy on the premise of full employment; he offers that in the Soviet Union, the joke was, “We pretend to work; they pretend to pay us.” In capitalist nations like the United Kingdom and the United States, Graeber documents the rise of the service economy, or more specifically, information work. Elsewhere, studies have shown that the share of information workers increased from 37% in 1950 to 59% in 2000 (Wolff, 2006). Wolff finds this growth driven by the substitution of information workers for goods rather than by a shift in demand for information-intensive goods and services. Between 1950 and 2000, this growth may correlate with investment in computing technology and computer operators in the FIRE sector (finance, insurance, and real estate). Nevertheless, as tech companies in Silicon Valley learned how to monetize their products with ad targeting, user data has become the “new oil,” leading to what some describe as the coding elite: those who can harness technology to exploit users through their data (Burrell & Fourcade, 2020; Van’t Spijker, 2014).

Image by Gerd Altmann from Pixabay
Future-proof
As mentioned earlier, Haraway saw the proliferation of the cottage industry as deepened integration between the factory, market, and home. Similarly, McCord & Becker (2019) do not mince words when they say that information and communication technology (ICT) has become a foundation of dominating cultures and economies. The declared beneficiaries of the Sidewalk Toronto project include current and prospective residents of Toronto from all income levels and walks of life; in reality, the goals of the project come from its most powerful stakeholders: Sidewalk Labs and Waterfront Toronto. These stakeholders seek to organize a “dense cluster of skilled labor” for employer access. The beneficiaries are subject to the imagination of these stakeholders.
In the case of a smart city, who owns and controls the technological infrastructure, who is responsible for data storage, and who gets to decide how it is used and by whom? According to McCord & Becker (2019), much of the community involved in smart city sustainability research has focused on technological solutions. Researchers and policymakers attempt to explain sustainability either through the lens of social or technological determinism. Social determinists suggest humans have agency over their impact and just need better tools to become more sustainable. On the other hand, technological determinists see sustainability as primarily driven by access to certain technologies or information.
McCord & Becker offer a framework for sustainability projects such as Sidewalk Toronto through Critical Systems Heuristics. Their goal is to provide a means of seeing beyond the narrow viewpoint of stakeholder needs, which tends to view human activity through the reductionist myth of Homo economicus (Fleming, 2017). Suppose this kind of thinking shapes design decisions for smart cities, with capitalism as the foundation upon which we leverage humanity’s purportedly greedy nature for the benefit of all. In that case, we might see such smart cities optimize their way into a tragedy of the commons (Ostrom, 2008), so long as doing so serves business interests.
If automation deskills labor, then why should a smart city prioritize employer access to skilled labor? Given the evidence presented here, one could argue that employers need skilled labor to support the machines through automation’s last mile. A smart city can optimize the cottage industry. This raises the question: who truly benefits from the design and development of smart cities?
Bottoms-up for sustainability and satisfaction
Eglash et al. (2020) take a different approach to automation and the future of work. While the authors agree that automation and mass production lead to deskilling labor, they add that automation typically optimizes the alienation of labor and ecological value. The authors note that mass production and the deskilling of labor produce jobs so tedious that they cause physical and mental health issues. Recall the measures Foxconn took at its factories, installing nets on the exterior of the buildings to prevent workers from committing suicide by jumping out of the windows (Reuters, 2010).
Graeber (2018) agrees, documenting what he refers to as the spiritual violence of working in a bullshit job. Decision-makers generally draw on an underlying economic calculus that assumes humans will always tend to seek their best advantage. In this framework, obtaining a steady income by sitting at a desk all day or standing in place performing repetitive tasks would seem like a great way to get the most benefit for the least expenditure of time and effort. In reality, as Eglash et al. (2020) point out, the features commonly linked with “good work,” such as self-esteem and interest, are associated with craftwork (Luckman, 2015). Ocejo (2017) explains that while many “good” jobs are typically associated with knowledge and technology, there is a trend among educated and culturally savvy young people to move into such craftwork as bartending, barbering, and butchering, among others. If this is true, why does this shift stand in contrast to our theories of human nature? Graeber argues that our theories of human nature are wrong (Graeber, 2018, p. 61).
Eglash et al. (2020) describe a strong correlation between job satisfaction and job decision authority, which they find diminished in mass production. Meanwhile, Gray & Suri (2019) observe a concept they refer to as the “double bottom line.” In business, the bottom line refers to net profits after the tabulation of all expenses and earnings. Some companies, particularly technology companies using gig work to bolster their software-as-a-service platforms, organize their businesses around prioritizing workers. In this case, the double bottom line refers to “making a profit while pushing for social change” (Gray & Suri, 2019, p. 141).
Even in the case of a double bottom line, Gray & Suri show how this goal is complicated by the technical, social, and political challenges involved in creating a sustainable business model that does not simply convert workers into another revenue stream. To develop a sustainable “future-proof” smart city, Waterfront Toronto uses the “triple bottom line.” This approach attempts to balance economic, environmental, and social issues in the “3Ps”: people, profits, and the planet (McCord & Becker, 2019, p. 4). The bottom line is about striking a balance, and striking a balance often comes with making tradeoffs between competing concerns. In the case of a bottom, double bottom, or triple bottom line, who gets to make those tradeoffs? Furthermore, which bottom line are they prioritizing?
Economic theorists such as Marx and Smith, factory owners like McCormick, manufacturers like Foxconn, politicians like Wagner, and organizations like Sidewalk Labs and Waterfront Toronto all have something in common: they take a top-down approach of imposing their vision on the masses. Eglash et al. (2020) stand in contrast to these approaches. Rather than suggesting yet another top-down framework to achieve a desired bottom line, they offer a path to the future of work that draws on generative traditions sustained in Indigenous practices that work from the bottom up. Instead of deskilling labor, they suggest we strive to find the “sweet spot between ease of use and skills development” (Eglash et al., 2020, p. 600). This requires using automation to invest in upskilling people rather than deskilling the work they perform, and relying on networks of people rather than monopolies funneling alienated labor and materials through pipelines and down assembly lines.
The bottom-up generative approach presented by Eglash et al. (2020) attempts to bridge the gap between automation as it is and automation as it ought to be. They point to research suggesting that when an artisanal value chain is composed of other artisans, rather than requiring continual purchases of supplies from a corporation or a comparatively wealthy entrepreneur, labor value can circulate unalienated. Additional examples describe how agroecology circulates ecological value unalienated, and the need for unalienated social value to prevent a tragedy of the commons. They suggest that all of this is not only possible but demonstrable as a common feature of Indigenous life. Automation for an artisanal economy is not about competition but rather collaboration.
Eglash, a student of Haraway, envisions human and machine artisanal hybrids, where people can assemble their repertoire of components and become a node in the artisanal economy. Importantly, this is not in the same vein as the utopian vision of Licklider. Eglash deals in reality and spends considerable time exploring issues of scale. It is not enough to present a utopian vision without working out the steps to get there. For Eglash, those steps begin with thorough collaboration and consideration of Indigenous groups and the knowledge they are willing to contribute.
The micro-, meso-, and macroscales refer to three different levels of production that we need to consider. The microscale focuses on the details of labor and other features at the site of production. The mesoscale refers to the point of interface at the organizational level. Finally, the macroscale is about the policies, infrastructure, and cultural dynamics that shape success metrics. As we have seen, even if one has the best intentions in accumulating more bottom lines to accommodate the microscale, such efforts can quickly be overshadowed at the macroscale.
Conclusion
In this essay, we have attempted to illuminate a bridge between what is and what ought to be through a critical analysis of several works documenting the history and potential futures of automation and technological innovation. We traced efforts to deskill labor from piecework in early mechanization through recent efforts to design a “future-proof” smart city. Employing Haraway’s cyborg metaphor, we asked who benefits and who is harmed by technological innovation. We found that elites benefit from such innovation by utilizing technology to optimize efficiency in extracting value from labor, society, and the environment as a whole. We then asked who ought to benefit from such innovation. Drawing on the work of Eglash et al., we argue for a bottom-up approach to the design and implementation of automation technologies that considers each of the three scales of production: 1) the microscale; 2) the mesoscale; 3) the macroscale. This framework emphasizes upskilling rather than deskilling and finds a reasonable middle ground between utopian and dystopian visions to present possibilities for the future of work and automation, grounded in reality.
REFERENCES
Benjamin, R. (2021, October 1). Keynote | The New Jim Code? Resisting and Reimagining Tech-Eugenics in the 21st Century. Dismantling Eugenics. https://events.bizzabo.com/aep/agenda/session/628612
Burrell, J., & Fourcade, M. (2020). The Society of Algorithms. Annual Review of Sociology, 47.
Clynes, M. E., & Kline, N. S. (1960). Cyborgs and space. Astronautics, 14(9), 26–27.
Dillahunt, T. R., Garvin, M., Held, M., & Hui, J. (2021). Implications for Supporting Marginalized Job Seekers: Lessons from Employment Centers. ACM Conference on Computer-Supported Cooperative Work and Social Computing.
Eglash, R., Robert, L., Bennett, A., Robinson, K. P., Lachney, M., & Babbitt, W. (2020). Automation for the artisanal economy: Enhancing the economic and environmental sustainability of crafting professions with human-machine collaboration. Ai & Society, 35(3), 595–609.
Fleming, P. (2017). The death of homo economicus. University of Chicago Press Economics Books.
Graeber, D. (2013). On the phenomenon of bullshit jobs: A work rant. Strike Magazine, 3, 1–5.
Graeber, D. (2018). Bullshit Jobs: A Theory. London: Allen Lane. Penguin Books.
Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.
Haraway, D. (1991). A Cyborg Manifesto. In Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge.
Kalish, E. (2016). Millennials Are the Least Wealthy, but Most Optimistic, Generation. Urban Institute, April.
Kline, N. S., & Clynes, M. (1961). Drugs, space, and cybernetics: Evolution to cyborgs. Psychophysiological Aspects of Space Flight, 345–371.
Luckman, S. (2015). Craft and the creative economy. Springer.
McCord, C., & Becker, C. (2019). Sidewalk and Toronto: Critical Systems Heuristics and the Smart City. ArXiv Preprint ArXiv:1906.02266.
Noble, D. F. (1978). Social choice in machine design: The case of automatically controlled machine tools, and a challenge for labor. Politics & Society, 8(3–4), 313–347.
Ocejo, R. E. (2017). Masters of Craft. Princeton University Press.
Ostrom, E. (2008). Tragedy of the commons. The New Palgrave Dictionary of Economics, 2.
Reuters. (2010, May 26). Foxconn hit by 10th jumping death; nets installed | Reuters [News]. Reuters. https://www.reuters.com/article/china-foxconn-death/foxconn-hit-by-10th-jumping-death-nets-installed-idUSTOE64P08H20100527
Roy, D. (2004). 10x: Human-machine symbiosis. BT Technology Journal, 22(4), 121–124.
Van’t Spijker, A. (2014). The new oil: Using innovative business models to turn data into profit. Technics Publications.
Wheeler, T. (2018, December 12). Who makes the rules in the new Gilded Age? Brookings. https://www.brookings.edu/research/who-makes-the-rules-in-the-new-gilded-age/
Winner, L. (1980). Do artifacts have politics? Daedalus, 121–136.
Wolff, E. N. (2006). The growth of information workers in the US economy, 1950–2000: The role of technological change, computerization, and structural change. Economic Systems Research, 18(3), 221–255.
APPENDIX A.
1957 Mechanix Illustrated — You’ll own slaves again — O.O. Binder (see Mara Averick tweet above)
Originally published at http://mtthwx.com on January 6, 2022.
-
Ethnographic Encounters of the HCI kind in Bioastronautics
Bioastronautics is a branch of aerospace engineering that specializes in the study and support of life in space. Bioastronautics researchers are interested in the biological, behavioral, medical, and material domains of organisms in spaceflight. Technological advances have led to a deepened interest in, and urgency around, the domain of space habitats. The goal of NASA’s Artemis Program is to establish a sustainable lunar colony in order to learn how to do the same on Mars. One of the primary objectives in the design and development of new technology to support life in space is the need for software that supports astronaut autonomy. This means that, for the first time, astronauts themselves have to be able to use these tools to carry out missions safely and effectively, without assistance from Ground Control.

Photo by Adam Miller on Unsplash

As humans seek to expand out into the solar system, the tools, technologies, and habitats needed to support life in space have to incorporate good HCI principles. How do bioastronautics researchers conceive of user needs, preferences, and comforts when designing interfaces and habitats for future spaceflight and habitation? Most bioastronautics researchers will never experience the environment they are designing for, and according to the 2013 evidence report titled “Risk of Inadequate HCI” issued by NASA, “HCI has rarely been studied in operational spaceflight, and detailed performance data that would support evaluation of HCI have not been collected” (Holden, Ezer, & Vos, 2013). The report goes on to note the additional concern that potential or real issues related to HCI in past missions have been masked by virtue of constant contact with Ground Control (Holden et al., 2013).
Because life as we know it cannot exist unaided in space, everything used to support humans in spaceflight and habitation is a concern of bioastronautics. Due to the relatively short distance and duration of missions to date, researchers and engineers in bioastronautics have primarily been concerned with the human factors of hardware and industrial design, ensuring these designs were considerate of human physiological capabilities. As technology advances and we push the boundaries of what is possible, a shift in focus toward issues of human-computer interaction is an increasing necessity. While the Space Shuttle era was typified by hard switches and buttons, astronauts using exploration vehicles will primarily interact with glass-based interfaces, software displays, and controls (Ezer, 2011).
According to Holden et al. (2013), inadequate HCI presents a risk that could lead to a wide range of consequences. While the amount of information that must be displayed is increasing, the real estate in which to display it remains limited. Furthermore, as mission distance and length increase, immediate access to ground support will continue to decrease, meaning there won’t be a team of experts on the ground prepared to answer questions, solve challenges, and provide workarounds on the fly. As a result, the design of computing and information systems needs to take this into account, providing support and just-in-time training for the autonomous astronaut when a mission isn’t going according to plan. In terms of HCI, this means that interfaces must account for environmental and contextual challenges, imposing low cognitive load and remaining usable with pressurized gloves, in microgravity, and under persistent vibration (Holden et al., 2013).
Background
The term bioastronautics first appears in the literature in a 1962 survey published by Cornell Aeronautical Laboratory, which defines the term as the study of life in space; the author notes that the discipline was so new there was hardly time to come up with a name (White, 1962). For context, bioastronautics was born during both the Cold War (1947–1991) and the Space Race (1955–1975) between the United States and the Soviet Union. The primary intent behind the discipline today is the same as it was then: to produce systems and technology capable of supporting and sustaining life in microgravity, and to understand the effects of microgravity on the human body. In this regard, much of the research has centered around medical concerns.
Definition
“Bioastronautics encompasses biological, behavioral and medical aspects governing humans and other living organisms in a space flight environment; and includes design of payloads, spacecraft habitats, and life support systems. In short, this focus area spans the study and support of life in space” (UC Boulder Aerospace Engineering Sciences, 2020).
Main Body
When space human factors researchers consider mission design and work practices, they are especially considerate of the roles of the various crew members, their physical and mental capabilities, and the requirements for life support, space, and training (Woolford & Bond, 1999). For twelve days in 2002, computer/cognitive scientist William Clancey led an ethnographic research study as a closed simulation in the Mars Desert Research Station for NASA-Ames Research Center and the Institute for Human and Machine Cognition. The study was a methodological experiment in participant observation and work practice analysis. It gathered qualitative data measuring productivity, comparing habitat design, schedules, roles, and so on, and sought to learn whether ethnography could be applied to a closed simulation. Serving as the crew commander, could one also conduct ethnography through participant observation? According to Clancey, one can (Clancey, 2004).

In addition to Clancey’s study, there are a number of other simulations for space habitat research, such as Stuster’s Bold Endeavors (1996) in a polar environment, the Lunar-Mars Life Support Test Project in a closed chamber, the NASA Extreme Environment Mission Operations Project (NEEMO) in an underwater habitat (2004), and BASALT (Biologic Analog Science Associated with Lava Terrains). Analog projects like these are designed to simulate on Earth certain environmental variables to test concepts of operations in regard to hardware, software, and data systems, as well as communication protocols. For these projects, the primary focus is the EVA, or extravehicular activity (Beaton et al., 2019). An EVA astronaut is the one who dons the spacesuit and exits the living quarters to explore, conduct research, or engage in repair tasks. When an astronaut exits the International Space Station to change a battery or make some other upgrade or repair, that’s an EVA.
With Olson (2010), we get a glimpse into the ecologies and human cosmologies of American astronautics. Through her ethnographic fieldwork, conducted primarily at NASA’s Johnson Space Center and submitted for her Ph.D. in Medical Anthropology, Olson argues that ecology and cosmology are co-constituting. Combining participant observation with archival data, Olson is able to evaluate how astronautics practitioners come to know and work with the “human environment”. This work served to highlight how astronautics is connected to a broader array of environmental science and technology (Olson, 2010). What does it mean to be sociopolitical, technoscientific, symbolic, and transcendental? With this, Olson asks what role astronautics has in making ecological knowledge, and how it can inform and scale concepts like adaptation and evolution.
In an article published the same year, Olson (2010) argues that in extreme environments such as outer space, “the concept of environment cannot be bracketed out from life processes; as a result, investments of power and knowledge shift from life itself to the sites of interface among living things, technologies, and environments” (Olson, 2010).
Gaps
While there have been a few attempts to conduct ethnography in mission and environmental simulations, none of these attempts focused on human-computer interaction. Similarly, while Olson’s ethnography focused on NASA researchers, the purpose of that work was to inform medical anthropology. Like Olson, I contend that with advancing technology, it becomes clearer how life, technology, and the environment are interrelated. As a result, human-computer interaction is a central facet of successful mission planning and execution for the autonomous astronaut. It is, therefore, crucial to understand how researchers interested in the bioastronautics of spaceflight and habitation conceive of human-computer interaction and user needs, preferences, and comforts.
Bibliography
Beaton, K., Chappell, S., Abercromby, A., Miller, M., Nawotniak, S. K., Brady, A., . . . Lim, D. (2019). Assessing the Acceptability of Science Operations Concepts and the Level of Mission Enhancement of Capabilities for Human Mars Exploration Extravehicular Activity. Astrobiology, 19(3), 321–346.
Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.
Ezer, N. (2011). Human interaction within the “Glass cockpit”: Human Engineering of Orion display formats. Proceedings from the 18th IAA Human in Space Symposium (#2324). Houston, TX.: International Academy of Astronautics.
Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1–46.
Olson, V. A. (2010). American Extreme: An Ethnography of Astronautical Visions and Ecologies. Ann Arbor, MI: UMI Dissertation Publishing.
Olson, V. A. (2010). The Ecobiopolitics of Space Biomedicine. Medical Anthropology, 170–193.
UC Boulder Aerospace Engineering Sciences. (2020, 04 13). Bioastronautics. Retrieved from University of Colorado Boulder: https://www.colorado.edu/bioastronautics/
White, W. J. (1962). A Survey of Bioastronautics. Buffalo, NY: Cornell Aeronautical Laboratory.
Woolford, B., & Bond, R. (1999). Human factors of crewed spaceflight. In W. Larson, & L. Pranke, Human Spaceflight: Mission Analysis and Design (pp. 133–153). New York: McGraw-Hill.
-
UX scorecards: Quantifying and communicating the user experience

Photo by Markus Spiske on Unsplash

User experience scorecards are a vital way to communicate usability metrics in a business sense. They allow teams to quantify the user experience and track changes over time.
Collecting consistent and standardized metrics allows organizations to better understand the current user experience of websites, software, and apps (Sauro, 2018).
My most recent round of usability testing was conducted on a prototype for a records management product that had never had user experience research performed on it. Our priority, then, was to establish some benchmarks. To do this I tested the prototype against three metrics: success rate, ease of use, and usability. I utilized industry-recognized scoring methods: success criteria scoring (SCS), the single ease question (SEQ), and the usability metric for user experience lite (UMUX-lite).
In the case of UMUX-lite, it is common to apply a regression model to transform the scores into the more widely known System Usability Scale (SUS) score.
Metrics
Success Rate
To quantify the success rate, I used success criteria scoring. We broke the test down into a series of steps and scored user performance on each of the steps. Participants could receive 1 of 3 scores. If they completed the step without any issue, they received a 1. If they didn’t need help, but they struggled, they received a 0. If they failed in the attempt or I had to step in to help them, they received a -1.
This test was broken into 31 individual steps. Multiplied by 8 participants, the success criteria scorecard has 248 scoring opportunities.
SCS Differential (Sum minus Count)

Graphic representation of individual SCS scores and aggregated differential.

To better understand where users struggled, we calculate the differential (the sum of scores minus the count of scores) on a given step.
From the SCS chart above we can see exactly where test participants struggled, and where they had no trouble at all. This chart shows individual results with the differential underneath. As you may note, the best result a participant could receive is a 1, while the best result from the differential is a 0.
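As a minimal sketch of that arithmetic (in JavaScript, with an illustrative function name and made-up score arrays rather than our actual spreadsheet):

```javascript
// SCS differential: sum of scores minus count of scores for one step.
// Assumes each participant's score is 1 (success), 0 (pass), or -1 (fail).
function scsDifferential(scores) {
  const sum = scores.reduce((total, score) => total + score, 0);
  return sum - scores.length;
}

scsDifferential([1, 1, 1, 1, 1, 1, 1, 1]); // 0  (best possible: everyone succeeded)
scsDifferential([1, 1, 1, 1, 0, 0, 0, 0]); // -4 (half the participants struggled)
```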
Broken Down by Task
Photo by Luis Villasmil on Unsplash

To calculate the success rate, we turn to Jakob Nielsen (2001), who gives partial successes half the credit of a full success. In our scoring: Success (S) = 1; Pass (P) = 0; Fail (F) = -1.
Filtering the data by task, our formula for calculating the success rate is:
(S + (P*0.5))/O, where S is the number of successes, P the number of passes, and O the number of scoring opportunities.
For task 1 the resulting formula looks like: =(25+(6*0.5))/32 = 88%
Because out of 32 scoring opportunities, 25 were successful and 6 were passing.
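Here is a hedged sketch of the same calculation in JavaScript; the helper name and example array are hypothetical (we did this in a spreadsheet), but the arithmetic matches the formula above:

```javascript
// Nielsen-style success rate with half credit for partial successes.
// Scores use the SCS convention: 1 = success, 0 = pass, -1 = fail.
function successRate(scores) {
  const S = scores.filter((score) => score === 1).length; // successes
  const P = scores.filter((score) => score === 0).length; // passes
  const O = scores.length; // scoring opportunities
  return (S + P * 0.5) / O;
}

// Task 1: 25 successes, 6 passes, and 1 fail across 32 scoring opportunities.
const task1 = [...Array(25).fill(1), ...Array(6).fill(0), -1];
successRate(task1); // 0.875, i.e. 88%
```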
Of course, participants had no issue with a substantial portion of our prototype. This was a constraint of our test: the prototype was intended to test the functions and features of a report writing system without actually allowing participants to fill out the report. Rather, we let them click a form field that would populate data in the relevant fields on that screen, then click the button necessary to proceed to the next screen.
The formula for success rate on task 2 is: =(155+(5*0.5))/160 = 98%
Our metrics do reveal an issue related to using the stepper for navigation. The scores participants received during these steps are less indicative of a specific defect and more a reflection of the fact that this is a new UI pattern that participants were unfamiliar with. As with any new UI pattern introduced in software and applications, the feature lacks predictability. Although the feedback from participants and the relative scores from the other metrics suggest that the feature is sufficiently easy and usable, we don’t want to express confidence in these findings yet.
As with any new feature or functionality, it is highly recommended that more extensive testing be performed to increase the sample size and give us the statistical confidence to stand behind our analysis.
The formula for success rate on task 4 is: = (23+(7*0.5))/32 = 83%
Participants found submitting the report to be the easiest of the tasks, and it was only one step. Yet on that single step, half of the participants struggled (scored 0) to find the Done button.
The formula for success rate on task 5 is: =(4+(4*0.5))/8 = 75%
Filtering all the steps for those in which participants had the least success (differential score of -4 to -5), we are left with five specific steps that outline opportunity areas to prioritize improvement for future iterations before release.

The least successful steps according to SCS.

The formula to calculate the overall success rate is: =(223+(22*0.5))/248 = 94%
Ease of Use
To quantify ease of use, we opted for the single ease question (SEQ). After 3 of the 5 tasks (Begin incident report, Complete report, Submit report), we asked users, on a scale of 0–6 with 0 being very difficult and 6 being very easy, how difficult or easy the task was to complete. Since we have no personal benchmark from previous usability tests with which to compare our scores, we reference the historical average of 5.5 (Sauro, 2012).

Graphical representation of individual SEQ scores with a combined average.

As we can see from the chart above, our first task scored the worst in terms of ease of use, with an average of 3.33. Although participants struggled just as much with completing and submitting the report, they did not view these aspects of the system as being as difficult. Completing a report received an average SEQ score of 5, and submitting the report received the historical average of 5.5.
Usability
You can’t adequately conduct a usability test unless you are testing for usability. There are a variety of industry-recognized usability scoring methods to select from, but the standard is still the System Usability Scale. This is a 10-question survey given after a test and the responses are then aggregated into a SUS score. The average SUS score from years of historical data is 68 (Sauro, 2013).
However, a 10-question survey is a lot to ask of participants at the end of a usability test. Instead, researchers developed the Usability Metric for User Experience (UMUX), a 5-question survey designed as a more efficient means of generating a similar result. Researchers at IBM went even further, studying the efficacy of the 5-question survey (Lewis, Utesch, & Maher, 2013). What they determined is that they could garner a similar score by simply asking participants to rate their level of agreement with 2 positively framed UMUX statements:
This system’s capabilities meet my requirements.
This system is easy to use.
UMUX-lite 7pt. scale linear regression to SUS
If you ask participants to rate their level of agreement with these two statements on a 7pt. scale, from 1 (completely disagree) to 7 (completely agree), you can then use a regression formula to transform these scores into a SUS score.
You can find these formulas in the Lewis et al. paper, but I first came across them on Quora, from Otto Ruettinger, Head of Product, Jira Projects at Atlassian (Ruettinger, 2018). In the post, he provided the formulas he uses in Excel to transform raw UMUX-lite scores to serviceable SUS scores.
In its raw format the calculation would be:
UMUX-L = ((a/7) + (b/7))/2 x 100

which gives a range of 14 to 100.
And the SUS regression transform calculation would be:
SUS Score = 0.65 * ((a + b − 2) * (100/12)) + 22.9
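Both formulas are simple to script. A minimal JavaScript sketch, assuming the two items a and b are rated 1–7 as above (the function names are mine, not from the paper):

```javascript
// Raw UMUX-lite score: average the two items as fractions of 7, then scale to 100.
function umuxLite(a, b) {
  return ((a / 7 + b / 7) / 2) * 100; // ranges from ~14 (1,1) to 100 (7,7)
}

// Lewis et al. (2013) regression transform to an approximate SUS score.
function umuxLiteToSus(a, b) {
  return 0.65 * ((a + b - 2) * (100 / 12)) + 22.9;
}

umuxLite(6, 7); // ~92.9
umuxLiteToSus(6, 7); // ~82.5, comfortably above the average SUS score of 68
```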
Converting 5pt. to 7pt. scale for linear regression to SUS
When I showed my conversions to the other user researcher on my team, she noticed that I was using UMUX-lite on a 5pt. scale, and that my formula would have to be altered from above.
Instead of:
UMUX-L = ((a/7) + (b/7))/2 x 100
it needed to be:
UMUX-L = ((a/5) + (b/5))/2 x 100
As a result, I wasn’t confident in using the SUS regression to generate a SUS score.
Then I found an article on transforming different Likert scales to a common scale (IBM Support, 2020), which covers converting a 5pt. to a 7pt. scale and vice versa.
What we end up with is: 0=1; 1=2.5; 2=4; 3=5.5; 4=7.

Likert scale transform, 5pt. to 7pt.

With my scale transformed, I was able to implement the SUS regression formula and obtain the SUS score.
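As a sketch in JavaScript, assuming the 5pt. items were rated 0–4 as described earlier; the linear mapping below reproduces the table above before feeding the regression:

```javascript
// Linear transform from a 0-4 response onto the 1-7 range: y = 1 + 1.5x.
function fivePtToSevenPt(rating) {
  return 1 + 1.5 * rating; // 0->1, 1->2.5, 2->4, 3->5.5, 4->7
}

const a = fivePtToSevenPt(3); // 5.5
const b = fivePtToSevenPt(4); // 7
const sus = 0.65 * ((a + b - 2) * (100 / 12)) + 22.9; // ~79.8, an approximate SUS score
```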
Putting it all together
This is the wonky stuff that nobody but other user researchers likely cares about. What your product team, dev team, and executives want to see is an “insights forward” summary. You can put this all together in a UX scorecard so that stakeholders get a quick, high-level overview of your analysis across your given metrics. These scorecards can help you settle debates and get the whole team on board by clearly identifying priorities for your next sprint.

Example UX scorecard with grading scales for each metric.

Works Cited
IBM Support. (2020, 4 16). Transforming different Likert scales to a common scale. Retrieved from IBM Support: https://www.ibm.com/support/pages/transforming-different-likert-scales-common-scale
Sauro, J. (2012, 10 30). 10 Things to Know about the Single Ease Question (SEQ). Retrieved from MeasuringU: https://measuringu.com/seq10/
Sauro, J. (2018). Building a UX Metrics Scorecard. Retrieved from MeasuringU: https://measuringu.com/ux-scorecard/
Lewis, J. R., Utesch, B. S., & Maher, D. E. (2013). UMUX-LITE — When there’s no time for the SUS. CHI 2013: Changing Perspectives, Paris, France, 2099–2102.
Nielsen, J. (2001, 2 17). Success Rate: The Simplest Usability Metric. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/success-rate-the-simplest-usability-metric/
Ruettinger, O. (2018, 6 5). How is UMUX-L calculated in your company? Retrieved from Quora: https://www.quora.com/How-is-UMUX-L-calculated-in-your-company
Sauro, J. (2013, 6 18). 10 Things to Know About the System Usability Scale (SUS). Retrieved from MeasuringU: https://measuringu.com/10-things-sus/
-
Use heuristic evaluations prior to usability testing to improve ROI
Catch low-hanging fruit with heuristics so that users can reveal deeper insights in usability tests

Photo by Scott Graham on Unsplash
User experience research tends to break down into two broad categories: field studies and usability testing. Or, we might refer to these as needs assessment and usability evaluation. Either way, heuristic evaluations fall under the umbrella of usability methods. The method was introduced by Nielsen and Molich (1990) and popularized as a means of discount usability evaluation, aimed at software startups that didn’t have the budget for real user research. Today, user research is more common, and usability testing is the gold standard. If you want to maximize your return on investment (ROI) for usability testing, you’ll want to perform a heuristic evaluation first. This article will explain what a heuristic evaluation is, how to do one, the pros and cons of the method, and why you should do it ahead of usability testing to maximize the return on investment for both. In Nielsen’s own words:

Jakob Nielsen

Defining ‘heuristic’
With that, let us simply define a heuristic as a usability principle or “rule of thumb”. Although when we refer to heuristics in terms of UX (rather than AI) we are talking about usability, a designer could theoretically employ the same process to judge a product’s compliance with the design system.
As an example, let us say you have an app that was designed without a system in place. Now your company is using a system based on Material Design. You go to the Material website and create a list of their guidelines with which to judge your UI’s compliance. Those guidelines can serve as your “heuristics”, at least in terms of the design.
Remember, the heuristics we are talking about in this article are for usability engineering.
Nielsen developed his heuristics in the early ’90s, distilling a list of nearly 300 known usability issues down to 10 overarching principles. And although they are still widely used today, many user researchers are beginning to develop their own heuristics that are more focused on modern technology and the issues related to it. We didn’t have the powerful mobile and smart technology back then that we take for granted today. The computing technology we did have wasn’t widespread and generalized enough for software companies to care about accessibility issues.
Nowadays, we have a variety of heuristic sets to choose from. For information on some of the more popular sets, refer to Norbi Gaal’s article, “Heuristic Analysis in the design process”.
In addition to the sets referenced by Norbi, there are other specialized sets worth noting, such as the heuristics for mobile computing developed by Bertini et al. (2009).
Developing heuristics
While developing your own heuristics may be encouraged, care must be taken when selecting appropriate principles. This is where prior user research can inform what heuristics are selected. What are their needs, preferences, pain points that you are trying to support and provide solutions to? Furthermore, and perhaps most importantly, you will want to pilot your heuristics in the same fashion as you would pilot your interviews, surveys, and usability tests.
Quiñones et al. (2018) describe a methodology for developing heuristics. This is an eight-step process through which researchers:
- Explore: Perform a literature review.
- Experiment: Analyze data from different experiments to collect additional information.
- Describe: Select and prioritize the most important topics revealed from 1–2.
- Correlate: Match the features of the specific domain with the usability/UX attributes and existing heuristics.
- Select: Keep, adapt, create, and eliminate heuristics obtained from 1–4.
- Specify: Formally specify the new set of heuristics.
- Validate: Validate the heuristics through experimentation in terms of effectiveness and efficiency in evaluating the specific application.
- Refine: Refine and improve the new heuristics based on feedback from 7.
As you can imagine, this process isn’t a quick and dirty means of getting feedback, rather it’s an entire project in itself.
The Evaluation Process
A heuristic evaluation is what is referred to as an expert review. As with other expert reviews, a heuristic evaluation is intended to be a quick and dirty method to uncover issues more cheaply than usability testing in terms of both time and money. If you’re not going through the process of developing a new set of heuristics as outlined above, the entire HE process should only take about a week, with the actual evaluation taking no more than a day or two. Instead of recruiting users to put your design in front of, you recruit 3–5 evaluators who review your design against the chosen heuristics.

The heuristic evaluation process

- Familiarize — If you have multiple evaluators (as you should!), then you are going to want them to devote some time to familiarizing themselves with the heuristics you plan to use to conduct the evaluation. This is particularly crucial if you are also expecting them to validate a new set of heuristics.
- Evaluate — There are a few parts to this stage.
1. First, and let’s be clear: Your evaluators do not have intimate knowledge of your product. You should not be recruiting people who make design/implementation decisions on this product.
2. The evaluators got familiar with the heuristics; now let them familiarize themselves with the product. They should spend an hour or two navigating, clicking/tapping buttons, and learning the basic patterns and flows the user experiences.
3. Heuristic evaluations are typically conducted in two passes. Each pass should be anywhere from 1–3 hours. In the first pass, evaluators holistically interact with the product and note any heuristic violations. In the second pass, evaluators do it all over again. They also retrace their steps and consider if any violations from the first pass are false alarms.
- Rate Severity — This step doesn’t have to be done on its own. Often evaluators will rate the severity at the same time they are noting the violation. They may go back on the second pass and change the severity ratings of previously noted violations. A standard rating scale comes from Jakob Nielsen, and looks like:
0: I don’t agree that this is a usability problem at all
1: Cosmetic problem — quick fix or ignore unless there’s time
2: Minor usability problem — low priority
3: Major usability problem — high priority
4: Usability catastrophe — must be fixed before release
- Synthesize and Prioritize Findings — At this stage, the evaluation is complete, and the analysis can begin. The evaluators come together and discuss their findings. Evaluators will create an aggregate list of all noted violations, discuss and identify potential false alarms, and agree upon severity scoring (see the sketch after this list). If they are validating new heuristics, this is also the point at which they will do so.
- Converge on Design Recommendations — Based on a review of the prioritized findings, the evaluators will then brainstorm and converge on recommendations to solve the usability issues uncovered in the heuristic evaluation.
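To make the synthesis step concrete, here is a hypothetical JavaScript sketch of aggregating violations across evaluators; the data shape and function name are illustrative, not part of the method itself:

```javascript
// Merge duplicate violations noted by different evaluators, average their
// severity ratings, and rank the aggregate list from most to least severe.
const findings = [
  { heuristic: "Visibility of system status", location: "Report stepper", severity: 4 },
  { heuristic: "Visibility of system status", location: "Report stepper", severity: 3 },
  { heuristic: "Consistency and standards", location: "Settings menu", severity: 1 },
];

function synthesize(findings) {
  const merged = {};
  for (const f of findings) {
    const key = f.heuristic + " @ " + f.location;
    if (!merged[key]) merged[key] = { issue: key, ratings: [] };
    merged[key].ratings.push(f.severity);
  }
  return Object.values(merged)
    .map((m) => ({
      issue: m.issue,
      // Averaging is one option; teams may instead discuss and agree on a rating.
      severity: m.ratings.reduce((sum, r) => sum + r, 0) / m.ratings.length,
    }))
    .sort((a, b) => b.severity - a.severity);
}

synthesize(findings);
// [ { issue: "Visibility of system status @ Report stepper", severity: 3.5 },
//   { issue: "Consistency and standards @ Settings menu", severity: 1 } ]
```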
Why 3–5 evaluators
Depending on your particular circumstances and the experience of the evaluators at your disposal, it may be possible to produce significant findings with a single evaluator. However, there are a few reasons for having multiple evaluators. Nielsen found through his own research on the method that a single evaluator will only uncover about 35% of the issues present in a system (Nielsen, 1994). Furthermore, different evaluators tend to find different problems. From the curve shown below, Nielsen demonstrates that the optimal number of evaluators is 3–5. While you may uncover some additional issues by adding more than 5 evaluators, depending on how critical and complex the system under evaluation is, additional evaluators become increasingly likely to find issues that overlap with those already found. In other words, there are diminishing returns, as the cost-benefit analysis below shows.
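The shape of that curve follows from a simple independence assumption consistent with Nielsen's numbers: if one evaluator finds about 35% of the problems, then n evaluators find roughly 1 - (1 - 0.35)^n of them. A quick JavaScript check (the 35% figure is Nielsen's; the sketch is mine):

```javascript
// Expected proportion of usability problems found by n independent evaluators,
// assuming each evaluator finds a fixed proportion (Nielsen's ~35%) of them.
const proportionFound = (n, perEvaluator = 0.35) =>
  1 - Math.pow(1 - perEvaluator, n);

[1, 3, 5, 10].map((n) => proportionFound(n).toFixed(2));
// ["0.35", "0.73", "0.88", "0.99"] -- most of the gain comes from the first 3-5
```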

Source: Nielsen (1994). Curve showing the proportion of usability problems in an interface found by heuristic evaluation using various numbers of evaluators. The curve represents the average of six case studies of heuristic evaluation.

Source: Nielsen (1994). Curve showing how many times the benefits are greater than the costs for heuristic evaluation of a sample project using the assumptions discussed in the text. The optimal number of evaluators in this example is four, with benefits that are 62 times greater than the costs.

Pros and cons
As with any method, there are of course advantages and disadvantages. This list is derived from the literature found over at the Interaction Design Foundation (IDF): What is Heuristic Evaluation?
Pros:
- Evaluators can focus on specific issues.
- Evaluators can pinpoint issues early on and determine the impact on overall UX.
- You can get feedback without the ethical and practical dimensions and subsequent costs associated with usability testing.
- You can combine it with usability testing.
- With the appropriate heuristics, evaluators can flag specific issues and help determine optimal solutions.
Cons:
- Depending on the evaluator, false alarms (noted issues that aren’t really problems) can diminish the value of the evaluation (Use multiple evaluators!).
- Standard heuristics may not be appropriate for your system/product — validating new heuristics can be expensive.
- It can be difficult/expensive to find evaluators who are experts in usability and your system’s domain.
- The need for multiple evaluators may make it easier and cheaper to stick with usability testing.
- It’s ultimately a subjective exercise: findings can be biased to the evaluator and lack proof, recommendations may not be actionable.
Note the pro: “You can combine it with usability testing”. When you’re conducting a usability test, your prototype is your hypothesis. If you implement a heuristic evaluation correctly, you can catch and fix low-hanging fruit in terms of usability issues, thereby refining your hypothesis before you take it to users. Fixing these before testing allows your participants to identify usability issues from the first-person perspective of the persona, rather than recruiting users to find the kinds of issues that you should have caught yourself.
But let’s not forget to take note of the cons. False alarms noted by an evaluator can be problematic and diminish the overall results of the evaluation. This is yet another reason why multiple evaluators are crucial to making your heuristic evaluation worthwhile. False alarms can often be identified and disregarded when evaluators come together to synthesize and prioritize findings.
Conclusion
Heuristic evaluations are a mainstay of usability engineering and user experience research. Though considered a ‘discount’ method, there are a lot of upfront considerations involved in making the most of them. Using heuristic evaluations as a precursor to usability testing can help improve the return on investment for both, as every issue uncovered and solved with heuristics frees your users to note other issues from their perspective. In sum, you are not your user, and neither are your evaluators. Using heuristic evaluations in conjunction with usability testing will iron out a lot of the kinks before you show the design to users. With these issues already solved, feedback from usability testing can generate deeper insights to really dial in the design, improving the ROI from both the heuristic evaluation and the usability test.
Sources
Bertini, E., Catarci, T., Dix, A., Gabrielli, S., Kimani, S., & Santucci, G. (2009). Appropriating Heuristic Evaluation for Mobile Computing. International Journal of Mobile Human Computer Interaction, 20–41.
Gaal, N. (2017, 06 19). Heuristic Analysis in the design process. Retrieved from UX Collective: https://uxdesign.cc/heuristic-analysis-in-the-design-process-usability-inspection-methods-d200768eb38d
Nielsen, J. (1994, 1 1). Guerrilla HCI: Using Discount Usability Engineering to Penetrate the Intimidation Barrier. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/guerrilla-hci/
Nielsen, J. (1994, 11 1). How to Conduct a Heuristic Evaluation. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
Quiñones, D., Rusu, C., & Rusu, V. (2018). A methodology to develop usability/user experience heuristics. Computer Standards & Interfaces, 109–129.
Soegaard, M. (2020, 07 19). What is Heuristic Evaluation? Retrieved from Interaction Design Foundation: https://www.interaction-design.org/literature/topics/heuristic-evaluation
-
Creating a Lunar Analog Environment in A-Frame
As the resident UX researcher and human-in-the-loop (HITL) testing co-coordinator for CLAWS, it’s my responsibility to plan, facilitate, and analyze usability tests with real people to get feedback on our AR Toolkit for Lunar Astronauts and Scientists (ATLAS). Earlier this year, while CLAWS was participating in the NASA SUITS Challenge, the pandemic forced our school to close campus, including our lab. My test plan was scrapped, and although I scrambled to put together a fully interactive prototype that participants could click through on their computers, I wasn’t quite able to complete it in time.
In the coming school year, CLAWS has opted to conduct all collaboration and research activities virtually, including HITL usability testing. Knowing this in advance, I’ve begun thinking about how to get the most out of remote testing. First, unlike last year, I am pushing for a more agile and iterative design cycle.
Instead of spending months evaluating our own work before showing it to test participants, I am seeking to test once a month, beginning with a simple paper prototype that we can test remotely with Marvel App. Based on our findings from these tests, we can improve our design. With Marvel, you simply draw your screens out by hand, take photos of them, and then you can link them together with interactive hotspots for test participants to click through.
Initially, I had proposed Adobe XD as a means of putting together an interactive prototype for remote testing and demonstration purposes. With XD, designers have the capability of creating complex prototypes that complement the modularity ATLAS requires. You can create components, and instead of having to create multiple screens to represent every interaction, you can create every interactive state of that component within the component itself! On top of this, XD allows designers to connect sound files to interactions. Sound files like this one:
PremiumBeat_0013_cursor_click_06.wav

…which could be used to provide audio feedback letting the user know the system has accepted their command.
Depending on how complex we want to get with our prototype, we could even test the implementation of our Voiced Entity for Guiding Astronauts (VEGA), the Jarvis-like AI assistant.
This will be a great way to test ease of use and overall experience before committing the design to code. However, I’ve also begun thinking about the best way to demonstrate our final deliverable to wider audiences. Even if we have a vaccine, it’s likely that a lot of conferences will still be held virtually. Furthermore, this is a big project, with a lot of students working on it, and we should have a final deliverable that showcases our work in an easily accessible format in order to feature it in our portfolio.
One of the possibilities I’m exploring is wiarframe. This is an app that allows you to set up your AR interface using simple images of your interface components.

The wiarframe design canvas

Designers can also prototype a variety of look (gaze, stare) and proximity (approach, reach, embrace, retreat) gesture interactions where a component can change state, manipulate other components, and even open a URL, call an API, or open another wiarframe interface. This ability to open another wiarframe could enable my team to prototype and link together the individual modules for the user to navigate between.
Wiarframe is really useful when it comes to AR on mobile devices, but less so when the AR is coming from a head-mounted display (HMD), because to open a wiarframe prototype, users must download the mobile app and then anchor the interface to a surface.
This is really fun, but there is no sense of immersion. Back at our lab, the BLiSS team created a near life-sized mockup of an ISS airlock with which to immerse test participants in a kind of analog environment. This is common for testing designs for human-computer interaction in space; it is still too costly to test designs on actual users in the context of spaceflight (Holden, Ezer, & Vos, 2013).
In order to get the best feedback out of remote usability testing, we’re going to need an immersive environment that is cheap, relatively easy to put together, and widely accessible, so that we don’t constrain our recruiting pool to the point that we can’t find participants with the appropriate equipment to test with.
I believe these requirements can be met, and our problems solved, with A-Frame. A-Frame allows creators to make WebVR with HTML and JavaScript that anybody with a web browser can experience. What’s more, users can fully immerse themselves in the VR environment with a headset like the Vive, Rift, Daydream, or GearVR.
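To give a sense of how little code a scene takes, here is a minimal sketch in A-Frame's own HTML markup. The element names are standard A-Frame primitives; the version number, colors, and dimensions are placeholder choices of mine, not the lab's actual code:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- Version is a placeholder; any recent A-Frame release should work -->
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Flat gray plane standing in for the regolith -->
      <a-plane rotation="-90 0 0" width="100" height="100" color="#9a9a9a"></a-plane>
      <!-- A black sky sells the effect of being in space -->
      <a-sky color="#000000"></a-sky>
      <!-- A sample rock a participant might be asked to locate -->
      <a-sphere position="2 0.25 -4" radius="0.25" color="#5c5c5c"></a-sphere>
      <!-- Mouse-look and WASD controls work in any browser; a headset adds immersion -->
      <a-entity camera look-controls wasd-controls position="0 1.6 0"></a-entity>
    </a-scene>
  </body>
</html>
```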
On top of this, as I was exploring what A-Frame could do through the Showcase examples, I came across a WebVR experiment by NASA, Access Mars. Built with A-Frame, it gives users the opportunity to explore the real surface of Mars, reconstructed as a mesh from images recorded by NASA’s Curiosity rover. Users can move around to different areas and learn about Mars by interacting with elements.

An image from Access Mars instructing users on how to interact with it.

New to A-Frame, I wasn’t really sure where to begin. Luckily Kevin Ngo of Supermedium, who maintains A-Frame, has a lot of his components available on GitHub. With limited experience, I was able to find a suitable starting environment, and with a few minor changes to the code, I developed an initial lunar environment.

Screenshot of the A-Frame lunar analog environment

If you’d like to look around, follow this link:
https://mtthwgrvn-aframe-lunar-analog.glitch.me/
I’ll be honest: there’s not much to see. Still, I’m excited about how easy it was to put this together. Similar to Access Mars, I’d like to develop this environment a little more so that users can do some basic movement from location to location. If we use this to test the Rock Identification for Geological Evaluation w.LIDAR(?) (RIGEL) interface, some additional environmental variables would have to be implemented to better simulate geological sampling. There are physics components that can be incorporated to support motion controllers, which would allow a user with one of the VR headsets mentioned above to manipulate objects with their hands. The downside is that this would limit who we could recruit as testing participants.
If nothing else, I want to be able to test with users through their own web browsers. Ideally, they’ll share their screen so I can see what they’re looking at, and their webcam so I can see their expressions while they’re looking at it. While it’s not the same as actually being on the surface of the Moon, creating analog environments for simulating habitat design is relatively common at NASA (Stuster, 1996; Clancey, 2004; see also: NEEMO and BASALT). A WebVR environment as a lunar analog in which to test AR concepts follows this approach.
For usability scoring, we are using the standard NASA TLX subjective workload assessment as a Qualtrics survey to get feedback ratings on six subscales (a simple scoring sketch follows the list):
- Mental demand
- Physical demand
- Temporal demand
- Performance
- Effort
- Frustration
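One common simplification, often called Raw TLX, scores the survey by averaging the six subscale ratings without the original pairwise weighting. A minimal JavaScript sketch, assuming each subscale is rated 0–100 (the function name and sample ratings are hypothetical):

```javascript
// Raw TLX: the unweighted mean of the six subscale ratings (each 0-100).
function rawTlx(ratings) {
  const values = Object.values(ratings);
  return values.reduce((sum, value) => sum + value, 0) / values.length;
}

rawTlx({
  mental: 70,
  physical: 10,
  temporal: 40,
  performance: 30,
  effort: 55,
  frustration: 45,
}); // ~41.7: moderate overall workload for this hypothetical participant
```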
But testing aside, I also think WebVR is the best way to showcase our project as a readily accessible and interactive portfolio piece that interviewers could play with simply by clicking a link as we describe our roles on the project. On top of this, with outreach being a core component of the work we do in CLAWS, a WebVR experience is ideal for younger students to experience ATLAS from the comfort and safety of their own homes.
References
Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.
Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1–46.
-
The UX of Bioastronautics
Bioastronautics is a focus area of aerospace engineering that specializes in the study and support of life in space. This area of research spans the biological, behavioral, medical and material domains of living organisms in spaceflight. Increasingly, it’s also being applied to space habitat environments. And while the body of research spans decades, there is little information available regarding the user experience. I’d like to change that.

Space Exploration Initiative — Wikipedia

Up until recently, the emphasis has been on pushing the bounds of what’s technologically possible and making it work. And to a large extent, this will continue to be true. However, we are on the precipice of a new frontier in which bioastronautics is open to the input of user experience research and design: to optimize the design for the users rather than train the users on how to use the design.
Below I’ve outlined several gaps in HCI research related to bioastronautics that NASA has identified as presenting a risk to astronauts.
From NASA’s 2013 Evidence Report: Risk of Inadequate HCI, research gaps include:
- Methods for improving human-centered design activities and processes
- Tools to improve HCI, information presentation/acquisition/processing, and decision making for a highly autonomous environment
- Tools, methods, and metrics which support the allocation of attention and multitasking for individuals and teams
- Validation methods for human performance models
Evidence collected in this report details contributing factors that are pertinent for investigation by HCI researchers. These include:
- Requirements, policies, and design processes
- Informational resources/support
- Allocation of attention
- Cognitive overload
- Environmentally induced perceptual changes
- Misperception/misinterpretation of the displayed information
- Spatial disorientation
- Design of displays and controls
I’m a graduate student studying Information Science at the University of Michigan and the Usability Testing Coordinator for CLAWS (Collaborative Lab for Advancing Work in Space). My role is as a UX/UI specialist involved in the research and design of ATLAS (AR Toolkit for Lunar Astronauts and Scientists) to compete in the NASA design challenges SUITS and M2M X-Hab.
Bioastronautics research is still primarily engaged with human factors research dedicated to hardware and industrial design. The application of HCI is lacking, which is why the CLAWS team began actively recruiting from UMSI. The bulk of the team is composed of aerospace, mechanical and industrial engineering, as well as computer science majors.
To implement the human-centered design strategy, I would start by conducting an ethnographic study through participant observation and contextual inquiry with my team to better understand the culture of bioastronautics. Placing more emphasis on HITL as simulated usability testing, I’ll be seeking to validate our methods both in the BLiSS lab and remotely. Due to the COVID-19 pandemic and self-isolation, we’ve had to scrap my HITL plan and I’m currently in the process of adapting a prototype in XD for remote usability and heuristic testing. Below is a cursory view of the design.
https://xd.adobe.com/view/482cc044-b8d9-4893-40e6-4b75514adf7f-3e1d/
Interestingly, our self-isolation presents an opportunity to better understand the sort of issues astronauts will face in space. After all, astronauts on the Moon cannot conduct in-person meetings with ground control. This is specifically one of the target opportunities for HCI concerning the bioastronautics of space travel and exploration. Astronauts on future EVA missions will not be in constant contact with ground control as they have been up to now. Information systems, therefore, need to be designed to maximize autonomy and optimize information processing while simultaneously reducing cognitive load.
A pertinent example is the GeoNotes protocol we are currently working on. The Artemis generation astronauts are not geologists, save one. But they still need to be able to conduct high-quality lunar sampling and take sufficient field notes for planetary scientists back on Earth, so our task has been to design a geological sampling protocol that supports the needs of the Earth-based scientists as well as the autonomous astronaut.
Astronauts are cyborgs. They are the people for whom the term was coined. “For the exogenously extended organization complex functioning as an integrated homeostatic system unconsciously, we propose the term ‘Cyborg’.” — Manfred E. Clynes and Nathan S. Kline
I come from a background in Anthropology; four-field Anthropology. This is the common format of American Anthropology, and it proposes holism: an equal understanding of people and groups of people by researching humans through biological, cultural, linguistic, and archaeological (or material) contexts. What initially drew me to the field of Information is, first and foremost, the interdisciplinary approach. Drawing on my background in Anthropology, I have a penchant for synthesis. Then I came across a TED Talk by Amber Case, “We are all cyborgs now.”
Amber’s argument is that because we are offloading whole swathes of our brains, creating alternate identities, and communicating with each other through digital technologies, we are all cyborgs now. I also hold this view.
Everything humans do regarding actually leaving Earth’s atmosphere and spending increasing lengths of time in space or on extraterrestrial bodies is in the realm of bioastronautics. All of that technology, from spacesuits to the shuttle, is concerned with supporting life in space. The body of research into the topic thus far has primarily centered around hardware and industrial or mechanical design and engineering. Increasingly, an emphasis on HCI needs to be made to close research gaps identified by NASA and provide adequate UX to end-users as humans seek to spread out and begin colonizing our solar system.
-
Case Study: Contextual Inquiry in the Grandmont Rosedale’s Vacant Property Task Force

GRDC landing page

“Interwalla” is made up of four UX professionals from the University of Michigan’s School of Information MSI program: Joanne Kim, Tianyue Yang (Maggie), Marcus Thomas, and Matthew Garvin (me).
Executive Summary
The Grandmont Rosedale Development Corporation (GRDC) serves to preserve and revitalize the Grandmont Rosedale communities of northwest Detroit through a wide range of community engagement programs. One of these programs is the volunteer Vacant Property Task Force (VPTF). The VPTF works with community members and external organizations to make sure that vacant properties in the GRDC’s neighborhoods are being maintained. However, the process by which the VPTF members document and report their work is unstructured and undocumented, lacking formal procedures. Interwalla’s objective was to examine the ways in which the VPTF currently research and report vacant property and make recommendations for better documentation procedures that the GRDC can adopt for the VPTF. This report details our research methodology, findings, and recommendations regarding these procedures.
Interwalla conducted background research and used the contextual inquiry method to uncover key information about the VPTF’s work process, including information about resources used and a sense of collaboration within the task force. We gathered data through interviews with six VPTF members as well as the GRDC’s community engagement manager, then analyzed our data to produce high-level findings. Some of these findings include:
- The VPTF members complete much of their work individually and thus use a variety of resources, methods, and tools to complete their work.
- The GRDC and the VPTF pride themselves on their strong sense of community and value the community influence and impact they have achieved.
- The members see the VPTF as a group that may eventually disappear as vacant properties become fewer; in the meantime, however, the task force seeks new members to become aware of and involved in its work.
With these findings, we make the following recommendations:
- Elevate the VPTF’s digital presence through a webpage on the GRDC website
- Collaborate on an updated Vacant Property Toolbox handbook
- Improve fundraising efforts by using online crowdfunding
- Improve collaboration within the VPTF through co-design strategies
Introduction
Grandmont Rosedale Development Corporation
The Grandmont Rosedale Development Corporation (GRDC) is a non-profit, community-based organization working to preserve and improve the Grandmont Rosedale Neighborhoods of northwest Detroit. For the past 30 years, the GRDC has taken a comprehensive approach to community revitalization, with programs designed to renovate vacant homes, assist local homeowners and businesses, beautify the community and keep their neighborhoods safe and vibrant.
The Vacant Property Task Force
The Vacant Property Task Force (VPTF) is one such program that works with community members, meeting regularly to strategize ways to combat property vacancy and blight. The VPTF is composed of volunteer residents from the five GRDC communities. Members of the VPTF monitor vacant homes in Grandmont Rosedale to ensure that every property is being maintained. Much of their work involves tracking down property owners, reporting vacant homes to the city, and assisting homeowners who are facing tax and blight issues. Members also make sure that vacant homes are being physically maintained by performing tasks such as cleaning the yard and cutting the grass.
Project Goal
While the VPTF works with community members to make sure that vacant properties are being maintained, the process by which they complete this work is unstructured, lacking formal practice and procedures. Information is maintained mostly through word of mouth. Some of the steps require the submission of information through city websites and apps. And while some of the members are tech-savvy, others struggle with these technologies. To this end, Interwalla conducted research and analysis through contextual inquiry to analyze the current process, suggest improvements, and make recommendations for optimizing documenting procedures that the GRDC can share with task force members and the general public.
Background
The Vacant Property Task Force, or VPTF, is one of the nation’s most effective neighborhood volunteer organizations, working to preserve and improve the Grandmont Rosedale community of neighborhoods in northwest Detroit. More impressive still is the fact that the VPTF was founded in response to the housing market crash of ’07, and the founding members had no experience to guide them through those turbulent times. They just rolled up their sleeves and got to work. Over the years, there has been a reduced need for the task force, which speaks volumes about their impact and effectiveness. And yet, always looming on the horizon is the threat of another economic downturn.
Our research has shown that, as the years go by, new volunteers are few and far between. Who wants to join a vacant property task force when vacant property doesn’t feel like a pressing issue? Compounding this are barriers to entry. Some seasoned volunteers don’t respect input from newer volunteers who weren’t around when the issues the VPTF addresses were at their peak. In some cases, instead of passing on the knowledge and experience they have accrued over the past decade, the more experienced volunteers prefer to continue doing the work themselves rather than explain how to do it to someone else.
The challenge presented by the GRDC is to optimize documenting procedures so that if and when another economic crisis affects their community, volunteers can be quickly onboarded and mobilized to educate and protect the community from tax foreclosure and the encroaching vacant property and blight issues it brings with it. In the meantime, a resource guide is sought to provide useful tips and guidelines on how the general public can carry out some of this work on their own.
Utilizing data from our background research and contextual interviews, Interwalla constructed an affinity wall to surface the connections among seemingly disparate pieces of information and to find the common thread that binds each of the stakeholders we interviewed not only to their neighborhood but to each other. In this respect, our team’s mission has been to provide subtle yet high-impact information solutions that, if implemented, could have significant positive reverberations throughout the entire GRDC.
Methodological Overview
Contextual Inquiry
Interwalla followed a user-centered design process, primarily utilizing contextual inquiry (Holtzblatt, Wendell, & Wood, 2005). Contextual inquiry is a semi-structured interviewing methodology used to obtain information about the context of use. Users are typically first asked a set of questions, then observed and questioned further as they work in their own environments (Herzon, DeBoard, Wilson, & Bevan, 2010).
Because of the nature of the VPTF’s work (or, more specifically, the current scarcity of it), Interwalla adapted the process and conducted more expansive standardized interviews in which users walked us through specific recent experiences, making up for our inability to observe the work process directly. Our aim was to gather rich detail about work practices as well as the social, technical, and physical environments and the tools users rely on. Contextual inquiry is based on a set of principles that make it adaptable to a range of situations. The technique is generally used at the beginning of the design process and is a reliable method for gathering the kind of information we sought.
According to Herzon et al., the four principles of contextual inquiry are:
- Focus — Plan for the inquiry, based on a clear understanding of overall purpose.
- Context — Go to the user’s environment and observe them do their work.
- Partnership — Engage with the users to reveal unarticulated aspects of work.
- Interpretation — Arrive at a shared understanding with the users about the aspects of work that matter.
Contextual inquiry is most useful for defining requirements, improving processes, learning what matters most to those involved, and informing future projects.
Background Research
In order to achieve focus and plan for the inquiry, each member of Interwalla conducted distinct background research to establish a generalized profile regarding the problem, the client, the sector, and organizational issues as they pertain to the implementation of information systems. This background research was crucial in informing our team before heading into interviews and observations to gather context.
Participant Observation
Matt conducted a participant observation session as a representative of Interwalla at the VPTF monthly meeting held on October 15th. Participant observation is a qualitative method with roots in traditional ethnographic research. It is precisely what it sounds like: the researcher not only observes the activity but participates right alongside the group being observed. This method builds trust and adds depth to the researcher’s insights, while self-reflection helps surface observer bias.
Contextual Interviews
Our interview participants were selected with assistance from our client. We were provided with six individual stakeholders and also sat down with the Community Engagement Manager, for a total of seven individual interviews. Although the VPTF, as a volunteer organization, officially has a flat hierarchy (no member has authority over another), we were presented with a range of subjects: founding members and newer members, the VPTF “Chair”, two members of the GRDC board of directors, and the Community Engagement Manager. This range of stakeholders gave Interwalla a significant cross-section of roles within the program and their relationships to the greater organization, yielding representative insights and adding depth to our inquiry.
The interviews themselves focused on three primary topics: from each stakeholder’s perspective, we endeavored to learn about the task force, its tasks, and the environment in which they occur. Because the Grandmont Rosedale community comprises five distinct neighborhoods, we also sought to learn more about these neighborhoods and the community directly from the residents who have made a commitment to their preservation.
Artifact Survey
Pertinent to our research was a survey of the artifacts in use, both physical and digital. From the client brief we learned that while some task force members are tech-savvy, others struggle with digital technologies. We were also made aware of communication and organizational gaps, as well as tensions between some long-standing members and newer members with new ideas. Any viable recommendation on our part had to consider the tools and technologies each user was familiar with, and the extent to which they could benefit from the digital solutions we had to offer. Moreover, we collected a trove of documents that served as earlier, less formal incarnations of the type of guide the GRDC is seeking to create.
Affinity Wall
The affinity wall was our primary vehicle for data analysis. It derives from the KJ Method developed by the Japanese ethnologist Jiro Kawakita, created in response to the difficulty of assembling complex ethnographic data into a coherent story that yields insights into the people being studied (Scupin, 1997).
As a team, we broke the interviews down into individual “affinity notes”, then pored over them looking for meaningful clusters. As clusters emerged, we wrote a sentence describing the common thread that made each cluster meaningful and put it on a blue sticky note. We then studied the blue notes closely and, where we found meaningful clusters among them, labeled an orange note with a description of their common thread. Some of these orange notes also shared a common thread, so we labeled a green note with the overarching similarity between them. In this manner, we assembled something of a pyramid that tells the tale of the GRDC, the VPTF, and the community in which they reside and serve.
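For the technically inclined, the resulting hierarchy is easy to picture as a small tree. The sketch below (in Python, using hypothetical note labels rather than actual data from our interviews) illustrates the bottom-up grouping from individual yellow observations through blue, orange, and green themes.

```python
# A minimal sketch (hypothetical labels, not real study data) of the affinity
# wall as a data structure: yellow interview notes cluster under blue themes,
# blue under orange, and orange under a green, top-level insight.
from dataclasses import dataclass, field

@dataclass
class StickyNote:
    label: str                                          # sentence on the note
    children: list["StickyNote"] = field(default_factory=list)

# Individual observations from interviews (yellow notes)...
yellow = [
    StickyNote("Uses the city's reporting app to flag blight"),
    StickyNote("Keeps a personal spreadsheet of vacant addresses"),
    StickyNote("Prefers phone calls over email"),
]

# ...grouped under a first-level theme (blue note)...
blue = StickyNote("Members work individually with their own tools", yellow)

# ...which rolls up into higher-level themes (orange, then green).
orange = StickyNote("Work practices are unstandardized", [blue])
green = StickyNote("The VPTF's knowledge lives in people, not documents", [orange])

def print_wall(note: StickyNote, depth: int = 0) -> None:
    """Print the pyramid as an indented hierarchy, top level first."""
    print("  " * depth + note.label)
    for child in note.children:
        print_wall(child, depth + 1)

print_wall(green)
```

On the physical wall, of course, this structure emerges from the bottom up through discussion and rearrangement; the code only shows the shape of the finished artifact.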

[Figure: the completed affinity wall]
Findings and Recommendations
Overview
We derived several important findings through our background research, artifact survey, and affinity wall analysis. The VPTF has been so effective that it is on the verge of dissolution. And yet the members of the VPTF and their experience have become so integral to the past, present, and future work of the GRDC that a transformation of volunteer roles may be in order as the VPTF revises its mission.
Our goal was to analyze documentation procedures and recommend ways to optimize them so that the processes and experience the early members developed carry into the future. Our research suggests that the VPTF should focus on deploying a webpage and an updated vacant property toolbox to document and preserve the VPTF’s processes, updating them as needed as a continuing resource and model for new volunteers, the general public, and other communities.
The VPTF Needs an Elevated Digital Presence

[Figure: GRDC homepage]
While the GRDC operates a website, programs like the VPTF get little exposure: they do not appear in the header menu, and information about them cannot be found without scrolling halfway down the homepage. Contact Us, by contrast, appears both in the header menu and center stage on the initial loading screen. Our primary finding is that the VPTF program needs a webpage. From the initial client brief, the first meeting, and through the stakeholder interviews and affinity wall analysis, we learned that the VPTF appears ready to create a home for itself on the internet.
Evidence:
- At the first meeting with the client, Interwalla was presented with a number of pamphlets and flyers that had been distributed to neighbors and new residents over the years. Several stakeholders, referencing these artifacts, envisioned an updated version of these documents as a website.
- Through our interviews and recent news coverage, we became aware of interest in using VPTF processes as a model to roll out in other Detroit communities.
- With less vacant property left to report, the work of the VPTF has shifted toward documenting its processes and optimizing their format so that newcomers can use them.
Recommendation: VPTF webpage
When we go to grandmontrosedale.com, we are presented with a responsive, well-designed website that looks great in both mobile and desktop browsers. What it is missing are pages for the various programs the GRDC facilitates. Given the objective of the project, combined with the interest in rolling out the VPTF’s efforts as a model for the rest of the city, Interwalla finds this recommendation pertinent to elevating the VPTF’s visibility and accessibility among the general public.
In this case, a webpage would also serve as a living resource and archive of past and current documentation of the processes and guidelines the VPTF uses. The GRDC website already has a templated design hosted on WordPress, which means much of the work is already done: adding a new page should be relatively easy for the site’s webmaster, Loudbaby. For the VPTF webpage itself, volunteers should come together and collaborate on its content. In addition, we devised a means of increasing the visibility of GRDC programs; see the mockups after the sketch below.
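As a rough illustration of how light that lift is, the sketch below drafts a page through WordPress’s standard REST API. This is a hypothetical example, not part of our deliverables: the credentials are placeholders, and it assumes the site has the REST API enabled and an application password configured.

```python
# A minimal sketch, not a production script: drafting a VPTF page on an
# existing WordPress site through the standard WordPress REST API.
# The username and application password below are hypothetical placeholders.
import requests

SITE = "https://www.grandmontrosedale.com"
AUTH = ("webmaster", "app-password-here")  # hypothetical application password

page = {
    "title": "Vacant Property Task Force",
    "status": "draft",  # publish only once the VPTF has agreed on content
    "content": "<h2>About the VPTF</h2><p>Draft content goes here.</p>",
}

# WordPress exposes pages at /wp-json/wp/v2/pages when the REST API is enabled.
resp = requests.post(f"{SITE}/wp-json/wp/v2/pages", json=page, auth=AUTH)
resp.raise_for_status()
print("Draft created at:", resp.json()["link"])
```

Creating the page through the WordPress dashboard would of course work just as well; the point is only that no redesign or new hosting is required.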

[Figure: mockup giving GRDC programs their own webpages on the GRDC site to increase their visibility]
[Figure: GRDC homepage on mobile]
There is a caveat here: on a mobile browser, only the Support Us button is visible. This matters because mobile browsers are how most people now access the web. To account for this, Interwalla advises either (a) making the Contact button dominant over Support Us and adding a donation banner across the top, or, if possible, (b) making both buttons visible on mobile devices.
An Updated Vacant Property Guide
The primary concern the GRDC presented us with was the need for an updated guide; this was repeated numerous times throughout the interviews. As part of our artifact survey, we considered the previous incarnations the VPTF had created over the years. While a webpage could serve as the digital VPTF guide the GRDC seeks, we found that a digital solution alone would be of little benefit to those who are less tech-savvy.
Furthermore, a physical guide is something that can be distributed to new residents and volunteers, passed out at community events, or utilized by other neighborhood associations.
Evidence:
- During our first meeting with the GRDC, Interwalla was presented with several resources and bulletins used by the VPTF members. We learned that most of these resources were not only undated but outdated as well.
- A common resource used by many members of the VPTF is the Vacant Property Toolkit handbook provided by Detroit Community Resources and the Detroit Vacant Property Campaign in association with the University of Michigan Taubman College of Architecture and Urban Planning. This resource is also outdated and includes names and contact information that are no longer valid.
- Some members of the VPTF do not use digital technology.
- Volunteers expressed a desire to have a guide to distribute to new residents.
Recommendation: A GRDC-branded Vacant Property Toolbox (long-term)
We recommend that the VPTF work collaboratively to establish branded physical guides. In web design, we structure information for mobile viewing first, since that is how most people will see it. This forces designers to establish an information hierarchy: think hard about which content is most important for people to access, and proceed from there. We found that the same approach would prove useful for updating the printed materials used as a guide and resource manual.
In the short term — A Business Card:

[Figure: business card mockup with key links on the back]
As an example, we have included a mockup of a business card that serves as a physical manifestation of the mobile-first design considerations described above. The card is a quick, easily distributed resource that shares primary contact information and links to the tools and resources the VPTF uses most. Such a card can be distributed to new residents, displayed at local businesses, passed out at community events, or carried by volunteers and handed out at a moment’s notice should the need arise.
In the short term — A Brochure:
Expanding on the information contained on the card, a trifold brochure could be printed and distributed in much the same fashion. The idea here is that the brochure would contain the same information as the card with the addition of the next most important information as determined by the VPTF. Referring to our web design analogy, a brochure is akin to designing for a tablet.
For the long term — An Updated Vacant Property Toolbox:
Further expanding on the information contained in the brochure, we arrive at an updated toolbox. This toolbox need not be created from scratch: our recommendation is to review previous incarnations of the Vacant Property Toolbox to determine what information is outdated and what is still salient. The task force comprises members who know what information is useful and what can be discarded. The toolbox should be more expansive, providing a level of detail on par with a training manual, and, as always, written with its audience in mind. It should document not only the methods and resources the VPTF uses but also what makes the VPTF’s contributions to the community so effective.

In coordination with our previous recommendation, we advise that a digital version of the toolbox be preserved so that new editions can be revised over time rather than created from scratch. Outdated versions can be archived on the VPTF webpage, and current versions can be viewed on the web or downloaded as a PDF and printed for distribution or personal use. A further advantage of this undertaking is that the GRDC can apply its branding for greater exposure, elevating the status and impact of the VPTF’s work, particularly as the toolbox is adopted in other Detroit neighborhoods and beyond.
The VPTF Can Improve Fundraising Efforts
Fundraising is central to strategic planning at the GRDC, but many past VPTF fundraising projects have been high effort for little to no return.
Evidence:
- The VPTF meeting on October 15th focused primarily on soliciting fundraising ideas from members, and confirmed that while the VPTF is central to the GRDC’s strategic plans, many past fundraising efforts have yielded little return for the effort.
- One member mentioned that residents would often prefer simply to give money to the organization rather than buy something like a holiday wreath, even though the wreath sale remains the most successful fundraiser to date.
- Another member suggested a holiday movie screening. Another idea was a mobile display that could be set up at various community events, staffed by volunteers, and used to simply ask for donations.
- Some stakeholders confirmed in their interviews that much of the VPTF’s recent work is fundraising to buy supplies like boards and equipment. They reiterated that VPTF fundraisers have typically been a lot of work for little return, and that with every year that goes by, everyone gets older. The VPTF is looking for more efficient, less physical ways to raise money.
- In addition, in recent years there has been less need for boarding up houses; more of the work is researching and reporting vacant property or code violations, either to the owner or management firm, and then following up with the city if necessary to issue a ticket and spur action.
Recommendation: Crowdfunding
“Crowdfunding is the best way to expand a nonprofit’s donor base.”

[Figure: crowdfunding statistics, courtesy of https://www.mobilecause.com/crowdfunding-for-nonprofits/]
This recommendation was raised at the October 15th VPTF meeting, and Interwalla agrees that it is the best move going forward to optimize fundraising with information technology. Crowdfunding leverages the power of social networking by engaging not only the organization’s connections but also those of its members.
What sets crowdfunding apart from more traditional donation pages is the more personalized touch. Crowdfunding often comes with pictures or short videos that highlight the impact of the organization or in some instances, illustrate the problem that needs solving.
As the statistics above suggest, crowdfunding is an effective way to expand a nonprofit’s donor base. Similarly, peer-to-peer fundraising may also be utilized. This is essentially the same as crowdfunding, but it puts the volunteers in control of producing and promoting the campaign, which can be ongoing or have a set deadline and target amount.
The VPTF Can Improve Collaboration
Any team or organization can benefit from improved communication and collaboration, and the VPTF is no different. In our recommendations above, we elaborated on our key findings and provided recommendations on what the GRDC and VPTF could do to optimize documentation and reporting procedures. What follows is a recommendation for how. The affinity wall analysis alone yielded rich data and several key findings.
Evidence:
- First, Matt observed communication and collaboration issues at the VPTF meeting despite concerted efforts to manage and facilitate discussion and solicit ideas.
- Some members talk over others and don’t respect other people’s ideas.
From the stakeholder interviews and affinity wall analysis we learned:
- Most of the traditional VPTF work can be conducted alone or with a partner.
- Overwhelmingly, the most consistent sentiment among stakeholders was a sense of pride in the impact the VPTF has had on the community and the strong sense of identity that came with being a part of that.
- While everyone agreed that the overarching mission of the VPTF was to not be necessary anymore, there was a consistently expressed desire to perhaps reevaluate the mission of the group rather than dissolve it altogether.
- Ideas about where that might lead diverged.

[Figure: design thinking is an iterative approach to problem-solving]
Recommendation: Co-design
You don’t have to be a designer to benefit from design thinking. Design thinking strategies are highly effective problem-solving strategies that are increasingly employed with great success across industries, in both the public and private sectors, and in particular by nonprofits.
For more information, see: https://www.nten.org/article/design-thinking-a-powerful-tool-for-your-nonprofit-0/
Co-design is very similar to participatory design, which advocates for changing not merely the systems, but the practices of system-design and building, to support democratic values at all stages of the process. “From participatory design, we draw several core principles, the reflexive recognition of the politics of design practice and a desire to speak to the needs of multiple constituencies in the process” (Sengers et al., 2005).
Co-design differs from participatory design in that it asserts that users can design the solution for themselves. There are two potential pitfalls in this approach that any organization adopting a co-design process needs to be aware of.
- When the designer falls back into more of a support role, the result is often design by committee. To counter this, clear leadership is required in order to keep focus and make tough, holistic design decisions.
- If co-design is used as research it can be quite effective, but then it is research, not co-design.
Users can, however, participate in and employ the iterative strategies designers use to come up with and build on ideas, and co-design is a strategy proven to work at scale, from international campaigns to open-source projects, and even in small team environments and agency work (Casali, 2013).
In fact, the GRDC building has a fantastic space in which to facilitate neighborhood workshops (something the GRDC already does) or design jams to ideate innovative solutions to neighborhood problems. In this respect, the GRDC would be leveraging community involvement through resident and volunteer participation, strengthening the social fabric of the neighborhoods and reinforcing the GRDC as the hub of the Grandmont Rosedale community at large. Neighborhood organizations work best when residents and volunteers seek to maximize interdependence and participation within the community (Huggins, 2002). While a participatory design strategy was beyond the purview of our project, we recommend that a GRDC-facilitated co-design process be implemented in the long term as a means of community-based input into the ongoing design, implementation, and management of data and information systems now and in the future.
“How Might We?”

[Figure: the Interaction Design Foundation provides free resources on design thinking]
You may be asking, “Great, but how might we get started?” (Get it?) The good news is, you’ve already begun. Design thinking sounds new and different, but it is simply a set of iterative techniques you can employ in any team environment to achieve better results, putting the problem-solving methods used by innovative organizations around the world to work for you.
In the design process, we do research in order to understand user needs and define the problem. In co-design, because the users are the designers and the problems the organization wants to focus on are already defined, Interwalla recommends beginning with, “How might we…?” This is a simple, yet powerful rephrasing of the established problem that opens the floor to explore a range of possibilities uncovered in the ideation phase.
- How: We ask “how” because we don’t yet have the answers we seek. Beginning with “how” helps participants explore a variety of possibilities instead of diving straight into what we think the solution should be.
- Might: The usage of “might” is important as it emphasizes that our ideas are only possible solutions and that we shouldn’t be too attached to the initial ideas that spring to mind.
- We: “We” is critical to the overall co-design strategy as it immediately implies and reinforces that this is a collaborative effort and that the solution will be found through teamwork.
According to the Interaction Design Foundation (IDF):
“How Might We” (HMW) questions are the best way to open brainstorm and other ideation sessions where you explore ideas that can help you solve your problem. By framing your problem as HMW questions, you’ll prepare yourself for an innovative solution.
For more information please see: https://www.interaction-design.org/literature/article/define-and-frame-your-design-challenge-by-creating-your-point-of-view-and-ask-how-might-we
Brainstorming

[Figure: good brainstorming is at the heart of innovation]
Brainstorming is a well-known activity commonly employed by teams within organizations to generate many ideas for solving a problem. But many brainstorming sessions are unstructured and ultimately fail to achieve optimal results.
Brainstorming is a useful tool at any point in a design or work process and is often utilized throughout. As an example, for this project, we brainstormed interview questions and used the interview data to brainstorm problem statements in order to brainstorm ‘how might we’ questions that we then brainstormed answers to.

[Figure: the “double diamond”; this diverge-converge pattern often repeats several times over the course of a project]
At any stage of the design thinking process, when you need to generate ideas to solve a problem or challenge, the goal should be many ideas that diverge from one another. You then take these ideas, as Michelangelo took a block of marble, and whittle away the superfluous pieces until you reveal the masterpiece hidden inside.
We recommend the co-design strategy as a means of optimizing website and guidebook content while reinforcing group cohesion and community interdependence. Soliciting ideas from VPTF volunteers is a major part of team meetings, and a closer look at how to develop HMW questions and run brainstorming sessions can make these meetings more productive and fruitful.
Over the years, some of the most innovative design thinking experts, from the world-famous IDEO and Stanford’s d.school, have developed best practices that the GRDC can implement to provide structure to ideation sessions and meetings, including but not limited to the VPTF’s.
- Set a time limit
It’s important to set aside a specific period in which everyone in the group operates in brainstorm mode.
The facilitator needs to stress the importance of withholding judgment and keep the group focused on generating as many ideas as possible.
The worst possible ideas are encouraged!
- Stay focused on the topic
Brainstorming should always address a specific question.
Attempting to address multiple questions in a single session doesn’t work.
“How might we…?” questions tend to be the best.
- Defer judgment or criticism, including non-verbal
Brainstorming sessions are not the time to judge or criticize.
It’s crucial that participants feel confident and safe to put forward wild ideas.
The best ideas come from those who dare to be different.
- Encourage weird, wacky, and wild ideas
At best, you get an incredibly innovative solution.
At worst, you get an idea you don’t use.
Wild ideas often give rise to creative leaps.
- Aim for quantity
The more ideas, the better chance you have to innovate.
- Build on each other’s ideas
Brainstorming works well when participants build on each other’s ideas.
Our minds are highly associative, and one thought can trigger another.
Building on each other’s ideas helps participants get out of their own thinking structures when they can’t come up with anything else.
- Be visual
At UMSI, we are particularly fond of Post-Its.
Use sketching or models to visualize your idea.
Act it out (bodystorming).
- One conversation at a time
You’re here to ideate together.
Don’t obsess over your own idea.
After time expires or all ideas are presented, select the best ideas through methods like “Post-It Voting” or “Bingo Selection”.
You can take the best ideas and build on them in further brainstorming sessions.
For more information please see: https://www.interaction-design.org/literature/topics/brainstorming
Conclusion
Our research and analysis yielded key information regarding the VPTF’s procedures, concerns, and team collaboration. We translated these findings into clear, executable recommendations that the GRDC can implement both short term and long term.
In conclusion, our recommendations address our key findings. The VPTF webpage provides the task force with an elevated digital presence and a hub for past and present documentation by its members, which is pertinent given the VPTF’s desire for greater publicity and new member involvement. Given the VPTF’s history of fundraising efforts, we suggest crowdfunding as an effective way to expand its donor base and improve fundraising overall. Given the expressed desire to revisit, and perhaps change, the VPTF’s mission, as well as the strong sense of community and identity within the task force, we suggest co-design strategies to foster greater collaboration in the team’s decision-making. Lastly, we suggest these strategies be used to organize a team-wide effort to update the VPTF’s outdated documentation. We hope that our recommendations will meet the needs and concerns of the Vacant Property Task Force and fit within the larger mission of the Grandmont Rosedale Development Corporation.
Bibliography
Battersby, L. (2017). Co-creation Methods: Informing Technology Solutions for Older Adults. Human Aspects of IT for the Aged Population: Aging, Design and User Experience, 77–89.
Goddeeris, A. (2014). Securing Neighborhoods. Agora Journal of Urban Planning and Design, 110–118.
Herzon, C., DeBoard, D., Wilson, C., & Bevan, N. (2010, January). Contextual Inquiry. Retrieved from Usability Body of Knowledge: http://usabilitybok.org/contextual-inquiry
Heugens, P., & Drees, J. (2013). Synthesizing and Extending Resource Dependence Theory: A Meta-Analysis. Journal of Management, 1666–1698.
Holtzblatt, K., Wendell, J. B., & Wood, S. (2005). Rapid contextual design: A how-to guide to key techniques for user-centered design. San Francisco, CA: Elsevier/Morgan Kaufmann.
Huggins, M. (2002). Volunteer Participation in Urban Neighborhood Organizations: An Exploration of Individual and Contextual Characteristics. East Lansing: Michigan State University Dept. of Resource Development.
Massey, P. A. (2019, June 4). City hires new director to help close Detroit’s digital divide. Michigan Chronicle, 82(38), B6. Retrieved from https://proxy.lib.umich.edu/login?url=https://search-proquest-com.proxy.lib.umich.edu/docview/2253095182?accountid=14667
Nonprofit Finance Fund. (2019, November 11). Grandmont Rosedale Development Corporation. Retrieved from GuideStar: https://www-guidestar-org.proxy.lib.umich.edu/profile/38-2885952
Scupin, R. (1997). The KJ Method: A Technique for Analyzing Data Derived from Japanese Ethnology. Human Organization, 233–237.
Sengers, P., Boehner, K., David, S., & Kaye, J. (2005). Reflective Design. Proc. 4th Decennial Conference on Critical Computing (pp. 49–58).
Wastell, D. G. (1999). Learning Dysfunctions in Information Systems Development: Overcoming the Social Defences with Transitional Objects. MIS Quarterly, 23(4), 581–600. Retrieved from https://misq.org/cat-articles/learning-dysfunctions-in-information-systems-development-overcoming-the-social-defenses-with-transitional-objects.html
Wells, M. A. (2009). Perceptions of Knowledge Gatekeepers: Social Aspects of Information Exchange in an Organization Undergoing Change. Sydney: University of Western Sydney School of Management. Retrieved from https://researchdirect.westernsydney.edu.au/islandora/object/uws:7822/datastream/PDF/download/citation.pdf