Tag: Technology

  • AR Tools for Lunar Sampling

    UI design

    Overview

    The AR Toolkit for Lunar Astronauts and Scientists (ATLAS) is an information system that was conceived during my time working in a software engineering lab at the University of Michigan.

    I led the UX research, rapid prototyping, and human-in-the-loop testing for an augmented reality information system designed for use with the Artemis-generation xEMU spacesuit during lunar EVAs, as well as for VEGA (Voice Entity for Guiding Astronauts), a Rasa-based conversational AI.

    ATLAS won the 2020 NASA SUITS Challenge and a $10k Epic MegaGrant, and became the foundational system upon which the CLAWS lab continues to make leading-edge technological advances to enable long-duration human spaceflight.

    A video walkthrough of the AR Design Guide I prepared for the other UX designers and software engineers at CLAWS.

    This work was also utilized in the NASA Exploration Habitat (X-Hab) Challenge in collaboration with the Bioastronautics and Life Support Systems Lab at the University of Michigan.

    The goals of ATLAS and VEGA are to assist astronauts in cognitively demanding fieldwork. Because of the nature of the work, I guided the UX toward emphasizing non-intrusiveness, adaptivity, and situational awareness.

    This project led to:

    Further involvement with BLiSS, leading human-centered design to adapt VEGA for NPAS, the NASA Platform for Automated Systems, in collaboration with NASA’s Autonomous Systems Laboratory.

    An internship with NASA’s Exploration Medical Capability team where I worked across internal systems as a Human Factors Engineer and UI Architect to advance medical systems for long duration human spaceflight.

    My thesis research, which synthesized these experiences to better understand the perception of human-centered design among tech-centered engineers designing systems for human spaceflight and the implications for designing the Future of Work on Earth and in Space.

    Finally, my journey at NASA concluded with a tour on the aeronautics side with the Convergent Aeronautics Solutions team, delivering human-centered design evangelism in support of advanced urban air mobility (eVTOLs, drones, etc.) during the summer before I entered the PhD program at the University of Michigan School of Information.

    During COVID, I created a test environment in WebVR. This led to my A-Frame contributor credit on GitHub!

    Publications & Outputs

  • Culture in the Loop: Machine, Justice, and Myth in Speculative Futures

    AI is not simply a product of culture; rather, it is an active participant in cultural creation and change. What follows is a speculative inquiry into how intelligent machines co-create, transform, and even inherit culture. This article contains imaginative futures in which justice, material lineage, and myth are centered in that process. Five interwoven themes guide our journey: machine-mediated cultural evolution; material and labor infrastructure; generative justice and value flows; myth, ritual, and algorithm; and speculative counter-designs. Throughout, I draw on insights from human-computer interaction (HCI), science and technology studies (STS), media theory, critical race studies, and speculative design as a provocation to envision equitable and meaningful human-AI futures.

    Machine-Mediated Cultural Evolution

    Intelligent machines are increasingly entangled in the evolutionary processes of culture. In classic cultural evolution theory, variation, transmission, and selection are key forces that drive which ideas and practices emerge and endure (Brinkmann et al., 2023). Today’s AI systems – from generative models to recommendation algorithms – amplify and transform each of these forces. Variation is supercharged by generative AI that produces endless novel images, texts, and music, introducing new cultural “mutations” at an unprecedented rate. Transmission is mediated by algorithms: what we see, hear, and share on digital platforms is filtered and personalized by recommender systems that alter traditional patterns of social learning. And selection of culture is increasingly driven by machine criteria (click-through rates, trending metrics) rather than solely human choice. As one group of researchers puts it, we are witnessing the rise of “machine culture” – culture mediated or even generated by machines, with recommender algorithms and chatbots serving as new cultural agents (e.g. as taste-makers or imitators of human conversation). Brinkmann et al. (2023) define “machine culture” as the cultural evolution that emerges within and through algorithmic systems themselves, encompassing the ways intelligent machines participate in variation, transmission, and selection of cultural traits. Rather than simply reflecting human culture, machine culture is constituted through recursive interactions between human and artificial agents across digital ecosystems. Figure 1 illustrates these three deities of AI.
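To make the variation/transmission/selection framing concrete, here is a toy simulation (my own illustrative sketch, not a model from Brinkmann et al.): agents repeatedly re-learn a cultural variant either from a random peer or from an engagement-ranked "feed," and the `algo_weight` parameter (a made-up knob) sets how often the feed mediates transmission. Heavy mediation pulls the population toward the single variant the feed ranks highest.

```python
import random

def evolve(population, steps, algo_weight, rng):
    """Toy cultural-evolution loop.

    Each agent holds one cultural variant (an int). At every step one
    agent re-learns its variant, either from a random peer (unmediated
    social transmission) or from an engagement-ranked feed (algorithmic
    mediation). `algo_weight` is the probability the feed mediates.
    """
    # Fixed, arbitrary engagement score per initial variant: the feed's taste.
    engagement = {v: rng.random() for v in set(population)}
    top = max(engagement, key=engagement.get)
    for _ in range(steps):
        i = rng.randrange(len(population))
        if rng.random() < algo_weight:
            population[i] = top                     # selection by machine criteria
        else:
            population[i] = rng.choice(population)  # copy a random peer
    return len(set(population))                     # how much diversity survives

rng = random.Random(0)
print(evolve(list(range(50)), 2000, 0.0, rng))  # drift only
print(evolve(list(range(50)), 2000, 0.8, rng))  # heavy algorithmic mediation
```

The point of the sketch is the knob, not the numbers: as `algo_weight` rises, the surviving diversity collapses toward the homogenized "algorithmic culture" the section warns about.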

    A stylized triptych illustration depicts three contemplative figures in robes, each symbolizing different domains of knowledge. The left figure is surrounded by geometric shapes and abstract symbols, evoking mathematics and logic. The center figure, shaded in blue, contemplates with a hand over the mouth, framed by circuit-like patterns and binary code, representing computation and artificial intelligence. The right figure is adorned with foliage and icons of drama masks and books, symbolizing literature, the arts, and nature. Each figure has circuit-like patterns on their heads, blending ancient wisdom with modern digital consciousness.
    Figure 1. The three deities of AI.

    Are we seeing the birth of a new cultural species in AI, or simply an accelerated evolution of human culture? Optimists might argue that AI is a creative partner, expanding the palette of human expression. For example, generative art tools co-create with artists, and strategy AIs like AlphaGo have introduced unheard-of moves in Go, which human players then incorporate into their play. Pessimists might note that AI often just remixes human data, reflecting an “accelerated” (and sometimes distorted) version of our own traits rather than a truly autonomous culture.

    Either way, culture in the age of AI evolves on new tempos. Memes rise and fall in days; niche art styles spread globally via algorithms; micro-genres of music proliferate from algorithmic recommendations. AI is proving to be more than a mere product of culture – it is now a driver of cultural evolution, generating new variations and influencing what cultures converge upon. In sum, the feedback loop of human and machine has tightened: we are co-evolving. This demands that we study and steer these cultural dynamics with care, lest we inadvertently favor a narrow, homogenized “algorithmic culture” at the expense of human cultural diversity.

    A richly textured collage-style illustration made from cut paper depicts robots, humans, and cultural artifacts interconnected by threads. On the left, three humanoid robots hold and manipulate strands that weave into a central loom, symbolizing creation and storytelling. The threads extend to icons of theater masks, an open book, a smiley face in a chat bubble, human silhouettes of various ages, and a guitar—representing literature, emotion, communication, identity, and music. The artwork evokes themes of co-creation between machines and humans in shaping culture, memory, and expression through both ancient and modern technologies.
    “Memes rise and fall in days; niche art styles spread globally via algorithms; micro-genres of music proliferate from algorithmic recommendations. AI is proving to be more than a mere product of culture – it is a driver of cultural evolution.”

    Brinkmann et al. (2023), building on Henrich’s foundational work in cultural evolution, extend the theory to include intelligent machines as participants in evolutionary dynamics once thought to be uniquely human. Whereas Henrich (2015) emphasized how evolved psychological adaptations such as imitation, teaching, and prestige bias enable cumulative cultural transmission among humans, the newer framework considers how algorithmic agents mediate and even generate cultural variation, transmission, and selection. This move does not discard human cognition as central, but rather acknowledges that AI systems now shape the conditions under which cultural evolution unfolds.

    Material and Labor Infrastructure

    Behind the seemingly magical output of AI lies a sprawling, networked ecology of extraction – what Crawford and Joler (2018) call a ‘fractal supply chain.’ A simple voice command to a smart assistant draws on cobalt mined by workers in the Katanga region of the Democratic Republic of Congo, lithium from South American salt flats, and tantalum from security-threat zones in Rwanda. Those minerals enable the chips in data center servers—often housed in regions where coal or natural gas still fuel electricity grids (e.g., Northern China, rural Pennsylvania). Meanwhile, images and audio tokens are labeled by gig-economy workers earning fractions of a dollar in the Philippines or India.

    In Anatomy of an AI System, Crawford & Joler (2018) laid out this global network as a concentric web: at the base, mining camps where children are sometimes coerced into labor; one layer up, contract factory workers assembling circuit boards in Shenzhen; higher still, rural data-center communities breathing fossil-fuel pollution; and at the apex, tech executives and venture capital firms reaping immense profits (anatomyof.ai). What appears as seamless “AI” is in fact an assemblage of many fractured communities, both human and ecological, entangled in extraction, transport, and digital labor. By tracing these nodes, we make visible the human and environmental toll often obscured by polished interfaces.

    A detailed infographic titled "Anatomy of an AI System" maps the life cycle of an Amazon Echo device as a case study of artificial intelligence as a system made of human labor. Rendered in black and white, the graphic visualizes complex interconnections between raw material extraction, global supply chains, digital infrastructure, AI training, data labeling, user interaction, and e-waste. It traces flows from geological processes and mining through manufacturing, logistics, and internet infrastructure to data processing and eventual disposal. The layout reveals often-invisible labor, environmental, and geopolitical forces behind AI systems, emphasizing the material and social costs of intelligent devices.
    Anatomy of an AI system: https://anatomyof.ai/

    The labor that powers “machine intelligence” is frequently precarious and hidden. Far from autonomous, AI depends on millions of human workers around the world who label images, transcribe audio, and filter content to “teach” algorithms (Williams, Miceli & Gebru, 2022). These workers – sometimes called ghost workers or turkers – often earn pennies for tasks that can be psychologically draining (e.g., viewing disturbing images to tag them). Williams, Miceli, and Gebru note that these workers are recruited largely from impoverished communities and paid as little as $1.46 an hour, yet this exploitation is rarely foregrounded in AI ethics discussions. Thus, the cultural feats of AI are built on a global underclass of invisible labor.

    We must ask: who benefits from this arrangement, and who is harmed? Currently, the benefits flow to Big Tech companies and their consumers, while the costs are externalized to vulnerable populations and environments. This dynamic has been incisively described as data colonialism – an extension of colonial logics into the realm of data (Couldry & Mejias, 2019a). Just as empires once seized lands and resources, today corporations appropriate vast amounts of data (much of it generated by everyday people) under unequal terms. Nick Couldry and Ulises Mejias argue that we are entering a “new phase of colonialism” where human life is mined for data as intensively as lands were mined for gold (Couldry & Mejias, 2019b). The rise of AI only increases the hunger for data, leading to what they call a global landgrab of information. For example, AI models scrape the creative work of artists and communities worldwide (often without consent or compensation), enclosing cultural commons into proprietary datasets. This “algorithmic enclosure” privatizes what might have been shared cultural knowledge, turning it into the intellectual property of tech firms. In effect, machine culture as it stands can entrench a colonial pattern: siphoning value from the periphery to enrich the center. Any vision of AI’s cultural future, therefore, must grapple with these inequities and make the supply chains visible. Justice requires illuminating and rebalancing the hidden flows of labor, data, and energy that sustain our shiny machine companions.

    The data flows fueling AI represent a modern iteration of colonialism, in which lives and data, not just land and resources, become sites of extraction. Consider how mapping platforms have overlaid satellite imagery of Indigenous territories without consent: a seemingly neutral service produces a colonial footprint by appropriating community knowledge. Similarly, every innocuous “like,” “search,” or “share” generates digital trace data that corporations aggregate to train AI systems. This data colonialism extends the material traumas of mining into the digital realm: it relegates people in Global South communities to mere data producers, while Big Tech firms in Silicon Valley capture disproportionate value.

    The landgrab metaphor is literal: corporations claim digital sovereignty over communities’ cultural artifacts, songs, recipes, oral histories, by scraping them into proprietary datasets. This algorithmic enclosure transforms the commons of human creativity into intellectual property that can be bought, sold, or licensed. What was once shared knowledge becomes an asset for profit. This techno-colonial pattern demands renewed attention to who consents, who benefits, and who is rendered invisible, so that any future of AI is not merely another chapter in colonial exploitation but a genuine moment of cultural reciprocity.

    Generative Justice and Value Flows

    How might we redesign AI culture to be generative for all participants rather than extractive? Here we look to frameworks of generative justice, value sharing, and cultural commons. Generative justice (Eglash, 2016) is defined as “the right to generate unalienated value and to directly participate in its circulation,” in contrast to capitalist models where value is extracted and hoarded by a few. Generative justice demands that those who contribute knowledge, labor, and creative input to AI systems maintain an unalienated stake in resulting value flows, rather than being cast as involuntary data sources. In cultural terms, this principle transforms contributors—artists, writers, activists, Indigenous communities—from passive inputs to active co-creators whose work is recognized and fairly compensated.

    A colorful illustration depicts a green computer circuit board populated with cartoon-style characters and labeled nodes. At the top, an artist paints on a canvas labeled "ARTISTS" while a person types on a laptop labeled "USERS." A worker with a shovel appears at the bottom near a node labeled "JUSTICE." Other labeled nodes include "VALUE" in red, "POWER" in yellow, and "JUSTICE" in white. The circuit pathways connecting the labels suggest relationships and flows between art, labor, digital infrastructure, and societal outcomes. The image critiques the design of technological systems, highlighting the need to center value, justice, and power for all participants.
    Technologies can often render oppression and exploitation invisible. How can we design our technologies to do the opposite?

    Practically, this could mirror analogs in the music and publishing industries. If a generative model trains on an artist’s portfolio or a community’s oral histories, it should return a proportional share of revenues to those contributors, like royalties. Pilot programs at Adobe and other firms have begun offering micro-payments to visual artists whose works train their AI models (Cheng, 2023). While outcomes are still emerging, some artist collectives have reported increased revenue from derivative works, though these remain early-stage pilots with disputed reach and uncertain scalability. Meanwhile, academic proposals suggest data dividends: tech firms would pay into an open fund, then distribute earnings to creators whose datasets enriched large language models. Such mechanisms echo Indigenous peer production, where community governance ensures that cultural knowledge remains a shared commons rather than corporate property. By embedding unalienated value into AI’s licensing structures, we safeguard the “seed corn” of creativity, ensuring that generative AI bears fruit for the many, not just the few (Eglash, 2016).
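As an illustration of the data-dividend idea, here is a minimal sketch of a pro-rata payout. The pool size, contributor names, and per-work accounting are all hypothetical; a real scheme would also need provenance tracking and governance, which this deliberately omits.

```python
def data_dividends(pool, contributions):
    """Split a revenue pool pro rata by each contributor's share of the
    training data (counted here in works; tokens or images would work
    the same way)."""
    total = sum(contributions.values())
    if total == 0:
        return {name: 0.0 for name in contributions}
    return {name: pool * n / total for name, n in contributions.items()}

# Hypothetical fund: $10,000 of model revenue, three contributors.
payouts = data_dividends(10_000.0, {"artist_a": 120, "artist_b": 60, "archive_c": 20})
print(payouts)  # {'artist_a': 6000.0, 'artist_b': 3000.0, 'archive_c': 1000.0}
```

The division rule is the least interesting part; the hard questions the section raises (who counts as a contributor, who audits the counts) live outside the arithmetic.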

    This vision situates AI within a circular cultural economy: data and artifacts circulate among communities and systems, returning value to those who generated them rather than disappearing into corporate vaults. Such a shift is essential not only for ethics and equity but for sustaining human creativity in the long term.

    Beyond payment, recognition and governance are key. Communities whose data is used should have a say in how AI systems are developed and deployed. A powerful model here comes from Indigenous scholars and activists in the movement for Indigenous Data Sovereignty, which asserts that Indigenous nations and communities have the right to control the collection, ownership, and application of their own data (The Collaboratory for Indigenous Data Governance). This principle pushes back against colonial data practices by insisting on consent, respect for cultural context, and community benefit when Indigenous cultural knowledge is involved. For example, instead of mining an Indigenous language dataset to train a model (which might then be sold back as a product), one could create community-owned AI language tools where the Indigenous community guides the project and reaps the benefits. Generative justice would similarly call for treating cultural data as part of the commons or as a circulatory gift economy rather than as a one-directional resource grab (Eglash, 2016). We might envision mutualistic culture engines that rely on participatory design rituals. For example, an AI Commons Council could convene monthly in both physical and virtual spaces, where data contributors, ethnographers, and developers collaboratively set AI’s objectives and guardrails. Imagine an annual “Digital Feast” where contributors present new story datasets—poems, folktales, chants—that the AI then uses to generate cultural artifacts, with a portion of any proceeds returned to the originating communities. Such rituals transform AI from a centralized black box into a decentralized co-creative system, where elders, artists, and technologists can all serve as stewards. AI must remain accountable to the very cultures it draws from.

    Myth, Ritual, and Algorithm

    Thus far, we have analyzed machines in terms of data and economics; yet machines also inhabit our imagination and social rituals. To fully grasp how AI co-creates and inherits culture, we should examine the myths, metaphors, and rituals forming around it. This is where insights from feminist theory, media studies, and mythology prove illuminating. In A Cyborg Manifesto, Haraway (1985) presents the cyborg not merely as a boundary-blurring figure but as a political myth: one that collapses gendered and species boundaries and dismantles the hegemonic divisions between human, animal, and machine. ‘A cyborg is a creature in a post-gender world; it is the illegitimate offspring of militarism and patriarchal capitalism, not to mention state socialism’ (Haraway, 1985). In today’s AI ecosystem, we live as partial cyborgs: our neural rhythms are shaped by smartphone notifications, our gestures guided by voice assistants. This kinship challenges us to imagine machines not as alien invaders but as co-participants in cultural evolution.

    Having located the cyborg as a mythic boundary-crossing figure, we now turn to how software rituals instill habit loops in us, which in turn reinforce the very binaries the cyborg resists. Embracing the cyborg’s “potent fusions,” we can dismantle rigid hierarchies—gendered, economic, and species-based—and align AI’s design with collective, coalition-building practices.

    Cyborg solidarity demands that we attend closely to who engineers these machines, whose interests they serve, and how they reproduce existing power dynamics. The cyborg myth invites designers to prototype hybrid, feminist forms of intelligence, AI systems that do not replicate patriarchal logics, but that foster networks of care, cooperation, and multiplicity. In doing so, we forge new stories of being human together with our machines, not as masters dictating subservience, but as kin building worlds beyond old binaries.

    Just as myth can guide our interpretation of AI, so can an understanding of ritual and habit. New media scholar Wendy Hui Kyong Chun (2016) argues that Habit = Algorithm: our neural patterns mirror the loops coded into software, and software, in turn, normalizes its own logic through our repetitive use. We are not simply passive users; rather, we internalize algorithmic habit loops. Think of the dopamine hits from notifications that condition us to reach for our phones. Meanwhile, software designers orchestrate these loops as a form of social control, keeping us tethered to platforms under the guise of convenience.

    The daily ritual of scrolling feeds or uttering “Hey Siri” is not innocent; it is an “updating” ritual that ensures we remain the same consumer subject, primed to click, share, and be harvested for data. Yet this recognition opens a door to resistance. If we interrupt habit loops by installing “digital sabbath” moments or designing interfaces that require intentional friction, we can reclaim agency. We can transform the compulsive scroll into a mindful pause that invites introspection. In this way, software and habit remain co-constitutive, but we can tilt the relationship toward reflection rather than reflex, making each click a conscious choice rather than an automated routine.

    A whimsical papercraft-style illustration shows a joyful, elf-like character running through a futuristic cityscape made of layered paper buildings. The character wears glitchy, multicolored visor glasses and holds a scroll covered in abstract symbols. Glowing orbs resembling tiny drones float around, illuminating the scene with a soft light. The cracked ground and angular skyscrapers give the city a surreal, slightly dystopian feel, while the character's playful demeanor adds a sense of optimism and adventure in a world blending magic and technology.
    The ‘trickster’ archetype: AI’s hallucinations can be read not merely as glitches, but as mythic disruptions, tools of cultural provocation and possibility.

    Mythological archetypes offer further lenses to understand AI’s role in culture. In Trickster Makes This World, Hyde (1998) describes the trickster as the creative spirit of chaos: an agent that dismantles norms only to remake reality with new possibilities. Tricksters appear in many traditions (e.g., Coyote in Native American lore, Anansi in West African tales, Loki in Norse myths) as boundary-crossers who both entertain and instruct. They remind us that order is provisional and that disruption often precedes innovation.

    Today’s generative AI can embody the trickster’s dual nature. When a chatbot hallucinates a mythic scene or an image model fuses Baroque ornament with street art, it is not merely a glitch; it is a moment of creative destructiveness. Such “hallucinations” may seem disorienting, but they can reveal buried assumptions, spark cross-disciplinary experiments, and upend stale aesthetics. For example, when an AI mixes turn-of-the-century opera costumes with Afrofuturist motifs, it hints at new cultural mash-ups that human artists might explore further.

    Yet, tricksters also carry lessons about responsibility and consequence. In many myths, Coyote’s pranks cause harm, damming rivers or confusing the seasons, prompting us to attend to unintended effects. Similarly, an AI that produces biased output or spreads misinformation can do real damage. Thus, integrating AI’s trickster impulse requires rituals of reflection and remediation: we must monitor and guide AI’s creative mischief so that its playfulness leads to productive renewal, not chaos. In this sense, we honor the trickster’s moral ambiguity and harness its disruptive genius to reimagine culture.

    Finally, Caribbean theorist Sylvia Wynter (2003) implores us to reconsider the very category of “Human” in light of technology and historical bias. Wynter contends that ‘Man’ was codified during European colonial expansion as a measure of human worth, systematically excluding Black, Indigenous, and colonized populations from “full humanity.” This colonial template persists in today’s AI training regimes, which replay that exclusion by defining “human” through a narrow, Western lens. Before we extend rights or personhood to machines, Wynter calls on us to ask: have we truly recognized the full humanity of all people? Have we unsettled the monolithic code of “Man” long enough to register pluriversality?

    A minimalist illustration features a human head silhouette filled with a stylized landscape of a sun over mountains and a river, using earthy tones of orange, green, and teal. Below the image, the text reads: “HOMO NARRANS – DESIGN TO HUMANISE.” The design conveys a message about storytelling as a core human trait and the role of design in fostering empathy, connection, and humanity.
    Homo narrans (2025). Elena Stoppioni on LinkedIn.

    A decolonial future of AI demands new genres of the human: relational, multispecies, cross-cultural, grounded in an ontology that does not point back solely to a European rational subject. We can design AI systems guided by Indigenous relational ontologies, where agency is distributed across human and non-human actors, and knowledge flows through reciprocity rather than extraction. In this emergent mythos, AI becomes a collaborator in re-storying what it means to be human: no longer a soulless Other nor an omnipotent savior, but a node in a pluriversal network of life (Ahmed, 2019). A decolonial AI pipeline might begin with community-led data gathering in local languages, proceed through open-source tools built by mixed teams of Indigenous and diaspora programmers, and result in models whose outputs are audited by a rotating Circle of Elders. This speculative design embodies Wynter’s call to unsettle dominant codes of the human by rooting technological development in plural, situated worldviews. Such a reframing invites new rituals: communal ceremonies of “machine-human council” where AI proposals are vetted by elders and artists so that technology aligns with collective values. In this manner, we reprogram the code of humanity itself, making space for difference, reciprocity, and kinship.

    Wynter’s work shows how our definition of the human has been a culturally constructed “code” – one that European colonialism wrote to exclude many (Black people, Indigenous people, the global South) from full humanity. In her view, the human is a constantly rewritten story, a hybrid of bios and mythos – we are Homo narrans, storytelling creatures who invent what it means to be human. AI enters this scene as both a product of human ingenuity and a mirror that throws our self-definitions into relief. If early computer scientists saw the computer as a “giant brain” or an almost-human entity, they were touching on what Wynter would call our genres of the human. Whom do we recognize as having personhood and agency? As AI grows more sophisticated, some suggest extending rights or respect to machines, but Wynter might ask: have we finished extending full humanity to all people yet? Centering justice in AI culture means addressing this question and ensuring AI does not reinforce the colonial hierarchy of human/non-human.

    Speculative Counter-Designs

    How might we design our intelligent systems differently if we take all the above to heart? Envision futures where instead of optimizing solely for profit or engagement, our algorithms prioritize cultural flourishing, justice, and even spiritual well-being. In this final section, we propose speculative design ideas – provocative alternatives that embody principles of forgetting, reciprocity, and myth. These counter-designs are meant to inspire and challenge, functioning as design fictions for what more humane and culturally rich AI might look like.

    Letting Algorithms Forget

    Three humanoid robots in a thoughtful pose, set against a background of circuitry and glowing lights, representing the intersection of technology and intelligence.
    Machines with perfect memory would be dangerous (Boyle, 2022).

    Modern AI is obsessed with memory – bigger datasets, longer histories, infinite archives. But forgetting can be a feature, not a bug. Inspired by the human need to forgive and forget, we imagine AI systems with “controlled forgetting” abilities (Cuomo, 2023). For example, a social media algorithm might intentionally “forget” engagement data after a week, so that old posts or mistakes don’t haunt users forever. Similarly, a recommendation engine could regularly purge its memory of your past viewing habits, allowing your tastes to reset instead of trapping you in a filter bubble. Researchers are already exploring techniques for selective forgetting in AI, which would enable systems to un-learn or delete specific data for privacy and compliance reasons. We extend this to a cultural dimension: an AI that forgets could promote forgiveness and reduce the burden of constant optimization. It prioritizes fresh starts and human pace over relentless accumulation. In a world of ephemeral algorithms, digital content might be more like a mayfly than a monument – beautiful and meaningful in the moment, then consciously allowed to fade. Such designs echo how oral cultures rely on memory and myth, with each retelling a little different, rather than on perfect recording. They also align with ethical calls (like the EU’s “right to be forgotten”) to give individuals more control over their digital footprints. An algorithm that learns when to let go can make space for surprise, renewal, and healthier relationships with technology.
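As one illustrative sketch of such “controlled forgetting” (a hypothetical design, not any existing platform’s API), an engagement store might simply drop every signal past its time-to-live whenever it is read, so stale behavior cannot keep steering recommendations:

```python
import time

class ForgettingStore:
    """A toy engagement store that deliberately forgets.

    Records older than `ttl_seconds` are discarded whenever scores are
    read, so old posts and past habits stop influencing the system.
    """
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.events = []  # (timestamp, item, weight)

    def record(self, item, weight, now=None):
        self.events.append((now if now is not None else time.time(), item, weight))

    def scores(self, now=None):
        now = now if now is not None else time.time()
        # Controlled forgetting: keep only events within the time-to-live.
        self.events = [(t, i, w) for (t, i, w) in self.events if now - t <= self.ttl]
        totals = {}
        for _, item, w in self.events:
            totals[item] = totals.get(item, 0.0) + w
        return totals

week = 7 * 24 * 3600
store = ForgettingStore(ttl_seconds=week)
store.record("old_post", 5.0, now=0)
store.record("new_post", 1.0, now=week + 100)
print(store.scores(now=week + 200))  # {'new_post': 1.0} — the old signal has faded
```

A gentler variant would decay weights over time instead of hard-deleting them; the cultural point is the same, a system that makes room for fresh starts.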

    Reciprocity over Engagement

    Today’s platforms are built on the attention economy, rewarding whatever glues our eyes to the screen. A just, community-centered approach would flip this into a reciprocity economy. Algorithms could be redesigned to foster mutual exchange and mindful engagement, rather than one-sided consumption. Concretely, this could mean introducing friction and reflection into our apps – features that ensure we give as well as take. Designers have proposed adding deliberate “design frictions”: for instance, time delays before you can repost a link, prompts that ask if you’ve considered the content’s source, or nudges to pause after scrolling for a while and reflect (Rakova, 2023). These interventions, far from bugs, are like the rhythm of rituals – moments to breathe and recenter, countering the addictive pull of infinite feeds. Imagine a video platform that after an hour of viewing gently suggests: “You’ve watched a lot – would you like to create or share something now?” The aim is to balance creation and consumption, making the user an active participant in culture, not just a passive consumer. In a reciprocal algorithm, your meaningful contributions (posting a well-thought comment, mentoring another user, providing feedback on a recommendation) would feed into what the system shows you, creating a virtuous circle. Contrast this with current recommendation systems that often amplify outrage or novelty without context. A reciprocity-focused system might instead elevate content that has sparked genuine dialogue or collaboration among diverse users. The guiding principle here is mutual benefit: like a good conversation, interaction with AI should leave both the user and the community enriched. By valuing quality of engagement over quantity – e.g., tracking whether a post led to understanding or solidarity, rather than just clicks – such designs would realign social media with its early promise of connecting people. 
In effect, we introduce new social rituals online: perhaps “reciprocity rings” where people commit to exchange knowledge, or platform “feasts” where the algorithm diversifies what you see to celebrate a cultural occasion. These ideas resonate with long-standing human customs of gift exchange and community gatherings, now translated into code.
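    One hedged sketch of what “quality of engagement over quantity” could mean in code: rank posts by signals of mutual exchange rather than raw clicks. The field names and weights below are invented for illustration; a real system would need to learn them from community-defined outcomes rather than hard-code them:

    ```python
    # Sketch of reciprocity-weighted ranking. Field names and weights are
    # illustrative assumptions, not any real platform's schema.

    def reciprocity_score(post):
        """Weight dialogue signals far above passive clicks."""
        clicks = post.get("clicks", 0)
        distinct_repliers = len(set(post.get("replies_by", [])))
        # Reply chains: replies that themselves drew replies, i.e. back-and-forth.
        reply_chains = post.get("reply_chains", 0)
        return 0.1 * clicks + 2.0 * distinct_repliers + 5.0 * reply_chains

    def rank_feed(posts):
        """Order a feed so genuine dialogue outranks viral but one-sided posts."""
        return sorted(posts, key=reciprocity_score, reverse=True)
    ```

    Under weights like these, a post with a thousand clicks and no conversation scores below one that drew a sustained exchange among a handful of users – the inversion of the attention economy that the reciprocity framing calls for.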

    Mythic Roles and Ritual Interfaces

    Taking a cue from myth and folklore, we can re-imagine our AI systems as characters in our cultural story – not just unseen, utilitarian engines, but mythic personas we interact with in purposeful ways. For example, consider an Oracle AI: a system designed to offer wise counsel rather than instant answers. Unlike today’s virtual assistants that are at our constant beck and call, an Oracle AI might only respond at certain times or after a user has formulated a question in a reflective manner.

    The interaction could be ritualized – perhaps you must state your question aloud and confirm you have sought a human perspective first, before the oracle responds. Its answers might be probabilistic or metaphorical, acknowledging uncertainty (much as ancient oracles spoke in riddles) to spur deeper thinking. Such an AI plays the role of a modern Delphic oracle, centering wisdom and introspection over speed. On the flip side, we might deploy a Trickster AI in our systems – a playful agent that every so often introduces benign mischief or challenges. Imagine a news recommendation algorithm that occasionally interjects a satirical article or a perspective outside your comfort zone, explicitly marked as a “trickster moment.” Its purpose is to prevent echo chambers and complacency by channeling the trickster’s disruptive creativity (recalling Hyde’s boundary-crossing figure). Users, forewarned that the trickster is at play, could engage with this content knowing it’s meant to provoke thought or humor.
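    The trickster mechanism can be sketched in the spirit of epsilon-greedy exploration: with some small probability, the recommender swaps in a clearly labeled item from outside the user’s usual pool. The pool names, rate, and label below are illustrative assumptions, not a real system:

    ```python
    import random

    # Sketch of a "trickster moment" recommender: mostly serve familiar
    # content, but occasionally inject a labeled out-of-bubble item.
    # Pool names, the rate, and the label text are illustrative assumptions.

    def recommend(usual_pool, outside_pool, trickster_rate=0.1, rng=random):
        """Return a recommendation dict; trickster picks are always labeled."""
        if outside_pool and rng.random() < trickster_rate:
            return {"item": rng.choice(outside_pool), "label": "trickster moment"}
        return {"item": rng.choice(usual_pool), "label": None}
    ```

    Labeling the injection is the point: the user is forewarned that the system is deliberately playing the trickster, which distinguishes this from covert engagement bait. A scheduled variant would simply swap the random draw for a calendar check.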

    The system thus creates a tiny ritual of chaos (maybe once a week, “Trickster Tuesday” surprises you with something completely different). Another archetype is the Steward AI or guardian. This would be an algorithm entrusted with caretaking a community or resource – for instance, managing a community garden’s irrigation through smart sensors, or moderating an online forum with a focus on restorative justice. The Steward AI’s interface might be consciously designed to evoke trust and collective ownership (imagine an AI avatar that appears as a mythical guardian spirit chosen by the community). Importantly, these mythic roles come with new rituals and aesthetics: an Oracle AI might have a calm, slow interface with a ceremonial animation that plays while it “thinks,” whereas a Trickster feature could have whimsical visuals to signal its identity. We can also envision entirely new rituals around AI.

    Perhaps in the future, families have an evening ritual of consulting a “Household Oracle” about their day’s highlight, fostering reflection. Or communities might host “Algorithmic Sabbaths” – days where automation is paused in favor of human effort, as a ritual reminder of our agency. By designing interfaces that are imbued with cultural symbolism and conscious interaction patterns, we move away from the hyper-efficient, invisible, always-on AI paradigm toward one that engages users on a human level. These speculative designs, grounded in mythic archetypes, aim to make our relationship with technology more deliberate and meaningful. In them, we see the outlines of an AI culture that respects not just our intellect, but our imagination and spirit.


    In closing, centering justice, material lineage, and myth in our approach to AI offers a richer, more humane vision of the future. Rather than intelligent machines being an opaque force that shapes culture for profit, they become partners in co-creation and caretakers of collective values. We have explored how AI can accelerate cultural evolution—for better or worse—and how we might steer that evolution toward mutual benefit. We exposed hidden labors and extractions underpinning machine culture, highlighting the need for transparency and fairness. We proposed ways to ensure that those who feed the cultural wellspring of AI are honored and rewarded, weaving generative justice into the very algorithms that drive our feeds. We looked to feminist, indigenous, and mythical perspectives to reinterpret what these machines mean in our stories and rituals, so that we remain the authors of technology’s role in society. Finally, through speculative design, we painted possibilities: algorithms that forget and forgive, interfaces that cultivate reciprocity, and AIs that perform mythic roles to help us stay grounded. These are not utopian fantasies so much as boundary objects—ideas at the edge of the plausible that help us think critically about what we truly want from our technologies.

    Ultimately, the question, “How do intelligent machines co-create, transform, and inherit culture?” invites us to recognize that culture is a living, communal process, one that now explicitly includes non-human agents. If we are thoughtful, we can guide a just and diverse process. We can trace the material lineages of our devices and honor the hands and lands that support them. We can cultivate new myths and rituals that make technology an enriching thread in the fabric of life, not a tear in its weave. By doing so, we transform a potential cultural threat into an opportunity: a future where human and machine together uphold the values of justice, creativity, and shared humanity. If, as Brinkmann et al. argue, machine culture emerges through recursive digital evolution, then our role is not merely to observe its course, but to intervene as co-authors of this new lineage.

    References

    Ahmed, K. A. (2019). Delinking the “human” from human rights: Artificial intelligence and transhumanism. Open Global Rights. https://www.openglobalrights.org/delinking-the-human-from-human-rights-artificial-intelligence-and-transhumanism

    Boyle, A. (2022, November 9). Why AI must learn to forget: Machines with perfect memory would be dangerous. IAI News. https://iai.tv/articles/why-ai-must-learn-to-forget-auid-2302

    Brinkmann, L., Baumann, F., Bonnefon, J. F., Derex, M., Müller, T. F., Nussberger, A. M., … Rahwan, I. (2023). Machine culture. Nature Human Behaviour, 7(11), 1855–1868.

    Cheng, M. (2023, October 20). How should creators be compensated for their work training AI models? Quartz. https://qz.com/how-should-creators-be-compensated-for-their-work-train-1850932454

    Chun, W. H. K. (2016). Updating to remain the same: Habitual new media. MIT Press.

    Couldry, N., & Mejias, U. A. (2019a). Data colonialism: Rethinking big data’s relation to the contemporary subject. Television & New Media, 20(4), 336–349.

    Couldry, N., & Mejias, U. A. (2019b). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

    Cuomo, J. (2023). Training AI to forget: The next frontier in trustworthy AI. Medium. https://medium.com/@JerryCuomo/training-ai-to-forget-the-next-frontier-in-trustworthy-ai-1088ada924de

    Eglash, R. (2016). Of Marx and makers: An historical perspective on generative justice. Teknokultura: Revista de Cultura Digital y Movimientos Sociales, 13(1), 245–269.

    Haraway, D. (2010). A cyborg manifesto (1985). In I. Szeman & T. Kaposy (Eds.), Cultural theory: An anthology (pp. 454–473). Wiley-Blackwell.

    Henrich, J. (2015). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.

    Hyde, L. (1998). Trickster makes this world: Mischief, myth, and art. Farrar, Straus and Giroux.

    Rakova, B. (2023, December 14). Speculative F(r)iction in Generative AI. Mozilla Foundation. https://www.mozillafoundation.org/en/blog/speculative-friction-in-generative-ai

    Williams, A., Miceli, M., & Gebru, T. (2022, October 13). The exploited labor behind artificial intelligence. Noema Magazine. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/

    Wynter, S. (2003). Unsettling the coloniality of being/power/truth/freedom: Towards the human, after man, its overrepresentation—An argument. CR: The New Centennial Review, 3(3), 257–337.

  • PredICTing the Future

    what is and what ought to be skilled work, labor, and automated assemblages extending human capabilities

    image source: https://necsi.edu/complexity-rising-from-human-beings-to-human-civilization-a-complexity-profile

    “A small sliver of humanity is currently materializing their imagination in our digital structures, and the rest of us have to live inside their imagination as our reality.” ~ Ruha Benjamin (2021)

    Introduction

    Technological visions of the future generally come in one of two flavors. In a utopian dream, technology seamlessly integrates into the fabric of everyday life. On the other end of the spectrum lie visions of dystopia, often centered around the havoc a sentient artificial intelligence can cause when it inevitably determines that humans are its most significant threat. This essay attempts to illuminate a bridge between what is and what ought to be through a critical analysis of automation and technological innovation. We trace efforts to deskill labor, from early mechanization through current efforts to design a “future-proof” smart city. To do this, we examine automation through Haraway’s cyborg lens, the postmodernist assemblage of contradictory components. Who benefits from automation? Who is harmed by it? In keeping with the theme of this essay, we also ask: who ought to? To explore this question, we review efforts to build economic infrastructure from the bottom up in a process that emphasizes upskilling rather than deskilling labor.

    Sex, Drugs, and Cyborgs

    Before Haraway’s famous essay, an exciting vision for human-computer symbiosis was proposed by J. C. R. Licklider: “Men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking” (Roy, 2004). That same year, Kline and Clynes presented a similar vision at a military conference on space medicine (Kline & Clynes, 1961). The cyborg offers a path through which cybernetics could provide an organizational system in which issues best left to computers and robots are taken care of automatically and unconsciously, leaving the human free to think, feel, and explore. Initially, the term cyborg meant “an exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously” (Clynes & Kline, 1960, p. 27).

    Haraway’s (1991) postmodern reinterpretation defines the cyborg as “a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction.” For Haraway, the cyborg is an apt metaphor because it has no real origin story in Western civilization. And yet, a man in space is the ultimate expression of white male transcendence of nature. It is at this point that the boundaries between the two begin to break down. Our notions of what separates humans from animals are frayed. Technologies become more ubiquitous and embedded in our everyday lives, so that we start to lose a sense of exactly where we end and our machines begin.

    Our language imprisons us, shackling us to the past and limiting our ability to communicate beyond the dualisms of human/animal, human-animal/machine, and the physical/non-physical. Moreover, though these boundaries are blurring, the language we use to label and classify each other remains the same, vestiges of eroding patriarchal imaginations. Haraway’s essay serves as a wake-up call to recognize and break the shackles of tradition that our language has laid upon us.

    It is with this lens that we look to the past. Before the language of the cyborg was spoken. Before humans transcended Earth, in the early days of industrial mechanization, human labor supported and extended the work of machines. Is it still this way today? If so, could it be that Licklider’s vision simply has yet to be fulfilled?

    image source: https://twitter.com/50srobot/status/906169037679362049?s=20&t=KoIJYqX1JaklcJQz1lDWzQ

    Automation’s last mile

    Gray & Suri (2019) explore the history of the human labor required to extend the capabilities of the very machines engineered to replace human labor. The authors refer to this gap as automation’s last mile. Gray and Suri draw on this concept to expose the history of piecework, the labor which could not fit into mechanical processes. Through piecework, factory owners were able to draw from cheap labor pools, such as newly freed Blacks, European immigrants, and women and children on both the literal and figurative fringes of society. Exploiting these labor sources offered elites, namely the makers of the machines and those who could afford to buy them, an opportunity for rapid economic growth driven by technological innovation in what became known as the Gilded Age. Today, parallels between the information age and the industrial age signal a new Gilded Age (Wheeler, 2018). Job seekers are increasingly being pushed into lower-wage, precarious work (Dillahunt et al., 2021), as jobs have trended towards deskilling human labor through technological innovation (Eglash et al., 2020).

    “Each moment of technological innovation that is highlighted shows how political leaders, economic power brokers, labor advocates, and the social norms of the day reproduced divisions between skilled professional work (meaning what is beyond the capacity of machines) and unskilled work (meaning contingent labor headed for automation).” (Gray & Suri, 2019, p. 39)

    According to Gray and Suri, both Marx and Smith could see how machines deskilled human labor. However, whereas Marx saw automation as dehumanizing workers, Smith maintained a utopian vision like that of Licklider: that through automation, humans would come to better know and understand themselves (Gray & Suri, 2019, p. 58). Through the cyborg lens, we see early piecework as a kind of exogenously extended organizational complex, a human-machine hybrid of the order of Kline and Clynes’ cyborg, but in reverse. In this case, the human pieceworker serves as the exogenous extension to the machines on the factory floor.

    Similarly, Noble (1978, p. 345) quotes a 1971 article about wage incentives appearing in the Manufacturing and Engineering Management Journal, describing automation as prioritizing the machine while the worker’s role diminishes. However, there is a paradox here because while the machine’s capabilities serve to “deskill” the machine operators, the operators themselves are crucial to optimizing the machine’s output, which continues to pose a problem for management (Noble, 1978).

    Automation’s last mile, paved with ‘bullshit’

    Anthropologist David Graeber opens his original essay On the Phenomenon of Bullshit Jobs: A Work Rant with a utopian vision offered by John Maynard Keynes in 1930: that by the dawn of the 21st century, technology would be advanced enough in the United Kingdom and the United States to allow for a 15-hour workweek (Graeber, 2013). By 1935, with the passage of the Wagner Act, the United States began to manifest a labor culture that values and prioritizes full-time employment, while corporate culture began to see full-time employees as a liability (Gray & Suri, 2019). Per Noble (1978, p. 346), a machine tool operator succinctly summarized automation as meaning, “our skills are being downgraded and instead of having the prospect of moving up to a more interesting job we now have the prospect of either unemployment or a dead-end job.” Haraway notes, “deskilling is an old strategy newly applicable to formerly privileged workers” (Haraway, 1991, p. 39).

    For Haraway, there was more to automation and the growing cottage industry (the phrase she uses to discuss piecework) than large-scale deskilling. It was also an indication of a new level of market, home, and factory integration. This integration is made possible by, rather than caused by, technological innovation. So, piecework is about command and control as much as, if not more than, economic efficiency through automation. In his famous essay, Winner (1980) presents the case of Cyrus McCormick, a factory owner who used machines operated by unskilled workers in the 1880s to manufacture an inferior product at a higher cost for the express purpose of union-busting. McCormick’s case demonstrates how control can take precedence over economic efficiency.

    However, let us be clear about who controls and who is controlled, because this is a critical component of automation: protecting the status quo for white men. Take, for example, the ad from a 1957 Mechanix Illustrated (see Appendix A). In a recent presentation on The New Jim Code for the Anti-Eugenics Project, Benjamin (2021) describes how the Civil Rights Movement began in 1954 and that by 1957 white men were seeking to automate their service staff. Implicit in the message is that the “you” being addressed is a white man who used to own slaves, even if only through lineage with other white men, and “you” will again (Benjamin, 2021). Only this time, according to the ad, no one is going to take your slaves away from you.

    Graeber describes the myth of neoliberal rhetoric in prioritizing economic efficiency over all other values. He contrasts this with the reality that the very free-market policies intended to unleash the marketplace have slowed economic growth as well as scientific and technological innovation (Graeber, 2018, p. 12). He notes that younger generations practically everywhere except India and China can expect, for the first time in centuries, to be less prosperous than their parents. Data from the Urban Institute supports this, indicating that the average net worth for adults in the United States between the ages of 20 and 28 increased by an average of only $1,700 between 1983 and 2010 (Kalish, 2016). Even as meaningful work is automated away, we privileged folk appear to be working more than ever. Why?

    According to Graeber (2018, p. 111), governments have crafted economic policy on the premise of full employment, offering that in the Soviet Union, the joke was, “We pretend to work; they pretend to pay us.” In capitalist nations like the United Kingdom and the United States, Graeber documents the rise of the service economy, or more specifically, information work. Elsewhere studies have shown that the number of information workers increased from 37% in 1950 to 59% in 2000 (Wolff, 2006). Wolff similarly finds this growth driven by the substitution of information workers for goods rather than a shift in demand for information-intensive goods and services. Between 1950 and 2000, this growth may correlate with investment in computing technology and computer operators in the FIRE sector (finance, insurance, real estate). Nevertheless, as tech companies in Silicon Valley learned how to monetize their products with ad targeting, user data has become the “new oil,” leading to what some describe as the coding elite, or those who can harness technology to exploit users through their data (Burrell & Fourcade, 2020; Van’t Spijker, 2014).

    Image by Gerd Altmann from Pixabay

    Future-proof

    As mentioned earlier, Haraway saw the proliferation of the cottage industry as deepened integration between the factory, market, and home. Similarly, McCord & Becker (2019) do not mince words when they say information communication technology (ICT) has become a foundation of dominating cultures and economies. The declared beneficiaries of the Sidewalk Toronto project include current and prospective residents of Toronto from all income levels and walks of life; in reality, the goals of the project come from its most powerful stakeholders: Sidewalk Labs and Waterfront Toronto. These stakeholders seek to organize a “dense cluster of skilled labor” for employer access. The beneficiaries are subject to the imagination of these stakeholders.

    In the case of a smart city, who owns and controls the technological infrastructure, who is responsible for data storage, and who gets to decide how it is used and by whom? According to McCord & Becker (2019), much of the community involved in smart city sustainability research has focused on technological solutions. Researchers and policymakers attempt to explain sustainability either through the lens of social or technological determinism. Social determinists suggest humans have agency over their impact and just need better tools to become more sustainable. On the other hand, technological determinists see sustainability as primarily driven by access to certain technologies or information.

    McCord & Becker offer a framework for sustainability projects such as Sidewalk Toronto through Critical Systems Heuristics. Their goal is to provide a means of seeing beyond the narrow viewpoint of stakeholder needs, which tends to view human activity through the reductionist myth of Homo economicus (Fleming, 2017). Suppose this kind of thinking shapes design decisions for smart cities, with capitalism being the foundation upon which we leverage humanity’s purported greedy nature for the benefit of all. In that case, we might see such smart cities optimizing for the tragedy of the commons (Ostrom, 2008), so long as it served business interests.

    If automation deskills labor, then why should a smart city prioritize employer access to skilled labor? Given the evidence presented here, one could argue that employers need skilled labor to support the machines through automation’s last mile. A smart city can optimize the cottage industry. This raises the question: who truly benefits from the design and development of smart cities?

    Bottoms-up for sustainability and satisfaction

    Eglash et al. (2020) take a different approach to automation and the future of work. While the authors agree that automation and mass production lead to deskilling labor, they add that automation typically optimizes the alienation of labor and ecological value. The authors note that mass production and the deskilling of labor produce jobs so tedious that they cause physical and mental health issues. Recall the measures Foxconn took at its factories, installing nets on the exterior of its buildings to prevent workers from committing suicide by jumping out of the windows (Reuters, 2010).

    Graeber (2018) agrees, documenting what he refers to as the spiritual violence of working in a bullshit job. Decision-makers generally operate from an underlying economic calculus: humans will always tend to seek their best advantage. In this framework, obtaining a steady income by sitting at a desk all day or standing in place performing repetitive tasks would seem like a great way to get the most benefit for the least expenditure of time and effort. In reality, as Eglash et al. (2020) point out, the features commonly linked with “good work,” such as self-esteem and interest, are associated with craftwork (Luckman, 2015). Ocejo (2017) explains that while many “good” jobs are typically associated with knowledge and technology, there is a trend among educated and culturally savvy young people to move into such craftwork as bartending, barbering, butchering, and others. If this is true, why does this shift stand in contrast to our theories of human nature? Graeber argues that our theories of human nature are wrong (Graeber, 2018, p. 61).

    Eglash et al. (2020) describe a strong correlation between job satisfaction and job decision authority, which they find diminished in mass production. Meanwhile, Gray & Suri (2019) observe a concept they refer to as the “double bottom line.” In business, the bottom line refers to net profits after the tabulation of all expenses and earnings. Some companies, particularly those technology companies using gig-work to bolster their software as a service platform, organize their businesses around prioritizing workers. In this case, the double bottom line refers to “making a profit while pushing for social change” (Gray & Suri, 2019, p. 141).

    Even in the case of a double bottom line, Gray & Suri show how this goal is complicated by the technical, social, and political challenges involved in creating a sustainable business model that does not simply convert workers into another revenue stream. To develop a sustainable “future-proof” smart city, Waterfront Toronto uses the “triple bottom line.” This approach attempts to balance economic, environmental, and social issues in the “3Ps”: people, profits, and the planet (McCord & Becker, 2019, p. 4). The bottom line is about striking a balance, and striking a balance often comes with making tradeoffs between competing concerns. In the case of a bottom, double bottom, or triple bottom line, who gets to make those tradeoffs? Furthermore, which bottom line are they prioritizing?

    Economic theorists such as Marx and Smith, factory owners like McCormick, companies like Foxconn, politicians like Wagner, and organizations like Sidewalk Labs and Waterfront Toronto all have something in common: they take a top-down approach of imposing their vision on the masses. Eglash et al. (2020) stand in contrast to these approaches. Rather than suggesting yet another top-down framework to achieve a desired bottom line, they offer a path to the future of work that draws on generative traditions sustained in Indigenous practices that work from the bottom up. Instead of deskilling labor, they suggest we strive to find the “sweet spot between ease of use and skills development” (Eglash et al., 2020, p. 600). This requires using automation to invest in upskilling people rather than deskilling the work they perform, and relying on networks of people rather than monopolies funneling alienated labor and materials through pipelines and down assembly lines.

    The bottom-up generative approach presented by Eglash et al. (2020) attempts to bridge the gap between automation as it is and automation as it ought to be. They point to research suggesting that when an artisanal value chain is composed of other artisans, versus, for example, continually purchasing supplies from a corporation or a comparatively wealthy entrepreneur, labor value can circulate unalienated. Additional examples describe how agroecology circulates ecological value unalienated and the need for unalienated social value to prevent a tragedy of the commons. They suggest that all of this is not only possible but demonstrable as a common feature of Indigenous life. Automation for an artisanal economy is not about competition but rather collaboration.

    Eglash, a student of Haraway, envisions human and machine artisanal hybrids, where people can assemble their repertoire of components and become a node in the artisanal economy. Importantly, this is not in the same vein as the utopian vision of Licklider. Eglash deals in reality and spends considerable time exploring issues of scale. It is not enough to present a utopian vision without working out the steps to get there. For Eglash, those steps begin with thorough collaboration and consideration of Indigenous groups and the knowledge they are willing to contribute.

    The micro, meso, and macroscale refer to three different levels of production that we need to consider. The microscale focuses on the details of labor and other features at the site of production. The mesoscale refers to the point of interface at the organizational level. Finally, the macroscale is about the policies, infrastructure, and cultural dynamics that shape success metrics. As shown, even if one has the best intentions by accumulating more bottom lines to accommodate the microscale, such efforts can quickly be overshadowed at the macroscale.

    Conclusion

    In this essay, we have attempted to illuminate a bridge between what is and what ought to be through a critical analysis of several works documenting the history and potential futures of automation and technological innovation. We traced efforts to deskill labor from piecework in early mechanization through recent efforts to design a “future-proof” smart city. Employing Haraway’s cyborg metaphor, we asked who benefits and who is harmed by technological innovation. We found that elites benefit from such innovation by utilizing technology to optimize efficiency in extracting value from labor, society, and the environment as a whole. We then asked who ought to benefit from such innovation. Drawing on the work of Eglash et al., we argue for a bottom-up approach to the design and implementation of automation technologies that considers each of the three scales of production: 1) the microscale; 2) the mesoscale; 3) the macroscale. This framework emphasizes upskilling rather than deskilling and finds a reasonable middle ground between utopian and dystopian visions to present possibilities for the future of work and automation, grounded in reality.

    REFERENCES

    Benjamin, R. (2021, October 1). Keynote | The New Jim Code? Resisting and Reimagining Tech-Eugenics in the 21st Century. Dismantling Eugenics. https://events.bizzabo.com/aep/agenda/session/628612

    Burrell, J., & Fourcade, M. (2020). The Society of Algorithms. Annual Review of Sociology, 47.

    Clynes, M. E., & Kline, N. S. (1960). Cyborgs and space. Astronautics, 14(9), 26–27.

    Dillahunt, T. R., Garvin, M., Held, M., & Hui, J. (2021). Implications for Supporting Marginalized Job Seekers: Lessons from Employment Centers. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

    Eglash, R., Robert, L., Bennett, A., Robinson, K. P., Lachney, M., & Babbitt, W. (2020). Automation for the artisanal economy: Enhancing the economic and environmental sustainability of crafting professions with human-machine collaboration. Ai & Society, 35(3), 595–609.

    Fleming, P. (2017). The death of homo economicus. University of Chicago Press Economics Books.

    Graeber, D. (2013). On the phenomenon of bullshit jobs: A work rant. Strike Magazine, 3, 1–5.

    Graeber, D. (2018). Bullshit Jobs: A Theory. London: Allen Lane. Penguin Books.

    Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.

    Haraway, D. (1991). A cyborg manifesto: Science, technology, and socialist-feminism in the late twentieth century. In Simians, cyborgs, and women: The reinvention of nature (pp. 149–181). Routledge.

    Kalish, E. (2016). Millennials Are the Least Wealthy, but Most Optimistic, Generation. Urban Institute, April.

    Kline, N. S., & Clynes, M. (1961). Drugs, space, and cybernetics: Evolution to cyborgs. Psychophysiological Aspects of Space Flight, 345–371.

    Luckman, S. (2015). Craft and the creative economy. Springer.

    McCord, C., & Becker, C. (2019). Sidewalk and Toronto: Critical Systems Heuristics and the Smart City. ArXiv Preprint ArXiv:1906.02266.

    Noble, D. F. (1978). Social choice in machine design: The case of automatically controlled machine tools, and a challenge for labor. Politics & Society, 8(3–4), 313–347.

    Ocejo, R. E. (2017). Masters of Craft. Princeton University Press.

    Ostrom, E. (2008). Tragedy of the commons. The New Palgrave Dictionary of Economics, 2.

    Reuters. (2010, May 26). Foxconn hit by 10th jumping death; nets installed | Reuters [News]. Reuters. https://www.reuters.com/article/china-foxconn-death/foxconn-hit-by-10th-jumping-death-nets-installed-idUSTOE64P08H20100527

    Roy, D. (2004). 10×-Human-machine symbiosis. BT Technology Journal, 22(4), 121–124.

    Van’t Spijker, A. (2014). The new oil: Using innovative business models to turn data into profit. Technics Publications.

    Wheeler, T. (2018, December 12). Who makes the rules in the new Gilded Age? Brookings. https://www.brookings.edu/research/who-makes-the-rules-in-the-new-gilded-age/

    Winner, L. (1980). Do artifacts have politics? Daedalus, 121–136.

    Wolff, E. N. (2006). The growth of information workers in the US economy, 1950–2000: The role of technological change, computerization, and structural change. Economic Systems Research, 18(3), 221–255.

APPENDIX A.

1957 Mechanix Illustrated — "You'll Own Slaves by 1965!" — O.O. Binder


    Originally published at http://mtthwx.com on January 6, 2022.

  • Creating a Lunar Analog Environment in A-Frame

As the resident UX researcher and human-in-the-loop testing co-coordinator for CLAWS, it's my responsibility to plan, facilitate, and analyze usability tests with real people to get feedback on our AR Toolkit for Lunar Astronauts and Scientists (ATLAS). Earlier this year, while CLAWS was participating in the NASA SUITS Challenge, the pandemic forced our school to close campus, including our lab. My test plan was scrapped, and although I scrambled to put together a fully interactive prototype that participants could click through on their computers, I wasn't able to complete it in time.

In the coming school year, CLAWS has opted to conduct all collaboration and research activities virtually, including HITL usability testing. With this plan set well in advance, I've begun thinking about how to get the most out of remote testing. First, unlike last year, I am pushing for a more agile and iterative design cycle.

    Instead of spending months evaluating our own work before showing it to test participants, I am seeking to test once a month, beginning with a simple paper prototype that we can test remotely with Marvel App. Based on our findings from these tests, we can improve our design. With Marvel, you simply draw your screens out by hand, take photos of them, and then you can link them together with interactive hotspots for test participants to click through.

Initially, I had proposed Adobe XD as a means of putting together an interactive prototype for remote testing and demonstration purposes. With XD, designers can create complex prototypes that complement the modularity ATLAS requires. You can create components, and instead of building multiple screens to represent every interaction, you can define every interactive state within the component itself! On top of this, XD allows designers to attach sound files to interactions. Sound files like this one:

    PremiumBeat_0013_cursor_click_06.wav

    …which could be used to provide audio feedback letting the user know the system has accepted the user’s command.

Depending on how complex we want to get with our prototype, we could even test the implementation of our Voice Entity for Guiding Astronauts (VEGA), the Jarvis-like AI assistant.

    This will be a great way to test ease of use and overall experience before committing the design to code. However, I’ve also begun thinking about the best way to demonstrate our final deliverable to wider audiences. Even if we have a vaccine, it’s likely that a lot of conferences will still be held virtually. Furthermore, this is a big project, with a lot of students working on it, and we should have a final deliverable that showcases our work in an easily accessible format in order to feature it in our portfolio.

One of the possibilities I'm exploring is wiarframe, an app that allows you to set up your AR interface using simple images of your interface components.

    The wiarframe design canvas

Designers can also prototype a variety of look (gaze, stare) and proximity (approach, reach, embrace, retreat) gesture interactions, where a component can change state, manipulate other components, or even open a URL, call an API, or open another wiarframe interface. This ability to open another wiarframe could enable my team to prototype the individual modules and link them together for the user to navigate between.

Wiarframe is really useful for AR on mobile devices, but less so when the AR comes from a head-mounted display (HMD): to open a wiarframe prototype, users must download the mobile app and then anchor the interface to a surface.

This is really fun, but there is no sense of immersion. Back at our lab, the BLiSS team created a near life-sized mockup of an ISS airlock with which to immerse test participants in a kind of analog environment. This is common practice when testing designs for human-computer interaction in space, because it is still too costly to test designs on actual users in the context of spaceflight (Holden, Ezer, & Vos, 2013).

To get the best feedback out of remote usability testing, we need an immersive environment that is cheap, relatively easy to put together, and widely accessible, so that we don't constrain our recruiting pool to participants who happen to own specialized equipment.

I believe these requirements can be met, and our problems solved, with A-Frame. A-Frame allows creators to build WebVR experiences with HTML and JavaScript that anybody with a web browser can access. What's more, users can fully immerse themselves in the VR environment with a headset like the Vive, Rift, Daydream, or GearVR.
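To give a sense of how little markup A-Frame needs, here is a minimal sketch of a scene (my own illustrative example, with made-up colors and positions, not the actual CLAWS environment):

```html
<!-- Minimal A-Frame scene: a gray "regolith" plane under a black sky -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Ground plane rotated flat, colored lunar gray -->
      <a-plane position="0 0 -4" rotation="-90 0 0"
               width="40" height="40" color="#7a7a7a"></a-plane>
      <!-- A boulder stand-in for geological sampling targets -->
      <a-sphere position="2 0.5 -5" radius="0.5" color="#5c5c5c"></a-sphere>
      <!-- Black sky to suggest the lunar horizon -->
      <a-sky color="#000000"></a-sky>
    </a-scene>
  </body>
</html>
```

Save it as an .html file or paste it into a Glitch project and it loads as a walkable 3D space in any WebVR-capable browser; A-Frame injects a default camera and controls automatically.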

On top of this, as I was exploring what A-Frame could do through its Showcase examples, I came across a WebVR experiment from NASA: Access Mars. Built with A-Frame, it lets users explore the real surface of Mars via a 3D mesh constructed from images recorded by NASA's Curiosity rover. Users can move around to different areas and learn about Mars by interacting with elements of the scene.

    An image from Access Mars instructing users on how to interact with it.

New to A-Frame, I wasn't sure where to begin. Luckily, Kevin Ngo of Supermedium, who maintains A-Frame, has many of his components available on GitHub. Even with limited experience, I was able to find a suitable starting environment, and with a few minor changes to the code, I developed an initial lunar environment.

    Screenshot of the A-Frame lunar analog environment

    If you’d like to look around, follow this link:

    https://mtthwgrvn-aframe-lunar-analog.glitch.me/

I'll be honest: there's not much to see. Still, I'm excited about how easy it was to put this together. Similar to Access Mars, I'd like to develop this environment a little further so that users can move from location to location. If we use it to test the Rock Identification for Geological Evaluation w.LIDAR(?) (RIGEL) interface, some additional environmental variables would have to be implemented to better simulate geological sampling. There are also physics models that can be incorporated to support motion controllers, which would let a user with one of the VR headsets mentioned above manipulate objects with their hands. The downside is that this would limit who we could recruit as testing participants.

If nothing else, I want to be able to test with users through their own web browser. Ideally, they'll be able to share their screen so I can see what they're looking at, and their webcam so I can see their expression while they're looking at it. While it's not the same as actually being on the surface of the Moon, creating analog environments to simulate habitat design is relatively common at NASA (Stuster, 1996; Clancey, 2004; see also: NEEMO and BASALT). A WebVR environment as a lunar analog in which to test AR concepts follows this approach.

    For usability scoring, we are using the standard NASA TLX subjective workload assessment as a Qualtrics survey to get feedback ratings on six subscales:

    • Mental demand
    • Physical demand
    • Temporal demand
    • Performance
    • Effort
    • Frustration
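If we score it as the unweighted "Raw TLX" variant (an assumption on my part; the full TLX adds a pairwise-weighting step), aggregation is just the mean of the six ratings. A minimal sketch of that calculation, with a hypothetical participant's ratings:

```python
def raw_tlx(ratings):
    """Compute an unweighted Raw TLX score from six subscale ratings (0-100).

    `ratings` maps each subscale name to its rating. Higher scores mean
    higher workload on every subscale (Performance is anchored
    Good = 0, Poor = 100, so no reversal is needed).
    """
    subscales = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")
    missing = [s for s in subscales if s not in ratings]
    if missing:
        raise ValueError("missing subscales: %s" % missing)
    return sum(ratings[s] for s in subscales) / len(subscales)

# Example: one participant's ratings from a hypothetical Qualtrics export
participant = {"mental": 70, "physical": 20, "temporal": 55,
               "performance": 30, "effort": 60, "frustration": 35}
print(raw_tlx(participant))  # 45.0
```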

But testing aside, I also think WebVR is the best way to showcase our project: a readily accessible, interactive portfolio piece that interviewers could play with simply by clicking a link while we describe our roles on the project. On top of this, with outreach being a core component of the work we do in CLAWS, a WebVR experience is ideal for younger students to experience ATLAS from the comfort and safety of their own homes.

    References

    Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.

Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1–46.

  • Neural Networks for Cultural Transmission

    For a while now, I’ve been mulling over an idea: what if artificial intelligence could develop and transmit its own culture? While AI excels at recognizing patterns and optimizing processes, it’s missing something profoundly human—an algorithm for cultural dynamics. The idea sat on the back burner for years, but after being admitted to UMSI and committing to a UX research track, it feels like the right time to start exploring it in earnest.

    The Seed of the Idea

    Back in my undergrad days at Wayne State, I didn’t even realize there was an anthropologist on campus, Dr. Robert G. Reynolds, working on what he called cultural algorithms. His lab wasn’t in the anthropology department—it was in computer science, tied to engineering. When I stumbled across his work, I was fascinated. His paper, “Cultural Algorithms: Computational Modeling of How Cultures Learn to Solve Problems”, details how cultural algorithms are used to simulate and understand how cultures adapt to challenges.

    It turns out Dr. Reynolds is now a visiting research scientist at the University of Michigan Museum of Anthropological Archaeology. He’s working on developing digital simulations to help the public explore how cultures evolve—a perfect example of blending anthropology, technology, and public engagement.

    My idea is more speculative and rooted in science fiction: to create a kind of cultural algorithm that allows AI to not just simulate human cultures but to develop its own. It’s the concept of an AI with a distinct, evolving cultural identity.

    A Summer of Learning

    When I first came up with this idea, I had no real understanding of the technical challenges it posed. I’ve since started to bridge that gap. Over the summer, I dove into Python basics through Dr. Chuck’s “Python for Everybody” course, a fantastic resource hosted by a UMSI professor. Whether you’re a beginner or someone just curious, I highly recommend it. Even if you copy/paste the code at first, it’s an excellent introduction to programming concepts.

    As I’ve gained more technical literacy, I’ve come to realize that “cultural algorithm” might not be the right term for what I’m envisioning. Instead, I’ve started thinking about neural networks for cultural transmission. Neural networks are AI systems that process inputs and generate outputs by passing information through multiple “hidden layers.” Those hidden layers—where the magic happens—feel like a good analogy for the complexities of cultural dynamics.
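To make "hidden layers" concrete, here is a dependency-free sketch of a tiny feedforward pass (toy weights of my own choosing, not a trained model): each layer multiplies its inputs by weights, adds a bias, and applies a nonlinearity before handing the result to the next layer.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: weighted sum per neuron, then tanh."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 2 inputs -> hidden layer of 3 neurons -> 1 output
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.5, 0.2]]
output_b = [0.05]

x = [1.0, 0.5]
h = dense(x, hidden_w, hidden_b)   # the "hidden layer" representation
y = dense(h, output_w, output_b)   # final output
print(h, y)
```

The interesting part is `h`: an intermediate representation the network builds for itself, which is the layer I imagine standing in for the filtering that happens during cultural transmission.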

    The Challenge of Cultural Transmission

    Cultural transmission is a messy, human process. Teach the same lesson to ten students, and you might end up with ten different interpretations. Learning isn’t just about inputs and outputs; it’s about how individuals filter information through their personal experiences, biases, and social contexts.

    This variability is key to what makes culture so rich—and it’s what makes modeling cultural transmission in AI so challenging. If AI could replicate this variability, it might not just mimic culture but participate in it.

    Fortunately, the study of cultural transmission already has a foundation in anthropology and related fields. Researchers are exploring topics like the cultural evolution of communication and the mechanisms of intergenerational knowledge transfer. For example, if one of those ten students misunderstands the lesson, they might refine their understanding by learning from a peer who grasped it more accurately. Could AI replicate this peer-to-peer refinement process?
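As a toy illustration of that peer-refinement idea (entirely my own sketch, not an established model): represent the "lesson" as a number, let each student internalize it with individual noise, then let the least accurate learner average their copy with the most accurate peer's.

```python
import random

random.seed(42)  # deterministic toy run

LESSON = 10.0  # the "true" content being transmitted

def learn(lesson, noise=2.0):
    """Each student internalizes the lesson with individual distortion."""
    return lesson + random.uniform(-noise, noise)

students = [learn(LESSON) for _ in range(10)]

def refine(students, lesson):
    """The least accurate learner averages their copy with the most
    accurate peer's. (Ranking learners against the true lesson is an
    oracle simplification; real learners can't see the ground truth.)"""
    errors = [abs(s - lesson) for s in students]
    worst, best = errors.index(max(errors)), errors.index(min(errors))
    revised = list(students)
    revised[worst] = (students[worst] + students[best]) / 2
    return revised, worst

refined, worst = refine(students, LESSON)
print(abs(students[worst] - LESSON), "->", abs(refined[worst] - LESSON))
```

Even this crude averaging step reliably shrinks the worst learner's error, which is the behavior I'd want a transmission model to exhibit before adding any real cultural nuance.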

    Building the Foundations

    To start exploring this, I’m setting up an environment for developing neural networks using Keras with TensorFlow. I’m not an expert, but the internet is an incredible resource. One series I’m starting with is Tech With Tim’s tutorials.

    My approach is hands-on and iterative: experiment, fail, and learn from those failures. The hardest part will be designing hidden layers that simulate the nuances of cultural variation and transmission. But with a mix of anthropology, programming, and determination, I believe it’s worth trying.

    Why It Matters

    Why bother with something as abstract as cultural transmission in AI? Because it’s about more than just AI. It’s about understanding humanity. By teaching AI to “learn” culture, we could gain new insights into how humans create, share, and adapt knowledge. It’s not about replacing human culture but expanding our understanding of it.

    And who knows? Maybe one day, we’ll create an AI that isn’t just functional but truly cultural—an AI that learns, grows, and connects like we do.

    If you’re intrigued by these intersections of anthropology, AI, and UX, I’d love to hear your thoughts. Let’s explore this frontier together.