Category: Uncategorized

  • PredICTing the Future

    what is and what ought to be skilled work, labor, and automated assemblages extending human capabilities

    image source: https://necsi.edu/complexity-rising-from-human-beings-to-human-civilization-a-complexity-profile

    “A small sliver of humanity is currently materializing their imagination in our digital structures, and the rest of us have to live inside their imagination as our reality.” ~ Ruha Benjamin (2021)

    Introduction

    Technological visions of the future generally come in one of two flavors. In the utopian dream, technology seamlessly integrates into the fabric of everyday life. On the other end of the spectrum lie visions of dystopia, often centered on the havoc a sentient artificial intelligence can cause when it inevitably determines that humans are its most significant threat. This essay attempts to illuminate a bridge between what is and what ought to be through a critical analysis of automation and technological innovation. We trace efforts to deskill labor, from early mechanization through current efforts to design a “future-proof” smart city. To do this, we examine automation through Haraway’s cyborg lens, the postmodernist assemblage of contradictory components. Who benefits from automation? Who is harmed by it? In keeping with the theme of our essay, we also ask: who ought to? To explore this question, we review efforts to build economic infrastructure from the bottom up in a process that emphasizes upskilling rather than deskilling labor.

    Sex, Drugs, and Cyborgs

    Before Haraway’s famous essay, an exciting vision for human-computer symbiosis was proposed by J.C.R. Licklider: “Men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking” (Roy, 2004). Around the same time, Kline and Clynes presented a similar vision at a military conference on space medicine (Kline & Clynes, 1961). The cyborg offers a path through which cybernetics could provide an organizational system, one in which issues best left to computers and robots are taken care of automatically and unconsciously, leaving the human free to think, feel, and explore. Initially, the term cyborg meant “an exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously” (Clynes & Kline, 1960, p. 27).

    Haraway’s (1991) postmodern reinterpretation defines the cyborg as “a cybernetic organism, a hybrid of machine and organism, a creature of social reality as well as a creature of fiction.” For Haraway, the cyborg is an apt metaphor because it has no real origin story in Western civilization. And yet, a man in space is the ultimate expression of white male transcendence of nature. It is at this point that the boundaries begin to break down. Our notions of what separates humans from animals fray. Technologies become so ubiquitous and embedded in our everyday lives that we start to lose a sense of exactly where we end and our machines begin.

    Our language imprisons us, shackling us to the past and limiting our ability to communicate beyond the dualisms of human/animal, human-animal/machine, and the physical/non-physical. Moreover, though these boundaries are blurring, the language we use to label and classify each other remains the same, vestiges of eroding patriarchal imaginations. Haraway’s essay serves as a wake-up call to recognize and break the shackles of tradition that our language has laid upon us.

    It is with this lens that we look to the past. Before the language of the cyborg was spoken. Before humans transcended Earth, in the early days of industrial mechanization, human labor supported and extended the work of machines. Is it still this way today? If so, could it be that Licklider’s vision simply has yet to be fulfilled?

    image source: https://twitter.com/50srobot/status/906169037679362049?s=20&t=KoIJYqX1JaklcJQz1lDWzQ

    Automation’s last mile

    Gray & Suri (2019) explore the history of the human labor required to extend the capabilities of the very machines engineered to replace human labor. The authors refer to this gap as automation’s last mile, and they draw on the concept to expose the history of piecework, the labor that could not fit into mechanical processes. Through piecework, factory owners were able to draw from cheap labor pools, such as newly freed Black Americans, European immigrants, and women and children, on both the literal and figurative fringes of society. Exploiting these labor sources offered elites, namely the makers of the machines and those who could afford to buy them, an opportunity for rapid economic growth driven by technological innovation in what became known as the Gilded Age. Today, parallels between the information age and the industrial age signal a new Gilded Age (Wheeler, 2018). Job seekers are increasingly being pushed into lower-wage, precarious work (Dillahunt et al., 2021), as jobs have trended toward deskilling human labor through technological innovation (Eglash et al., 2020).

    “Each moment of technological innovation that is highlighted shows how political leaders, economic power brokers, labor advocates, and the social norms of the day reproduced divisions between skilled professional work (meaning what is beyond the capacity of machines) and unskilled work (meaning contingent labor headed for automation).” (Gray & Suri, 2019, p. 39)

    According to Gray and Suri, both Marx and Smith could see how machines deskilled human labor. However, whereas Marx saw automation as dehumanizing workers, Smith maintained a utopian vision like that of Licklider: that through automation, humans would come to better know and understand themselves (Gray & Suri, 2019, p. 58). Through the cyborg lens, we can see early piecework as a kind of exogenously extended organizational complex, a human-machine hybrid of the order of Kline and Clynes’ cyborg, but in reverse. In this case, the human pieceworker serves as the exogenous extension to the machines on the factory floor.

    Similarly, Noble (1978, p. 345) quotes a 1971 article about wage incentives appearing in the Manufacturing and Engineering Management Journal, describing automation as prioritizing the machine while the worker’s role diminishes. However, there is a paradox here because while the machine’s capabilities serve to “deskill” the machine operators, the operators themselves are crucial to optimizing the machine’s output, which continues to pose a problem for management (Noble, 1978).

    Automation’s last mile, paved with ‘bullshit’

    Anthropologist David Graeber opens his original essay On the Phenomenon of Bullshit Jobs: A Work Rant with a utopian vision offered by John Maynard Keynes in 1930: that by the dawn of the 21st century, technology would be advanced enough in the United Kingdom and the United States to allow for a 15-hour workweek (Graeber, 2013). By 1935, with the passage of the Wagner Act, the United States began to manifest a labor culture that values and prioritizes full-time employment, even as corporate culture began to see full-time employees as a liability (Gray & Suri, 2019). Per Noble (1978, p. 346), a machine tool operator succinctly summarized automation as meaning, “our skills are being downgraded and instead of having the prospect of moving up to a more interesting job we now have the prospect of either unemployment or a dead-end job.” Haraway notes, “deskilling is an old strategy newly applicable to formerly privileged workers” (Haraway, 1991, p. 39).

    For Haraway, there was more to automation and the growing cottage industry (the phrase she uses to discuss piecework) than large-scale deskilling. It also signaled a new level of integration among market, home, and factory, an integration made possible by, rather than caused by, technological innovation. Piecework, then, is about command and control as much as, if not more than, economic efficiency through automation. In his famous essay, Winner (1980) presents the case of Cyrus McCormick, a factory owner who in the 1880s used machines operated by unskilled workers to manufacture an inferior product at a higher cost for the express purpose of union-busting. McCormick’s case demonstrates how control can take precedence over economic efficiency.

    However, let us be clear about who controls and who is controlled, because this is a critical component of automation: protecting the status quo for white men. Take, for example, the ad from a 1957 Mechanix Illustrated (see Appendix A). In a recent presentation on The New Jim Code for the Anti-Eugenics Project, Benjamin (2021) describes how the Civil Rights Movement began in 1954, and that by 1957 white men were seeking to automate their service staff. Implicit in the message is that the “you” the ad refers to is a white man who used to own slaves, even if only through lineage with other white men, and “you” will again (Benjamin, 2021). Only this time, according to the ad, no one is going to take your slaves away from you.

    Graeber describes the myth of neoliberal rhetoric in prioritizing economic efficiency over all other values. He contrasts this with the reality that the very free-market policies intended to unleash the marketplace have slowed economic growth as well as scientific and technological innovation (Graeber, 2018, p. 12). He notes that, for the first time in centuries, younger generations practically everywhere except India and China can expect to be less prosperous than their parents. Data from the Urban Institute supports this, indicating that the average net worth of adults in the United States aged 20 to 28 increased by an average of only $1,700 between 1983 and 2010 (Kalish, 2016). Even as meaningful work is automated away, we privileged folk appear to be working more than ever. Why?

    According to Graeber (2018, p. 111), governments have crafted economic policy on the premise of full employment; in the Soviet Union, the joke was, “We pretend to work; they pretend to pay us.” In capitalist nations like the United Kingdom and the United States, Graeber documents the rise of the service economy, or more specifically, information work. Elsewhere, studies have shown that the share of information workers increased from 37% in 1950 to 59% in 2000 (Wolff, 2006). Wolff finds this growth driven by the substitution of information workers for goods rather than by a shift in demand for information-intensive goods and services, and between 1950 and 2000 it may correlate with investment in computing technology and computer operators in the FIRE sector (finance, insurance, real estate). Nevertheless, as tech companies in Silicon Valley learned to monetize their products with ad targeting, user data became the “new oil,” giving rise to what some describe as the coding elite: those who can harness technology to exploit users through their data (Burrell & Fourcade, 2020; Van’t Spijker, 2014).

    Image by Gerd Altmann from Pixabay

    Future-proof

    As mentioned earlier, Haraway saw the proliferation of the cottage industry as a deepened integration between factory, market, and home. Similarly, McCord & Becker (2019) do not mince words when they say that information and communication technology (ICT) has become a foundation of dominating cultures and economies. The declared beneficiaries of the Sidewalk Toronto project include current and prospective residents of Toronto from all income levels and walks of life; in reality, the goals of the project come from its most powerful stakeholders: Sidewalk Labs and Waterfront Toronto. These stakeholders seek to organize a “dense cluster of skilled labor” for employer access. The beneficiaries are subject to the imagination of these stakeholders.

    In the case of a smart city, who owns and controls the technological infrastructure, who is responsible for data storage, and who gets to decide how it is used and by whom? According to McCord & Becker (2019), much of the community involved in smart city sustainability research has focused on technological solutions. Researchers and policymakers attempt to explain sustainability either through the lens of social or technological determinism. Social determinists suggest humans have agency over their impact and just need better tools to become more sustainable. On the other hand, technological determinists see sustainability as primarily driven by access to certain technologies or information.

    McCord & Becker offer a framework for sustainability projects such as Sidewalk Toronto through Critical Systems Heuristics. Their goal is to provide a means of seeing beyond the narrow viewpoint of stakeholder needs, which tends to view human activity through the reductionist myth of Homo economicus (Fleming, 2017). Suppose this kind of thinking shapes design decisions for smart cities, with capitalism as the foundation upon which we leverage humanity’s purported greedy nature for the benefit of all. In that case, we might see such smart cities optimizing for the tragedy of the commons (Ostrom, 2008), so long as it serves business interests.

    If automation deskills labor, then why should a smart city prioritize employer access to skilled labor? Given the evidence presented here, one could argue that employers need skilled labor to support the machines through automation’s last mile; a smart city can optimize the cottage industry. This raises the question: who truly benefits from the design and development of smart cities?

    Bottoms-up for sustainability and satisfaction

    Eglash et al. (2020) take a different approach to automation and the future of work. While the authors agree that automation and mass production lead to deskilling labor, they add that automation typically optimizes the alienation of labor and ecological value. The authors note that mass production and the deskilling of labor produce jobs so tedious that they cause physical and mental health issues. Recall the measures Foxconn took at its factories, installing nets on the exterior of the buildings to prevent workers from committing suicide by jumping from the windows (Reuters, 2010).

    Graeber (2018) agrees, documenting what he refers to as the spiritual violence of working in a bullshit job. Decision-makers generally rely on an underlying economic calculus: that humans will always tend to seek their best advantage. In this framework, obtaining a steady income by sitting at a desk all day or standing in place performing repetitive tasks would seem like a great way to get the most benefit for the least expenditure of time and effort. In reality, as Eglash et al. (2020) point out, the features commonly linked with “good work,” such as self-esteem and interest, are associated with craftwork (Luckman, 2015). Ocejo (2017) explains that while many “good” jobs are typically associated with knowledge and technology, there is a trend among educated and culturally savvy young people to move into such craftwork as bartending, barbering, butchering, and others. If this is true, why does this shift stand in contrast to our theories of human nature? Graeber argues that our theories of human nature are wrong (Graeber, 2018, p. 61).

    Eglash et al. (2020) describe a strong correlation between job satisfaction and job decision authority, which they find diminished in mass production. Meanwhile, Gray & Suri (2019) observe a concept they refer to as the “double bottom line.” In business, the bottom line refers to net profits after the tabulation of all expenses and earnings. Some companies, particularly technology companies using gig work to bolster their software-as-a-service platforms, organize their businesses around prioritizing workers. In this case, the double bottom line refers to “making a profit while pushing for social change” (Gray & Suri, 2019, p. 141).

    Even in the case of a double bottom line, Gray & Suri show how this goal is complicated by the technical, social, and political challenges involved in creating a sustainable business model that does not simply convert workers into another revenue stream. To develop a sustainable, “future-proof” smart city, Waterfront Toronto uses the “triple bottom line,” an approach that attempts to balance economic, environmental, and social issues in the “3Ps”: people, profits, and the planet (McCord & Becker, 2019, p. 4). The bottom line is about striking a balance, and striking a balance often comes with making tradeoffs between competing concerns. In the case of a bottom, double bottom, or triple bottom line, who gets to make those tradeoffs? Furthermore, which bottom line are they prioritizing?

    Economic theorists such as Marx and Smith, factory owners like McCormick and Foxconn, politicians like Wagner, and organizations like Sidewalk Labs and Waterfront Toronto all have something in common: they take a top-down approach, imposing their vision on the masses. Eglash et al. (2020) stand in contrast to these approaches. Rather than suggesting yet another top-down framework to achieve a desired bottom line, they offer a path to the future of work that draws on generative traditions sustained in Indigenous practices that work from the bottom up. Instead of deskilling labor, they suggest we strive to find the “sweet spot between ease of use and skills development” (Eglash et al., 2020, p. 600). This requires using automation to invest in upskilling people rather than deskilling the work they perform, and relying on networks of people rather than monopolies funneling alienated labor and materials through pipelines and down assembly lines.

    The bottom-up generative approach presented by Eglash et al. (2020) attempts to bridge the gap between automation as it is and automation as it ought to be. They point to research suggesting that when an artisanal value chain is composed of other artisans, rather than requiring artisans to continually purchase supplies from a corporation or a comparatively wealthy entrepreneur, labor value can circulate unalienated. Additional examples describe how agroecology circulates ecological value unalienated, and how unalienated social value is needed to prevent a tragedy of the commons. They suggest that all of this is not only possible but demonstrable as a common feature of Indigenous life. Automation for an artisanal economy is not about competition but rather collaboration.

    Eglash, a student of Haraway, envisions human and machine artisanal hybrids, where people can assemble their repertoire of components and become a node in the artisanal economy. Importantly, this is not in the same vein as Licklider’s utopian vision. Eglash deals in reality and spends considerable time exploring issues of scale. It is not enough to present a utopian vision without working out the steps to get there. For Eglash, those steps begin with thorough collaboration with Indigenous groups and consideration of the knowledge they are willing to contribute.

    The microscale, mesoscale, and macroscale refer to three different levels of production that we need to consider. The microscale focuses on the details of labor and other features at the site of production. The mesoscale refers to the point of interface at the organizational level. Finally, the macroscale is about the policies, infrastructure, and cultural dynamics that shape success metrics. As shown, even the best intentions, accumulating ever more bottom lines to accommodate the microscale, can quickly be overshadowed at the macroscale.

    Conclusion

    In this essay, we have attempted to illuminate a bridge between what is and what ought to be through a critical analysis of several works documenting the history and potential futures of automation and technological innovation. We traced efforts to deskill labor from piecework in early mechanization through recent efforts to design a “future-proof” smart city. Employing Haraway’s cyborg metaphor, we asked who benefits and who is harmed by technological innovation. We found that elites benefit from such innovation by utilizing technology to optimize efficiency in extracting value from labor, society, and the environment as a whole. We then asked who ought to benefit from such innovation. Drawing on the work of Eglash et al., we argue for a bottom-up approach to the design and implementation of automation technologies that considers each of the three scales of production: 1) the microscale; 2) the mesoscale; 3) the macroscale. This framework emphasizes upskilling rather than deskilling and finds a reasonable middle ground between utopian and dystopian visions to present possibilities for the future of work and automation, grounded in reality.

    REFERENCES

    Benjamin, R. (2021, October 1). Keynote | The New Jim Code? Resisting and Reimagining Tech-Eugenics in the 21st Century. Dismantling Eugenics. https://events.bizzabo.com/aep/agenda/session/628612

    Burrell, J., & Fourcade, M. (2020). The Society of Algorithms. Annual Review of Sociology, 47.

    Clynes, M. E., & Kline, N. S. (1960). Cyborgs and space. Astronautics, 14(9), 26–27.

    Dillahunt, T. R., Garvin, M., Held, M., & Hui, J. (2021). Implications for Supporting Marginalized Job Seekers: Lessons from Employment Centers. ACM Conference on Computer-Supported Cooperative Work and Social Computing.

    Eglash, R., Robert, L., Bennett, A., Robinson, K. P., Lachney, M., & Babbitt, W. (2020). Automation for the artisanal economy: Enhancing the economic and environmental sustainability of crafting professions with human-machine collaboration. Ai & Society, 35(3), 595–609.

    Fleming, P. (2017). The death of homo economicus. University of Chicago Press Economics Books.

    Graeber, D. (2013). On the phenomenon of bullshit jobs: A work rant. Strike Magazine, 3, 1–5.

    Graeber, D. (2018). Bullshit Jobs: A Theory. London: Allen Lane. Penguin Books.

    Gray, M. L., & Suri, S. (2019). Ghost work: How to stop Silicon Valley from building a new global underclass. Eamon Dolan Books.

    Haraway, D. (1991). A Cyborg Manifesto. In Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge.

    Kalish, E. (2016). Millennials Are the Least Wealthy, but Most Optimistic, Generation. Urban Institute, April.

    Kline, N. S., & Clynes, M. (1961). Drugs, space, and cybernetics: Evolution to cyborgs. Psychophysiological Aspects of Space Flight, 345–371.

    Luckman, S. (2015). Craft and the creative economy. Springer.

    McCord, C., & Becker, C. (2019). Sidewalk and Toronto: Critical Systems Heuristics and the Smart City. ArXiv Preprint ArXiv:1906.02266.

    Noble, D. F. (1978). Social choice in machine design: The case of automatically controlled machine tools, and a challenge for labor. Politics & Society, 8(3–4), 313–347.

    Ocejo, R. E. (2017). Masters of Craft. Princeton University Press.

    Ostrom, E. (2008). Tragedy of the commons. The New Palgrave Dictionary of Economics, 2.

    Reuters. (2010, May 26). Foxconn hit by 10th jumping death; nets installed | Reuters [News]. Reuters. https://www.reuters.com/article/china-foxconn-death/foxconn-hit-by-10th-jumping-death-nets-installed-idUSTOE64P08H20100527

    Roy, D. (2004). 10×-Human-machine symbiosis. BT Technology Journal, 22(4), 121–124.

    Van’t Spijker, A. (2014). The new oil: Using innovative business models to turn data into profit. Technics Publications.

    Wheeler, T. (2018, December 12). Who makes the rules in the new Gilded Age? Brookings. https://www.brookings.edu/research/who-makes-the-rules-in-the-new-gilded-age/

    Winner, L. (1980). Do artifacts have politics? Daedalus, 121–136.

    Wolff, E. N. (2006). The growth of information workers in the US economy, 1950–2000: The role of technological change, computerization, and structural change. Economic Systems Research, 18(3), 221–255.

    APPENDIX A.

    1957 Mechanix Illustrated — You’ll own slaves again — O.O. Binder (see Mara Averick tweet above)


    Originally published at http://mtthwx.com on January 6, 2022.

  • Ethnographic Encounters of the HCI kind in Bioastronautics

    Bioastronautics is a branch of aerospace engineering that specializes in the study and support of life in space. Bioastronautics researchers are interested in the biological, behavioral, medical, and material domains of organisms in spaceflight. Technological advances have increasingly led to a deepened interest and urgency in the domain of space habitats. The goal of NASA’s Artemis Program is to establish a sustainable lunar colony in order to learn how to establish one on Mars. A primary objective in the design and development of new technology to support life in space is software that can support astronaut autonomy. This means that, for the first time, astronauts themselves have to be able to use these tools to carry out missions safely and effectively, without assistance from Ground Control.

    Photo by Adam Miller on Unsplash

    As humans seek to expand out into the solar system, the tools, technologies, and habitats needed to support life in space have to incorporate good HCI principles. How do bioastronautics researchers conceive of user needs, preferences, and comforts when designing interfaces and habitats for future spaceflight and habitation? Most bioastronautics researchers will never experience the environment they are designing for, and according to the 2013 NASA evidence report “Risk of Inadequate HCI,” “HCI has rarely been studied in operational spaceflight, and detailed performance data that would support evaluation of HCI have not been collected” (Holden, Ezer, & Vos, 2013). The report goes on to note the additional concern that potential or real HCI issues in past missions have been masked by virtue of constant contact with Ground Control (Holden et al., 2013).

    Because life as we know it cannot exist on its own in space, everything used to put humans in spaceflight and habitation is a concern of bioastronautics. Due to the relatively short distance and duration of missions to date, researchers and engineers in bioastronautics have primarily been concerned with human factors in hardware and industrial design, ensuring that designs accounted for human physiological capabilities. As technology advances and we push the boundaries of what is possible, a shift in focus toward human-computer interaction is increasingly necessary. While previous spacecraft were typified by hard switches and buttons, astronauts using exploration vehicles will primarily interact with glass-based interfaces: software displays and controls (Ezer, 2011).

    According to Holden et al. (2013), inadequate HCI presents a risk that could lead to a wide range of consequences. While the amount of information that must be displayed is increasing, the real estate in which to display it remains limited. Furthermore, as mission distance and length increase, immediate access to ground support will continue to decrease, meaning there will not be a team of experts on the ground prepared to answer questions, solve challenges, and provide workarounds on the fly. As a result, the design of computing and information systems needs to take this into account, providing support and just-in-time training for the autonomous astronaut when a mission is not going according to plan. In terms of HCI, this means that interfaces must account for environmental and contextual challenges, presenting low cognitive load and remaining usable with pressurized gloves, in microgravity, and under persistent vibration (Holden et al., 2013).

    Background

    The term bioastronautics first appears in the literature in a 1962 survey published by Cornell Aeronautical Laboratories, which defines it as the study of life in space, with the author noting that the discipline is so new there was hardly time to come up with a name (White, 1962). For context, bioastronautics was born during both the Cold War (1947–1991) and the Space Race (1955–1975) between the United States and the Soviet Union. The primary intent behind the discipline remains today what it was then: to produce systems and technology capable of supporting and sustaining life in microgravity, and to understand the effects of microgravity on the human body. In this regard, much of the research has centered around medical concerns.

    Definition

    “Bioastronautics encompasses biological, behavioral and medical aspects governing humans and other living organisms in a space flight environment; and includes design of payloads, spacecraft habitats, and life support systems. In short, this focus area spans the study and support of life in space” (UC Boulder Aerospace Engineering Sciences, 2020).

    Main Body

    When space human factors researchers consider mission design and work practices, they are especially considerate of the roles of the various crew members, their physical and mental capabilities, and the requirements for life support, space, and training (Woolford & Bond, 1999). For twelve days in 2002, computer/cognitive scientist William Clancey led an ethnographic research study as a closed simulation at the Mars Desert Research Station for NASA-Ames Research Center and the Institute for Human and Machine Cognition. The study was a methodological experiment in participant observation and work practice analysis. It gathered qualitative data measuring productivity, comparing habitat design, schedules, roles, and so on, and sought to learn whether ethnography could be applied to a closed simulation. Serving as the crew commander, could one also conduct ethnography through participant observation? According to Clancey, one can (Clancey, 2004).

    In addition to Clancey’s study, there are a number of other simulations for space habitat research, such as Stuster’s Bold Endeavors (1996) in a polar environment, the Lunar-Mars Life Support Test Project in a closed chamber, the NASA Extreme Environment Mission Operations Project (NEEMO) in an underwater habitat (2004), and BASALT (Biologic Analog Science Associated with Lava Terrains). Analog projects like these are designed to simulate on Earth certain environmental variables in order to test concepts of operations in regard to hardware, software, and data systems, as well as communication protocols. For these projects, the primary focus is the EVA, or extravehicular activity (Beaton et al., 2019). An EVA astronaut is the one who dons the spacesuit and exits the living quarters to explore, conduct research, or engage in repair tasks. When an astronaut exits the International Space Station to change a battery or make some other upgrade or repair, that is an EVA.

    With Olson (2010), we get a glimpse into the ecologies and human cosmologies of American astronautics. Through her ethnographic fieldwork, conducted primarily at NASA’s Johnson Space Center and submitted for her Ph.D. in Medical Anthropology, Olson argues that ecology and cosmology are co-constituting. Combining participant observation with archival data, Olson evaluates how astronautics practitioners come to know and work with the “human environment.” This work highlights how astronautics is connected to a broader array of environmental science and technology (Olson, 2010). What does it mean for astronautics to be sociopolitical, technoscientific, symbolic, and transcendental? With this, Olson asks what role astronautics has in making ecological knowledge, and how it can inform and make scalable concepts like adaptation and evolution.

    In an article published the same year, Olson (2010) argues that in extreme environments such as outer space, “the concept of environment cannot be bracketed out from life processes; as a result, investments of power and knowledge shift from life itself to the sites of interface among living things, technologies, and environments” (Olson, 2010).

    Gaps

    While there have been a few attempts to conduct ethnography in mission and environmental simulations, none of them focused on human-computer interaction. Similarly, while Olson’s ethnography centered on NASA researchers, its purpose was to inform medical anthropology. Like Olson, I contend that as technology advances, it becomes clearer how life, technology, and the environment are interrelated. As a result, human-computer interaction is a central facet of successful mission planning and execution for the autonomous astronaut. It is therefore crucial to understand how researchers interested in the bioastronautics of spaceflight and habitation conceive of human-computer interaction and of user needs, preferences, and comforts.

    Bibliography

    Beaton, K., Chappell, S., Abercromby, A., Miller, M., Nawotniak, S. K., Brady, A., . . . Lim, D. (2019). Assessing the Acceptability of Science Operations Concepts and the Level of Mission Enhancement of Capabilities for Human Mars Exploration Extravehicular Activity. Astrobiology, 19(3), 321–346.

    Clancey, W. J. (2004). Participant Observation of a Mars Surface Habitat Mission. Moffett Field, CA: NASA-Ames Research Center.

    Ezer, N. (2011). Human interaction within the “Glass cockpit”: Human Engineering of Orion display formats. Proceedings from the 18th IAA Human in Space Symposium (#2324). Houston, TX.: International Academy of Astronautics.

    Holden, K., Ezer, N., & Vos, G. (2013). Evidence Report: Risk of Inadequate Human-Computer Interaction. Human Research Program: Space Human Factors and Habitability, 1–46.

    Olson, V. A. (2010). American Extreme: An Ethnography of Astronautical Visions and Ecologies. Ann Arbor, MI: UMI Dissertation Publishing.

    Olson, V. A. (2010). The Ecobiopolitics of Space Biomedicine. Medical Anthropology, 170–193.

    UC Boulder Aerospace Engineering Sciences. (2020, 04 13). Bioastronautics. Retrieved from University of Colorado Boulder: https://www.colorado.edu/bioastronautics/

    White, W. J. (1961–62). A Survey of Bioastronautics. Buffalo, NY: Cornell Aeronautical Laboratory.

    Woolford, B., & Bond, R. (1999). Human factors of crewed spaceflight. In W. Larson, & L. Pranke, Human Spaceflight: Mission Analysis and Design (pp. 133–153). New York: McGraw-Hill.

  • The UX of Bioastronautics

    Bioastronautics is a focus area of aerospace engineering that specializes in the study and support of life in space. This area of research spans the biological, behavioral, medical and material domains of living organisms in spaceflight. Increasingly, it’s also being applied to space habitat environments. And while the body of research spans decades, there is little information available regarding the user experience. I’d like to change that.

    Artistic rendition of Space Station Freedom with the STS Orbiter Vehicle
    Space Exploration Initiative — Wikipedia

    Until recently, the emphasis has been on pushing the bounds of what’s technologically possible and making it work, and to a large extent this will continue to be true. However, we are on the precipice of a new frontier in which bioastronautics is open to the input of user experience research and design: optimizing the design for the users rather than training the users on how to use the design.

    Below I’ve outlined several gaps in HCI research related to bioastronautics that NASA has identified as presenting a risk to astronauts.

    From NASA’s 2013 Evidence Report: Risk of Inadequate HCI, research gaps include:

    • Methods for improving human-centered design activities and processes
    • Tools to improve HCI, information presentation/acquisition/processing, and decision making for a highly autonomous environment
    • Tools, methods, and metrics which support the allocation of attention and multitasking for individuals and teams
    • Validation methods for human performance models

    Evidence collected in this report details contributing factors pertinent to investigation by the HCI researcher. These include:

    • Requirements, policies, and design processes
    • Informational resources/support
    • Allocation of attention
    • Cognitive overload
    • Environmentally induced perceptual changes
    • Misperception/misinterpretation of the displayed information
    • Spatial disorientation
    • Design of displays and controls

    I’m a graduate student studying Information Science at the University of Michigan and the Usability Testing Coordinator for CLAWS (Collaborative Lab for Advancing Work in Space). My role is as a UX/UI specialist involved in the research and design of ATLAS (Augmented Toolkit for Lunar Astronauts and Scientists) to compete in the NASA SUITS and M2M X-Hab design challenges.

    Bioastronautics research is still primarily engaged with human factors research dedicated to hardware and industrial design. The application of HCI is lacking, which is why the CLAWS team began actively recruiting from UMSI. The bulk of the team is composed of aerospace, mechanical and industrial engineering, as well as computer science majors.

    To implement the human-centered design strategy, I would start by conducting an ethnographic study through participant observation and contextual inquiry with my team to better understand the culture of bioastronautics. Placing more emphasis on human-in-the-loop (HITL) testing as simulated usability testing, I had planned to validate our methods both in the BLiSS lab and remotely. Due to the COVID-19 pandemic and self-isolation, however, we’ve had to scrap the HITL plan, and I’m currently adapting a prototype in Adobe XD for remote usability and heuristic testing. Below is a cursory view of the design.

    https://xd.adobe.com/view/482cc044-b8d9-4893-40e6-4b75514adf7f-3e1d/

    Interestingly, our self-isolation presents an opportunity to better understand the sort of issues astronauts will face in space. After all, astronauts on the Moon cannot conduct in-person meetings with ground control. This is specifically one of the target opportunities for HCI concerning the bioastronautics of space travel and exploration. Astronauts on future EVA missions will not be in constant contact with ground control as they have been up to now. Information systems, therefore, need to be designed to maximize autonomy and optimize information processing while simultaneously reducing cognitive load.

    A pertinent example is the GeoNotes protocol we are currently working on. The Artemis generation astronauts are not geologists, save one. But they still need to be able to conduct high-quality lunar sampling and take sufficient field notes for planetary scientists back on Earth, so our task has been to design a geological sampling protocol that supports the needs of the Earth-based scientists as well as the autonomous astronaut.
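    To make the design problem concrete, here is a minimal sketch of the kind of structured record such a protocol might capture. All field names and the `quick_entry` helper are hypothetical illustrations of the design goal (fast entry for an autonomous, non-geologist astronaut; rich context for Earth-based scientists), not the actual GeoNotes schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class GeoSample:
    """Hypothetical record for one lunar sample. Required fields are kept
    minimal so a gloved, time-pressured astronaut can log an entry quickly;
    optional fields carry the context planetary scientists need later."""
    sample_id: str
    location: Tuple[float, float]        # (lat, lon) on a lunar reference grid
    collected_at: str                    # mission elapsed time, e.g. "T+03:42:10"
    rock_type_guess: Optional[str] = None  # astronaut's best guess; revisable on Earth
    photos: List[str] = field(default_factory=list)
    voice_note: Optional[str] = None     # transcribed audio keeps hands free

def quick_entry(sample_id: str, lat: float, lon: float, met: str) -> GeoSample:
    """The fastest possible log: identity, place, and time.
    Detail can be appended later without blocking the EVA timeline."""
    return GeoSample(sample_id=sample_id, location=(lat, lon), collected_at=met)

# An astronaut bags a sample, logs the minimum, and attaches a context photo.
sample = quick_entry("A-017", -26.13, 3.63, "T+03:42:10")
sample.photos.append("a017_context.jpg")
```

The design choice worth noting is the split between required and optional fields: the protocol succeeds only if the minimum viable entry takes seconds, while everything the scientists want remains attachable afterward.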

    Astronauts are cyborgs. They are the people for whom the term was coined. “For the exogenously extended organizational complex functioning as an integrated homeostatic system unconsciously, we propose the term ‘Cyborg’.” — Manfred E. Clynes and Nathan S. Kline

    I come from a background in four-field Anthropology. This is the common format of American Anthropology, and it pursues holism by studying individuals and groups of people through biological, cultural, linguistic, and archaeological (material) contexts. What first drew me to the field of Information was its interdisciplinary approach; drawing on my background in Anthropology, I have a penchant for synthesis. Then I came across a TED Talk by Amber Case, “We are all cyborgs now.”

    Case’s argument is that because we are storing whole swathes of our minds in, creating alternate identities through, and communicating with each other via digital technologies, we are all cyborgs now. I also hold this view.

    Everything humans do to actually leave Earth’s atmosphere and spend increasing lengths of time in space or on extraterrestrial bodies falls within the realm of bioastronautics. All of that technology, from spacesuits to the shuttle, is concerned with supporting life in space. Research on the topic has thus far centered primarily on hardware and on industrial or mechanical design and engineering. Increasingly, greater emphasis needs to be placed on HCI to close the research gaps identified by NASA and provide an adequate UX to end-users as humans seek to spread out and begin colonizing our solar system.

  • Case Study: Contextual Inquiry in the Grandmont Rosedale’s Vacant Property Task Force

    Image of Grandmont-Rosedale Development Corporation landing page
    GRDC landing page

    “Interwalla” is made up of four UX professionals from the University of Michigan School of Information’s MSI program: Joanne Kim, Tianyue (Maggie) Yang, Marcus Thomas, and Matthew Garvin (me).

    Executive Summary

    The Grandmont Rosedale Development Corporation (GRDC) serves to preserve and revitalize the Grandmont Rosedale communities of northwest Detroit through a wide range of community engagement programs. One of these programs is the volunteer Vacant Property Task Force (VPTF). The VPTF works with community members and external organizations to make sure that vacant properties in the GRDC’s neighborhoods are being maintained. However, the process by which VPTF members document and report their work is unstructured and undocumented, lacking formal procedures. Interwalla’s objective was to examine the ways in which the VPTF currently researches and reports vacant properties and to make recommendations for better documentation procedures that the GRDC can adopt for the VPTF. This report details our research methodology, findings, and recommendations regarding these procedures.

    Interwalla conducted background research and used the contextual inquiry method to uncover key information about the VPTF’s work process, including information about resources used and a sense of collaboration within the task force. We gathered data through interviews with six VPTF members as well as the GRDC’s community engagement manager, then analyzed our data to produce high-level findings. Some of these findings include:

    • The VPTF members complete much of their work individually and thus use a variety of resources, methods, and tools to complete their work.
    • The GRDC and the VPTF pride themselves on their strong sense of community and value the influence and impact they have achieved.
    • The members see the VPTF as a group that can eventually dissolve as fewer properties stand vacant; in the meantime, however, the task force seeks to make new members aware of and involved in its work.

    With these findings, we make the following recommendations:

    • Elevate the VPTF’s digital presence through a webpage on the GRDC website
    • Collaborate on an updated Vacant Property Toolbox handbook
    • Improve fundraising efforts by using online crowdfunding
    • Improve collaboration within the VPTF through co-design strategies

    Introduction

    Grandmont Rosedale Development Corporation

    The Grandmont Rosedale Development Corporation (GRDC) is a non-profit, community-based organization working to preserve and improve the Grandmont Rosedale Neighborhoods of northwest Detroit. For the past 30 years, the GRDC has taken a comprehensive approach to community revitalization, with programs designed to renovate vacant homes, assist local homeowners and businesses, beautify the community and keep their neighborhoods safe and vibrant.

    The Vacant Property Task Force

    The Vacant Property Task Force (VPTF) is one such program; it works with community members, meeting regularly to strategize ways to combat property vacancy and blight. The VPTF is composed of volunteer residents from the five GRDC communities. Members of the VPTF monitor vacant homes in Grandmont Rosedale to ensure that every property is being maintained. Much of their work involves tracking down property owners, reporting vacant homes to the city, and assisting homeowners who are facing tax and blight issues. Members also make sure that vacant homes are physically maintained by performing tasks such as cleaning yards and cutting grass.

    Project Goal

    While the VPTF works with community members to make sure that vacant properties are being maintained, the process by which they complete this work is unstructured, lacking formal practices and procedures. Information is maintained mostly through word of mouth. Some steps require submitting information through city websites and apps, and while some members are tech-savvy, others struggle with these technologies. To this end, Interwalla conducted research and analysis through contextual inquiry to analyze the current process, suggest improvements, and recommend optimized documentation procedures that the GRDC can share with task force members and the general public.

    Background

    The Vacant Property Task Force, or VPTF, is one of the nation’s most effective neighborhood volunteer organizations, working to preserve and improve the Grandmont Rosedale community of neighborhoods in northwest Detroit. More impressive still is that the VPTF was founded in response to the housing market crash of 2007, and the founding members had no experience to guide them through those turbulent times. They simply rolled up their sleeves and got to work. Over the years, the need for the task force has diminished, which speaks volumes about its impact and effectiveness. And yet the threat of another economic downturn always looms on the horizon.

    Our research has shown that as the years go by, new volunteers are few and far between. Who wants to join a vacant property task force if vacancy doesn’t feel like a pressing issue? Further compounding this are barriers to entry. Some of the more seasoned volunteers don’t respect input from newer volunteers who weren’t around when the issues the VPTF addresses were at their peak. In some cases, instead of passing on the knowledge and experience they have accrued over the past decade, the more experienced volunteers prefer to continue doing the work themselves rather than explain how to do it to someone else.

    The challenge presented by the GRDC is to optimize documenting procedures so that if and when another economic crisis affects their community, volunteers can be quickly onboarded and mobilized to educate and protect the community from tax foreclosure and the encroaching vacant property and blight issues it brings with it. In the meantime, a resource guide is sought to provide useful tips and guidelines on how the general public can carry out some of this work on their own.

    Utilizing data from our background research and contextual interviews, Interwalla constructed an affinity wall to surface connections across seemingly disparate pieces of information and to find the common thread that binds each of the stakeholders we interviewed not only to their neighborhood but to each other. In this respect, our team’s mission has been to provide subtle yet high-impact information solutions that, if implemented, could have significant positive reverberations throughout the entire GRDC.

    Methodological Overview

    Contextual Inquiry

    Interwalla followed the user-centered design processes primarily utilizing contextual inquiry (Holtzblatt, Wendell, & Wood, 2005). Contextual inquiry is a semi-structured interviewing methodology used to obtain information about the context of use. Users are first typically asked a set of questions, followed by observations and further questioning as they work in their own environments (Herzon, DeBoard, Wilson, & Bevan, 2010).

    Because the nature of the VPTF’s work made direct observation impractical, Interwalla adapted the process and conducted a more expansive standardized interview in which users walked us through specific recent experiences. Our aim was to gather rich detail about work practices as well as the social, technical, and physical environments and user tools. Contextual inquiry is based on a set of principles that make it adaptable to a range of situations. The technique is generally used at the beginning of the design process and is a reliable method for gathering the kind of information we sought.

    According to Herzon et al., the four principles of contextual inquiry are:

    • Focus — Plan for the inquiry, based on a clear understanding of overall purpose.
    • Context — Go to the user’s environment and observe them do their work.
    • Partnership — Engage with the users to reveal unarticulated aspects of work.
    • Interpretation — Arrive at a shared understanding with the users about the aspects of work that matter.

    Contextual inquiry is most useful in defining requirements, process improvement, learning what’s most important to those involved, and informing future projects.

    Background Research

    In order to achieve focus and plan for the inquiry, each member of Interwalla conducted distinct background research to establish a generalized profile regarding the problem, the client, the sector, and organizational issues as they pertain to the implementation of information systems. This background research was crucial in informing our team before heading into interviews and observations to gather context.

    Participant Observation

    Matt conducted a participant observation session as a representative of Interwalla at the VPTF monthly meeting held on October 15th. Participant observation is a qualitative method with roots in traditional ethnographic research. It is precisely what it sounds like: the researcher not only observes the activity but also participates alongside the group being observed. This method builds trust and adds depth to the researcher’s insights while clarifying observer bias through self-reflection.

    Contextual Interviews

    Our interview participants were selected with assistance from our client. We were provided with six individual stakeholders and also sat down with the Community Engagement Manager, for a total of seven interviews. Although the VPTF, as a volunteer organization, officially has a flat hierarchy (no member has authority over another), we were presented with a range of subjects: founding members and newer members, the VPTF “Chair”, two members of the GRDC board of directors, and the Community Engagement Manager. This range of stakeholders gave Interwalla a significant cross-section of roles within the program and their relationship to the greater organization, yielding representative insights and adding depth to our inquiry.

    The interviews themselves focused on three primary topics. We endeavored to learn, from each stakeholder’s perspective, about the task force, its tasks, and the environment in which they occur. Since the Grandmont Rosedale community comprises five distinct neighborhoods, we also sought to learn more about these neighborhoods and the community directly from the residents who have committed to their preservation.

    Artifact Survey

    Pertinent to our research was a survey of the artifacts in use, both physical and digital. In the client brief we learned that while some of the task force members are tech-savvy, others struggle with digital technologies. We were also made aware of communication and organizational gaps, as well as tensions between some long-standing members and newer members with new ideas. Any viable recommendation on our part had to consider which tools and technologies each individual user was familiar with, and the extent to which they could benefit from the digital solutions we had to offer. Moreover, we collected a trove of documents that served as previous, less formal incarnations of the type of guide the GRDC is seeking help creating.

    Affinity Wall

    The affinity wall was our primary vehicle for data analysis. It derives from the KJ Method developed by Japanese ethnologist Jiro Kawakita, created in response to the difficulty of assembling complex ethnographic data into a coherent story that yields insights into the people being researched (Scupin, 1997).

    As a team, we broke the interviews down into individual “affinity notes”, then pored over them looking for meaningful clusters. As we put these together, we wrote a sentence describing the common thread that made each cluster meaningful and put it on a blue sticky note. We then studied the blue notes closely, and where we found meaningful clusters, we labeled an orange note with a description of the common thread. Some of these orange notes also shared a common thread, so we labeled a green note with the overarching similarity between them. In this manner, we assembled something of a pyramid that tells the tale of the GRDC, the VPTF, and the community in which they reside and serve.

    The completed affinity wall
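    The layered clustering described above (yellow affinity notes grouped under blue, blue under orange, orange under green) can be sketched as a simple tree structure. This is a hypothetical illustration of the method’s shape, not a tool we used; the note texts below are invented examples, and the real wall was built from physical sticky notes:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    """A single sticky note on the affinity wall."""
    text: str
    color: str                                   # "yellow", "blue", "orange", or "green"
    children: List["Note"] = field(default_factory=list)

def cluster(notes: List[Note], label: str, color: str) -> Note:
    """Group a set of notes under a new higher-level note that names their common thread."""
    return Note(text=label, color=color, children=list(notes))

# Yellow affinity notes broken out from interview transcripts (invented examples)
yellows = [Note("Prefers phone calls over email", "yellow"),
           Note("Struggles with the city's reporting app", "yellow")]

# Each higher tier is a one-sentence description of the common thread below it
blue = cluster(yellows, "Some members are uncomfortable with digital tools", "blue")
orange = cluster([blue], "Technology comfort varies widely across the task force", "orange")
green = cluster([orange], "Recommendations must work for all comfort levels", "green")

def depth(note: Note) -> int:
    """Height of the pyramid beneath a note (a lone yellow note has depth 1)."""
    return 1 + max((depth(c) for c in note.children), default=0)
```

Reading the wall top-down, each green note summarizes the orange notes beneath it, and so on down to the raw interview data, which is what lets a pile of fragments tell one coherent story.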

    Findings and Recommendations

    Overview

    We derived several important findings through our background research, artifact survey, and affinity wall analysis. The VPTF has been so effective that it is on the verge of dissolution. And yet the members of the VPTF and their experience have become so integral to the past, present, and future work of the GRDC that a transformation of volunteer roles may be in order as the VPTF revises its mission.

    Our goal was to analyze documentation procedures and recommend ways to optimize them, carrying the processes and experience the early members developed into the future. Our research suggests that the VPTF should focus on deploying a webpage and an updated Vacant Property Toolbox in order to document and preserve the VPTF’s processes, updating them as needed as a continued resource and model for new volunteers, the general public, and other communities.

    The VPTF Needs an Elevated Digital Presence

    GRDC Homepage

    While the GRDC operates a website, programs like the VPTF get little exposure: they do not appear in the header menu, and information about them cannot be found without scrolling halfway down the homepage. “Contact Us”, however, appears not only in the header menu but also center stage on the initial loading screen. Our primary finding is that the VPTF program needs a webpage. From the initial client brief, the first meeting, and the stakeholder interviews and affinity wall analysis, we learned that the VPTF appears ready to create a home for itself on the internet.

    Evidence:

    • At the first meeting with the client, Interwalla was presented with a number of pamphlets and flyers that had been distributed over the years to neighbors and new residents. Several stakeholders, referencing these artifacts, suggested that an updated version of these documents should take the form of a website.
    • Through our interviews, along with recent news coverage, we became aware of interest in using VPTF processes as a model to roll out in other Detroit communities.
    • With less need for reporting vacant property, the work of the VPTF has shifted to documenting its processes and optimizing their format so that newcomers can use them.

    Recommendation: VPTF webpage

    When we go to grandmontrosedale.com, we are presented with a responsive, well-designed website that looks great in both mobile and desktop browsers. What it’s missing are pages for the various programs the GRDC facilitates. Given the objective of the project, combined with the interest in rolling out the VPTF’s efforts as a model for the rest of the city, Interwalla finds this recommendation pertinent to elevating VPTF awareness and accessibility among the general public.

    In this case, a webpage would also serve as a living resource and archive of past and current documentation of the processes and guidelines the VPTF uses. The GRDC website already has a templated design hosted on WordPress, which means much of the work is already done: adding a new page should be relatively easy for the site’s webmaster, Loudbaby. For a VPTF webpage, volunteers should come together and collaborate on its content. In addition, we devised a means of increasing the visibility of GRDC programs.

    Increase the visibility and recognition of GRDC programs by giving each its own webpage on the GRDC site
    GRDC Homepage mobile

    There is a caveat here: on a mobile browser, only the Support Us button is visible. This is important to consider because mobile browsers are how most people now access the web. To account for this, Interwalla advises either (a) making the Contact button dominant over Support and adding a donation banner across the top or, if possible, (b) making both buttons visible on mobile devices.

    An Updated Vacant Property Guide

    The primary concern the GRDC presented us with was the need for an updated guide; this was repeated numerous times throughout the interviews. As part of our artifact survey, we took into consideration the previous incarnations the VPTF had created over the years. While a webpage could serve as the digital VPTF guide the GRDC is seeking, we found that a digital-only solution would be of little benefit to those who aren’t as tech-savvy.

    Furthermore, a physical guide is something that can be distributed to new residents and volunteers, passed out at community events, or utilized by other neighborhood associations.

    Evidence:

    • During our first meeting with the GRDC, Interwalla was presented with several resources and bulletins used by the VPTF members. We learned that most of these resources were not only undated but outdated as well.
    • A common resource used by many members of the VPTF is the Vacant Property Toolkit handbook provided by Detroit Community Resources and the Detroit Vacant Property Campaign in association with the University of Michigan Taubman College of Architecture and Urban Planning. This resource is also outdated and includes names and contact information that are not valid anymore.
    • Some members of the VPTF do not use digital technology.
    • Volunteers expressed a desire to have a guide to distribute to new residents.

    Recommendation: A GRDC branded Vacant Property Toolkit (long-term)

    We recommend that the VPTF work collaboratively to establish branded physical guides. In web design, we structure information for mobile viewing first, since that is how most people will see it. This forces designers to establish an information hierarchy, thinking hard about what content is most important for people to access and proceeding from there. We found that the same approach would prove useful for updating the printed materials used as a guide and resource manual.

    In the short term — A Business Card:

    Business card mockup with key links on back

    As an example, we have included a mockup of a business card that serves as a physical manifestation of the kind of mobile-first design considerations described above. The card serves as a quick and easily distributed resource that shares primary contact information and links to the tools and resources the VPTF makes the most use of. A card such as this can be distributed to new residents, displayed at local businesses, passed out at community events, or carried by volunteers and handed out at a moment’s notice should the need arise.

    In the short term — A Brochure:

    Expanding on the information contained on the card, a trifold brochure could be printed and distributed in much the same fashion. The idea here is that the brochure would contain the same information as the card with the addition of the next most important information as determined by the VPTF. Referring to our web design analogy, a brochure is akin to designing for a tablet.

    For the long term — An Updated Vacant Property Toolbox:

    Further expanding on the information contained in the brochure, we arrive at an updated toolbox. This toolbox need not be created from scratch. Our recommendation is to review previous incarnations of the Vacant Property Toolbox to determine what information is outdated and what is still salient. The task force is composed of members who know what information is useful and what can be discarded. The toolbox should be more expansive and provide a level of detail on par with a training manual, and as always, we should consider the audience we are writing for. It should document not only the methods and resources the VPTF uses, but also what the VPTF does that makes its contributions to the community so effective.

    In coordination with our previous recommendation, we advise that a digital version of the toolbox be preserved so that new editions can be revised over time rather than created from scratch. Outdated versions can be archived on the VPTF webpage, and current versions can be viewed on the web or downloaded as a PDF and printed for distribution or personal use. There is an added advantage here: the GRDC can apply its branding for greater exposure, further elevating its status and the impact of the VPTF’s work, particularly as the toolbox is adopted in other Detroit neighborhoods and beyond.

    The VPTF Can Improve Fundraising Efforts

    VPTF fundraising is central to strategic planning at the GRDC, but many past fundraising projects have required high effort for little to no return.

    Evidence:

    • The VPTF meeting on October 15th focused primarily on soliciting fundraising ideas from members. We learned that while the VPTF is central to strategic plans at the GRDC, many past fundraising efforts have required high effort for little return.
    • One member mentioned that often residents would prefer to just give money to the organization rather than buy something like a holiday wreath, which was still the most successful fundraiser to date.
    • Another member suggested a holiday movie screening. Another idea was a mobile display that could be set up at various community events, staffed by volunteers, and used to simply ask for donations.
    • Some stakeholders confirmed in their interviews that much of the VPTF’s recent work is fundraising to buy supplies like boards and equipment. Additional commentary reiterated that VPTF fundraisers have typically been a lot of work for little return, and that with every year that goes by, everyone gets older. The VPTF is looking for more efficient, less physical ways to raise money.
    • In addition, in recent years there has been less need for boarding up houses; more of the work is researching and reporting vacant property or code violations, either to the owner or to a management firm, and then following up with the city if necessary to issue a ticket and spur action.

    Recommendation: Crowdfunding

    “Crowdfunding is the best way to expand a nonprofit’s donor base.”

    Stats courtesy of https://www.mobilecause.com/crowdfunding-for-nonprofits/

    This recommendation was raised at the VPTF meeting on October 15th, and Interwalla agrees that it would be the best move going forward to optimize fundraising efforts using information technology. Crowdfunding leverages the power of social networking by engaging not only the organization’s connections but also those of its members.

What sets crowdfunding apart from more traditional donation pages is its more personalized touch. Crowdfunding campaigns often include pictures or short videos that highlight the impact of the organization or, in some instances, illustrate the problem that needs solving.

Similarly, peer-to-peer fundraising may also be utilized. This is essentially the same as crowdfunding, but it puts the volunteers in control of producing and promoting the campaign, which can either be ongoing or have a set deadline/amount.

    The VPTF Can Improve Collaboration

    Any team or organization can benefit from improved communication and collaboration, and the VPTF is no different. In our recommendations above, we elaborated on our key findings and provided recommendations on what the GRDC and VPTF could do to optimize documentation and reporting procedures. What follows is a recommendation for how. The affinity wall analysis alone yielded rich data and several key findings.

    Evidence:

    • First, Matt observed communication and collaboration issues at the VPTF meeting despite concerted efforts to manage and facilitate discussion and solicit ideas.
    • Some members talk over others and don’t respect other people’s ideas.

    From the stakeholder interviews and affinity wall analysis we learned:

    • Most of the traditional VPTF work can be conducted alone or with a partner.
    • Overwhelmingly, the most consistent sentiment among stakeholders was a sense of pride in the impact the VPTF has had on the community and the strong sense of identity that came with being a part of that.
    • While everyone agreed that the overarching mission of the VPTF was to not be necessary anymore, there was a consistently expressed desire to perhaps reevaluate the mission of the group rather than dissolve it altogether.
    • Divergent ideas about where that might lead followed.
    diagram of the design thinking process
    Design thinking is an iterative approach to problem-solving

Recommendation: Co-design

You don't have to be a designer to benefit from design thinking. Design thinking strategies are highly effective problem-solving strategies that are increasingly being employed with great success in a range of industries in both the public and private sectors, particularly nonprofits.

    For more information, see: https://www.nten.org/article/design-thinking-a-powerful-tool-for-your-nonprofit-0/

Co-design is very similar to participatory design, which advocates for changing not merely the systems, but the practices of system design and building, to support democratic values at all stages of the process. "From participatory design, we draw several core principles, the reflexive recognition of the politics of design practice and a desire to speak to the needs of multiple constituencies in the process" (Sengers et al., 2005).

    Co-design differs from participatory design in that it asserts that users can design the solution for themselves. There are two potential pitfalls in this approach that any organization adopting a co-design process needs to be aware of.

    1. When the designer falls back into more of a support role, the result is often design by committee. To counter this, clear leadership is required in order to keep focus and make tough, holistic design decisions.
2. When co-design is used as research, it can be quite effective, but then it is research, not co-design.

    Users can, however, participate in and employ the iterative strategies designers use to come up with and build on ideas, and co-design is a strategy that is proven to work at scale, from international campaigns, to open source projects, and even in small team environments and agency work (Casali, 2013).

In fact, the GRDC building has a fantastic space in which to facilitate neighborhood workshops (something the GRDC already does) or design jams to ideate innovative solutions to neighborhood problems. In this respect, the GRDC would be utilizing community involvement through resident and volunteer participation that strengthens the social fabric of the neighborhoods and reinforces the GRDC as the premier convener of the Grandmont Rosedale community at large. The neighborhood organization works best when residents and volunteers seek to maximize interdependence and participation within the community (Huggins, 2002). While a participatory design strategy was beyond the purview of our project, it is our recommendation that a GRDC-facilitated co-design process be implemented in the long term as a means of community-based input into the ongoing design, implementation, and management of data and information systems now and in the future.

    “How Might We?”

    IDF provides world-class free resources on design thinking

    You may be asking, “Great, but how might we get started?” (Get it?) The good news is, you’ve already begun. Design Thinking sounds new and different, but it’s simply a set of iterative techniques that you can employ in any team environment to achieve better results by putting the structure of the problem-solving methods used by innovative organizations around the world to work for you.

    In the design process, we do research in order to understand user needs and define the problem. In co-design, because the users are the designers and the problems the organization wants to focus on are already defined, Interwalla recommends beginning with, “How might we…?” This is a simple, yet powerful rephrasing of the established problem that opens the floor to explore a range of possibilities uncovered in the ideation phase.

    • How: We ask “how” because we don’t yet have the answers we seek. Beginning with “how” helps participants explore a variety of possibilities instead of diving straight into what we think the solution should be.
    • Might: The usage of “might” is important as it emphasizes that our ideas are only possible solutions and that we shouldn’t be too attached to the initial ideas that spring to mind.
    • We: “We” is critical to the overall co-design strategy as it immediately implies and reinforces that this is a collaborative effort and that the solution will be found through teamwork.

    According to the Interaction Design Foundation (IDF):

    “How Might We” (HMW) questions are the best way to open brainstorm and other ideation sessions where you explore ideas that can help you solve your problem. By framing your problem as HMW questions, you’ll prepare yourself for an innovative solution.

For more information please see: https://www.interaction-design.org/literature/article/define-and-frame-your-design-challenge-by-creating-your-point-of-view-and-ask-how-might-we

    Brainstorming

    Good brainstorming is at the heart of innovation

Brainstorming is a well-known activity commonly used by teams within organizations to generate many ideas to solve a problem. But a lot of brainstorming sessions are unstructured and ultimately fail to achieve optimal results.

    Brainstorming is a useful tool at any point in a design or work process and is often utilized throughout. As an example, for this project, we brainstormed interview questions and used the interview data to brainstorm problem statements in order to brainstorm ‘how might we’ questions that we then brainstormed answers to.

Often referred to as the "double diamond," this diverge-converge pattern will often repeat several times over the course of a project.

    At any stage of the design thinking process above, when you need to generate ideas to solve a problem or challenge, the goal should be to generate many ideas that diverge from one another. You then take these ideas as Michelangelo takes a block of marble and whittle away at the superfluous pieces until you reveal the masterpiece hidden inside.

We recommend the co-design strategy as a means of optimizing website and guidebook content while reinforcing group cohesion and community interdependence. Soliciting ideas from VPTF volunteers is a major part of team meetings. Taking a closer look at how to develop HMW questions and run brainstorming sessions can make these meetings more productive and fruitful.

Over the years, some of the most innovative design thinking experts from the world-famous IDEO and Stanford's d.school have developed best practices that the GRDC can implement to provide structure to ideation sessions and meetings, including but not limited to the VPTF.

    • Set a time limit

    It’s important to set aside a specific period in which everyone in the group operates in brainstorm mode.

The facilitator needs to stress and enforce the prohibition of judgment and keep the group focused on generating as many ideas as possible.

    The worst possible ideas are encouraged!

    • Stay focused on the topic

    Brainstorming should always address a specific question.

    Attempting to address multiple questions in a single session doesn’t work.

"How might we…?" questions tend to be the best.

    • Defer judgment or criticism, including non-verbal

    Brainstorming sessions are not the time to judge or criticize.

    It’s crucial that participants feel confident and safe to put forward wild ideas.

    The best ideas come from those who dare to be different.

    • Encourage weird, wacky, and wild ideas

    At best, you get an incredibly innovative solution.

    At worst, you get an idea you don’t use.

    Wild ideas often give rise to creative leaps.

    • Aim for quantity

    The more ideas, the better chance you have to innovate.

    • Build on each other’s ideas

    Brainstorming works well when participants build on each other’s ideas.

    Our minds are highly associative, and one thought can trigger another.

    Building on each other’s ideas helps participants get out of their own thinking structures when they can’t come up with anything else.

    • Be visual

    At UMSI, we are particularly fond of Post-Its.

    Use sketching or models to visualize your idea.

    Act it out. (bodystorming)

    • One conversation at a time

    You’re here to ideate together.

    Don’t obsess over your own idea.

After time expires or all ideas are presented, select the best ideas through various methods like "Post-It Voting" or "Bingo Selection".

    You can take the best ideas and build on them in further brainstorming sessions.

    For more information please see: https://www.interaction-design.org/literature/topics/brainstorming

    Conclusion

    Our research and analysis yielded key information regarding the VPTF’s procedures, concerns, and team collaboration. We translated these findings into clear, executable recommendations that the GRDC can implement both short term and long term.

In conclusion, our recommendations address most of our key findings. The VPTF web page provides the task force with an elevated digital presence and a hub for past and present documentation by the members. This is pertinent given the VPTF's desire for greater publicity and new member involvement. Given the VPTF's history of fundraising efforts, we suggest crowdfunding as an effective way to expand the VPTF's donor base and improve fundraising overall. Due to the expressed desire to revisit and perhaps change the VPTF's mission, as well as the strong sense of community and identity within the VPTF, we suggest co-design strategies to foster greater collaboration in the team's decision-making process. Lastly, we suggest that these strategies be used to optimize a team-wide effort to update the VPTF's outdated documentation. We hope that our recommendations will meet the needs and concerns of the Vacant Property Task Force and fit within the larger mission of the Grandmont Rosedale Development Corporation.

    Bibliography

    Battersby, L. (2017). Co-creation Methods: Informing Technology Solutions for Older Adults. Human Aspects of IT for the Aged Population: Aging, Design and User Experience, 77–89.

    Goddeeris, A. (2014). Securing Neighborhoods. Agora Journal of Urban Planning and Design, 110–118.

    Herzon, C., DeBoard, D., Wilson, C., & Bevan, N. (2010, 01). Contextual Inquiry. Retrieved from Usability Body of Knowledge: http://usabilitybok.org/contextual-inquiry

    Heugens, P., & Drees, J. (2013). Synthesizing and Extending Resource Dependence Theory: A Meta-Analysis. Journal of Management, 1666–1698.

    Holtzblatt, K., Wendell, J. B., & Wood, S. (2005). Rapid contextual design a how-to guide to key techniques for user-centered design. San Francisco, CA: Elsevier/Morgan Kaufmann.

    Huggins, M. (2002). Volunteer Participation in Urban Neighborhood Organizations: An Exploration of Individual and Contextual Characteristics. East Lansing: Michigan State University Dept. of Resource Development.

Massey, P. A. (2019, 06 04). City hires new director to help close Detroit's digital divide. Michigan Chronicle, 82(38), B6. Retrieved from https://proxy.lib.umich.edu/login?url=https://search-proquest-com.proxy.lib.umich.edu/docview/2253095182?accountid=14667

    Nonprofit Finance Fund. (2019, 11 11). Grandmont Rosedale Development Corporation. Retrieved from GuideStar: https://www-guidestar-org.proxy.lib.umich.edu/profile/38-2885952

    Scupin, R. (1997). The KJ Method: A Technique for Analyzing Data Derived from Japanese Ethnology. Human Organization, 233–237.

Sengers, P., Boehner, K., David, S., & Kaye, J. (2005). Reflective Design. Proc. 4th Decennial Conference on Critical Computing, (pp. 49–58).

Wastell, D. G. (1999). Learning Dysfunctions in Information Systems Development: Overcoming the Social Defences with Transitional Objects. MIS Quarterly, 23(4), 581–600. Retrieved from https://misq.org/cat-articles/learning-dysfunctions-in-information-systems-development-overcoming-the-social-defenses-with-transitional-objects.html

    Wells, M. A. (2009). Perceptions of Knowledge Gatekeepers: Social Aspects of Information Exchange in an Organization Undergoing Change. Sydney: University of Western Sydney School of Management. Retrieved from https://researchdirect.westernsydney.edu.au/islandora/object/uws:7822/datastream/PDF/download/citation.pdf

  • What about Personas?

    As we were going over Personas in my Interaction Design course at UMSI, I began seeing some articles on the topic that I wanted to share with the class.

    Kill your Personas — Microsoft Design

    Stop obsessing over user personas — UX Collective

The discussion we had also correlates with an issue I'm having with the MacLean et al. reading. While I found Design Space Analysis highly informative and useful for the design process overall, I'm hung up on QOC being argument-based. As interdisciplinary as Design Thinking is, when we justify our decisions by arguing for our rationale rather than offering proof, we end up, in effect, making excuses for what we did based on our own internal logic.

Models are only useful until they aren't. Models, analogies, metaphors, and the like are kind of like stents that force a communication channel open to cram more information through than that channel could otherwise handle. The experts who develop the models understand a model's limits and drawbacks better than the person who is introduced to the concept through it. So we need to really hone our instincts so we know when to break our own rules. A good recent example is the information processing model we spent the first half of the semester in 588 learning. Everything about vision, perception, attention, and memory that we just learned in that class was related through this model. But that's not what our brains actually look like, so how do we know where the model breaks down? How many generations removed are we from the experts who developed it?

Despite what the rationalists think, logic occurs inside the individual. It's good that we abstract data to create personas, as noted in the readings. But as discussed in the articles above, we tend to ascribe erroneous details to these personas that come from our internal logic rather than the data. Ultimately, I think this results in a holistic thinking that's rather hollow. As Sapolsky notes in his tome on human behavior, rationalism is most often rationalizing away violence as just part of human nature. We aren't wired for [this], we didn't evolve for [that]. Neither are we a 'tabula rasa' or clean slate. We are born with an array of biological behavioral propensities that are cultivated through environmental inputs and our reactions to them.

The Sapir-Whorf hypothesis suggests that the words we use shape our perceptions of the world; we can only think in terms of the words we know how to think in. When we enter a design process as non-experts, we look to user research to drive insights that give us a sense of holistic expertise. When we justify by arguing rationale rather than offering proof, we employ rationalism, which essentially holds that whoever wins the argument is right, or at least closer to the truth than those who lost. As they say, history is written by the victors.

    I say all of this because I have a growing concern that the interdisciplinary approach is starting to appear somewhat shallow and self-congratulatory. Like Dr. Malcolm said in Jurassic Park, we were “so preoccupied with whether or not we could, we didn’t stop to think whether or not we should.” Businesses scrutinize every penny and I see a future of tight deadlines and budgetary concerns where we fudge user research and employ our own inner logic to advocate for our own crappy designs while we post inspirationals on Instagram, repeating that saying, “You are not your user.”

    But maybe we should be.

    References:

    “Chapter 5: Structured Findings” in Saffer, D. (2010). Designing for interaction: Creating innovative applications and devices (2nd ed.). Berkeley, CA: New Riders.

    MacLean, A., Young, R. M., Bellotti, V. M. E., & Moran, T. P. (1991). Questions, options, and criteria: Elements of design space analysis. Human-Computer Interaction, 6(3–4), 201–220. (through section 2)

    Case Study: http://vesperapp.co/blog/how-to-make-a-vesper/

    “Chapter 5: Picking the Right Tool” in Warfel, T. Z. (2009). Prototyping: A Practitioner’s Guide. Brooklyn, NY: Rosenfeld Media.

    Chapters 6–11 in Warfel, T. Z. (2009). Prototyping: A Practitioner’s Guide. Brooklyn, NY: Rosenfeld Media.

    GUI Prototyping Tools: http://c2.com/cgi/wiki?GuiPrototypingTools


    Originally published at http://mtthwx.com on November 15, 2019.

  • Resy and OpenTable: a comparative case study

The goal of this report is three-fold. It seeks to compare and evaluate two competing websites in terms of human-computer interaction, with a focus on the information processing model, which likens our cognitive processes to how a computer works. Using this critique, we will then propose a new design and justify why it is an improvement over these two sites (Wickens, Hollands, Banbury & Parasuraman, 2015, pp. 3–5).

    image source: https://dataworks-ed.com/blog/2014/07/the-information-processing-model/

    Compare:

    For comparison, this report will consider two users with individual differences on Jakob Nielsen’s (1993) user cube, as shown below.

    Janet is a cohort 2 Baby Boomer and domain expert with minimal computer expertise (Norman, 2008). She’s made her career in hospitality and marketing, beginning her first restaurant position as a hostess in high school. She was responsible for taking reservations the “old-fashioned” way, by phone.

    The “user cube”. J. Nielsen, Usability Engineering

    Janet worked her way up to restaurant manager by her early 30s, and now in her late 50s, she’s the regional manager of a franchise bar and grill. She’s comfortable enough on the computer to complete her tasks, mostly related to work, but spends little time online. Her task is to book a reservation for her and the eight General Managers in her region to celebrate a great quarter. We can imagine she is looking for a reservation for 10/26/2019 from 6–9 pm at a restaurant that serves alcohol and accommodates vegetarian and gluten-free options.

    Earl is a high school senior, Gen Z, preparing for his first date with his new girlfriend on 10/19/2019 for Sweetest Day. As such, he’s ignorant about the domain (both making reservations and dating) but has relatively extensive computer experience. Earl hopes the website will show him a good recommendation for a romantic evening at a restaurant in a teenager’s price range. As we compare these two websites, consider Janet and Earl and their tasks at hand. For them, how do OpenTable and Resy compare?

We find Janet more confused by Resy than by OpenTable. As she arrives on the homepage, she understands she can click "Detroit" and "Guests" to select her options; a downward-facing caret suggests a dropdown menu will appear once clicked. As she hovers over these menus, she notes that the cursor turns into a hand, which provides immediate feedback and prompts her to begin building a mental model of how the site works.

She is puzzled by how to select her specific date; the cursor doesn't change when she hovers over "Today," and there is no caret to suggest she should click on the word. This is inconsistent with the mental model she is building, in which colored words, in conjunction with a hand cursor on hover, suggest "clickable." Here we find a missed opportunity to exploit redundancy, resulting in a design that doesn't immediately support the maximization of automaticity and unitization (Lee, Wickens, Liu & Boyle, p. 170). And while this also slowed Earl down, his computer expertise and habituation from other websites informed his decision to click anyway to see what, if anything, happens (Johnson, p. 5).

When Janet goes to "View all Detroit Restaurants" via the search menu, the long list of locations appears in no particular order. The screen is split between the restaurant list and a map marking all the participating restaurants in the area, but with no corresponding information. Even hovering over points on the map yields no new information. Only by clicking on a point does the user see movement in their peripheral vision as the list of restaurants on the left side of the screen moves to bring the selected restaurant to the top of the panel.

Simultaneously, a pop-out feature displays the selected restaurant's information in a box over the map pin the user just clicked. Still, Janet initially misses this pop-out information, which is overshadowed by the movement in her peripheral vision (Ware, pp. 27–35). Additionally, the scroll bar on the far right of the screen is mapped to the restaurant list on the left, with the map separating the two, a clear failure to design for stimulus-response compatibility (Ritter, Baxter & Churchill, 2014).

    By comparison, Janet has a much easier time figuring out how to navigate OpenTable. The center of OpenTable’s homepage is consumed by the main feature, making a reservation. She can immediately see how to select her chosen date, time, and the number of guests. The “Let’s Go” button is easily recognizable as a button, signifying clickability combined with what Saffer refers to as feed-forward; the button’s label tells the user what will happen before clicking the button (Saffer, 2010, p. 133).

    Clicking on the “Let’s Go” button, she is presented with a long list of restaurants as well as a “Map” button and a variety of options chunked on the left side of the screen, creating meaningful sequences that she can select in order to narrow down her search (Lee et al., p. 177).

However, upon clicking the button and being taken to the next screen, we find a box featured in the center of the screen labeled "Restaurants with Bonus Points." What are bonus points? On the top right of the box, we see a link labeled "About Bonus Points," but even after clicking this link, it is not clear what bonus points are or how they work, as we are taken to a new page with a list of articles to sift through to learn more. This disrupts the user and largely distracts them from making a reservation, as their attention is now spent on information about bonus points filling up their working memory (Johnson, pp. 90–94).

Overall, OpenTable is more consistent in applying the appropriate interactive features for the tasks the user wishes to perform. OpenTable offers the map-level view as an option but improves on the design by providing a scroll bar right next to the list of restaurants it is mapped to. Earl's computer expertise, on the other hand, gives him an edge in that he can eventually figure out both sites, though he was initially confused by the Resy interface and found it less intuitive and more difficult to navigate according to his model of how website navigation typically works, in line with stimulus-response compatibility (Ritter et al., 2014).

OpenTable's design draws the user's eyes to the center of the screen and keeps them there. It strategically arranges supporting information around the periphery in an easily understandable format, allowing users to quickly perform a visual search that supports pattern building from the bottom up while top-down processes reinforce relevant information (Ware, pp. 8–17).

Resy is arranged to be viewed left to right and top to bottom, but the layout doesn't offer a clear flow as the user's eyes scan over the menu of cities to select from, even though the website has already detected the user's location. The elements involved in initiating a search and booking a reservation are less distinct from the rest of the page and blend somewhat into the white space across the header (Lee et al., p. 109).

Viewing all restaurants focuses the eyes on the map, which presents no information aside from an array of pinpoints, while the more pertinent information is situated around the screen's periphery.

How does the user determine which pinpoint they should bother clicking on? If they have clicked on a few points already, how can they tell which points were already clicked? Rather than supporting recognition of where they've already clicked, Resy forces users to recall it for themselves, something humans tend to struggle with (Johnson, pp. 121–129). Overall, the map distracts the user and impedes bottom-up pattern building, as more attention is required from top-down processes to scan for relevant information (Ware).

    Design:

Figure 1 Improved landing page design | "Dinner Reservation" by Rafael Farias Leão is licensed under CC BY 3.0

    Explain:

In Figure 1, the design focuses the user's attention on their primary task: making a reservation. This is accomplished by bringing all the necessary elements center stage (Esser, 2017). The selections are clearly labeled and contrasted with the surrounding whitespace so that features are more easily detected. The stacked positioning of the selection and search boxes improves the speed and accuracy of moving from box to box per Fitts's Law (Johnson, pp. 187–191).
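The Fitts's Law claim can be sketched concretely. In the sketch below, the constants and pixel distances are illustrative assumptions, not measurements taken from either site:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) from Fitts's law
    (Shannon formulation): MT = a + b * log2(D / W + 1).
    The constants a and b are device- and user-specific;
    the values here are placeholders, not measured data."""
    return a + b * math.log2(distance / width + 1)

# Stacking the date/time/guest boxes shortens pointer travel
# (smaller D) while target size (W) stays the same, so the
# predicted time to move between fields drops.
spread_out = fitts_mt(distance=400, width=40)  # widely spaced fields
stacked = fitts_mt(distance=60, width=40)      # stacked fields
assert stacked < spread_out
```

The key design lever is the D/W ratio: either shrinking the travel distance between targets or enlarging the targets themselves lowers the index of difficulty.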

This also improves the experience for users who prefer to navigate the website with the keyboard using Tab targeting, and it helps keep more of the initial reservation selections from falling outside the user's focus into the periphery (Johnson, p. 56). This design places "Top Rated" and "Popular Cities" around the periphery of the homepage to support the needs of users like Earl, who are interested in browsing recommendations, without obfuscating the primary task of making a reservation.

Finally, this design maintains the labeling of the "Let's Go" button from OpenTable but increases its size for improved targeting and prominently displays the button within the user's detection field (Ware, pp. 37–42). We changed the button's color from red to green to take advantage of the greater contrast with surrounding colors (Johnson, p. 39). This has the added benefit of utilizing the socio-cultural schema in American society linking green with the word "Go" (Marcus, 2000).

    Figure 2 Improved search page design | “Restaurant Food Icons” by macrovector_official. This image has been designed using resources from Freepik.com

    Explain:

    The design presented in Figure 2 shifts the center stage contents of Figure 1 to the header of the page while maintaining the relative size of the boxes. The map button is moved next to the city search for more consistent “chunking” during the visual search and pattern building as the user constructs a model of how the page flows (Lee et al., p. 177).

Graphical icons of popular cuisine options are prominently displayed across the top of the page to immediately draw the user's attention to cuisine options and allow them to begin refining their search. A scroll bar is placed just underneath the icons to convey to users that there are more options currently off-screen. These icons serve two purposes, as noted by Johnson. In Ch. 7 of Designing with the Mind in Mind, he notes that food will quickly get a user's attention even when we are well-fed (Johnson, p. 93). Since the user is visiting the page to select a restaurant for a reservation at which it is presumed they will be eating, it follows that getting the user thinking about the food they want sooner rather than later will aid in matching the user with their ideal restaurant. These icons also utilize graphic images to convey function, as explained in Ch. 9: users can click on the pizza icon, for example, and immediately refine their search to the notable restaurants that serve pizza (Johnson, p. 126).

    We also employ numerous data-specific controls that exploit chunking through a visual hierarchy along the left panel to allow users to select their chosen neighborhood, cuisine options, etc. (Moran, 2016). This provides even more structure and allows users to focus more on the relevant information specific to them and their tasks (Johnson, pp. 33–34).

    Conclusion:

While both Resy and OpenTable provide a similar service, OpenTable offers a better user interface in terms of the information processing model, particularly for older users. Resy's layout appears more focused on a clean, minimalist aesthetic, but it falls short in broad usability and appeal compared to OpenTable. Some interactive features feel awkward because they don't conform well to stimulus-response compatibility. The map feature feels cumbersome and forces the user to spend time working out how to narrow their selection rather than simply doing so.

OpenTable takes advantage of the center-stage approach to interface design. It provides usability features for a broader range of users, presenting the map and list view as different modes that users can select. Overall, the OpenTable design is superior but has its own issues. Date, time, and guest selections are too spread out on the homepage and cause the user to lose track of some of the information they entered, or missed entering, as their eyes focus on the "Let's Go" button (Johnson, p. 56). We also see some dark UX in use, as OpenTable makes a concerted effort to funnel users to promoted restaurants via "Bonus Points" (Brignall, 2019).

In our design, we took advantage of the center-stage approach for the homepage. We utilized chunking, visual hierarchy, and stimulus-response compatibility to provide an even easier-to-use interface that appeals both to our users as positioned on Nielsen's user cube and to those in between.

    References:

Alleydog.com. (2019, 10 09). Information Processing Model. Retrieved from Alleydog.com's Online Glossary: https://www.alleydog.com/glossary/definition-cit.php?term=Information+Processing+Model

    Brignall, H. (2019, October 10). What are Dark Patterns? Retrieved from darkpatterns.org: https://www.darkpatterns.org/

    Wickens, C. D., & Hollands, J. G. (2015). Designing for People: An Introduction to Engineering Psychology and Human Performance. London: Taylor and Francis.

    Esser, P. (2017, October 1). Center Stage — Help the User Focus on What’s Important. Retrieved from Interaction Design Foundation: https://www.interaction-design.org/literature/article/center-stage-help-the-user-focus-on-what-s-important

    Ritter, F. E., Baxter, G. D., & Churchill, E. F. (2014). Foundations for Designing User-Centered Systems. London: Springer.

    Lee, J. D., Wickens, C. D., Liu, Y., & Boyle, L. N. (2017). Designing for People: An Introduction to Human Factors Engineering (3rd ed.). Charleston, SC: CreateSpace.

    Johnson, J. (2014). Designing with the Mind in Mind. Waltham: Elsevier.

    Marcus, A. (2000). International and Intercultural User Interfaces. In C. Stephanidis, User Interfaces for All (p. 56). Mahwah, NJ: Lawrence Erlbaum Associates.

    Moran, K. (2016, 03 20). How Chunking Helps Content Processing. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/chunking/

    Nielsen, J. (1993). Usability Engineering. Cambridge: AP Professional.

    Norman, K. L. (2008). Individual Differences. New York: Cambridge University Press.

    Saffer, D. (2010). Refinement. Berkeley: New Riders.

    Ware, C. (2008). What We Can Easily See. Burlington: Elsevier.

  • Critical Issues in Information

    The most critical issues in the field of information seem to stem from the fact that we’re awash in it — information, that is. Making sense of this information and making it accessible, or at least useful, to the public can only be accomplished through adaptive technology and the culture’s adaptation of that technology.

    However, both technology and culture are prone to high degrees of variation throughout both time and space.

    In order to adapt technology to the people intended to use it, developers need good information on user needs, values, and patterns of behavior. With today’s technological consumer base more varied and diverse than ever before, it follows that the field of information requires a workforce that reflects the varied and diverse nature of a truly interconnected planet.

    Additionally, we need to keep in mind that Big Data, and the innumerable metrics by which to measure and analyze it, is creating a faster rate of change than society has ever seen. Our technological and material culture evolves more rapidly than our cultural values or, indeed, our biology. Take, for example, the rate of automation combined with the Protestant work ethic so ingrained in the moral fabric of the United States, and you can begin to see the core causes of the geopolitical tension around industries like manufacturing and energy, as well as the conversations and policies surrounding social welfare, unemployment, and the economy.

    If the questions to answer are what people need to improve their lives and how user-centered design can deliver it, then the strategy must be a shift from the etic (outsider) to the emic (insider) perspective, and an analysis that blends the two. The analysis of Big Data leaves significant gaps that can be filled with “thick data,” or ethnography.

    For some time, products have been designed to sell, and so profit was at the center of the design. Now we see that the best way to be disruptive with new technology is to put the actual user front and center in the design process.

    According to a Gartner survey, many companies are talking about and investing in Big Data, but only about 8% are doing anything transformational with it (Wang, 2013).

    image source: Big Data Dashboard Dizziness — A Trendy Tool with Little Utilization

    While a trained analyst can uncover useful insights about a population using Big Data, if you really want to know what’s going on, you ask the locals. Harvard marketing professor Theodore Levitt once declared, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!” This was a brilliant assessment from a marketing standpoint at the time and was much lauded. However, in his seminal work, The Design of Everyday Things, Don Norman took it a couple of steps further when he countered:

    “Once you realize that they don’t really want the drill, you realize that they don’t really want the hole either, they want to install their bookshelves. Why not develop bookshelves that don’t require holes? Or perhaps books that don’t require bookshelves? (i.e. eBooks)” (Norman, 2013)

    Norman, D. (2013). The Design of Everyday Things. Philadelphia: Basic Books.

    Wang, T. (2013, May 13). Why Big Data Needs Thick Data. Retrieved from ethnography matters: https://medium.com/ethnography-matters/why-big-data-needs-thick-data-b4b3e75e3d7


    Originally published at mtthwx.com/ on March 21, 2019.