Blog

  • What about Personas?

    As we were going over Personas in my Interaction Design course at UMSI, I began seeing some articles on the topic that I wanted to share with the class.

    Kill your Personas — Microsoft Design

    Stop obsessing over user personas — UX Collective

    The discussion we had also correlates with an issue I’m having with the MacLean et al. reading. While I overall found Design Space Analysis highly informative and useful for the design process, I’m hung up on QOC being argument-based. As interdisciplinary as Design Thinking is, when we justify our decisions by arguing for our rationale rather than offering proof, we are in effect making excuses for what we did based on our own internal logic.

    Models are only useful until they aren’t. Models, analogies, metaphors and the like are kind of like stents that force a communication channel open, cramming more information through than that channel could otherwise bear. The experts who develop a model understand its limits and drawbacks better than the person who is introduced to the concept through it. So we need to really hone our instincts so we know when to break our own rules. A good recent example is the information processing model we spent the first half of the semester in 588 learning. Everything we just learned in that class about vision, perception, attention and memory was related through this model. But that’s not what our brains actually look like, so how do we know where the model breaks down? And how many generations removed are we from the experts who developed it?

    Despite what the rationalists think, logic occurs inside the individual. It’s good that we abstract data to create personas, as noted in the readings. But as discussed in the articles above, we tend to ascribe erroneous details to these personas that come from our internal logic rather than the data. Ultimately I think this results in a holistic thinking that’s rather hollow. As Sapolsky notes in his tome on human behavior, rationalism most often means rationalizing away violence as just part of human nature: we aren’t wired for [this], we didn’t evolve for [that]. Neither are we a ‘tabula rasa,’ a clean slate. We are born with an array of biological behavioral propensities that are cultivated through environmental inputs and our reactions to them.

    The Sapir-Whorf hypothesis suggests that the words we use shape our perceptions of the world; we can only think in the terms we know how to think in. When we enter a design process as non-experts, we look to user research for insights that give us a sense of holistic expertise. When we justify decisions by arguing rationale rather than offering proof, we employ rationalism, which essentially holds that whoever wins the argument is right, or at least closer to the truth than those who lost. As they say, history is written by the victors.

    I say all of this because I have a growing concern that the interdisciplinary approach is starting to appear somewhat shallow and self-congratulatory. Like Dr. Malcolm said in Jurassic Park, we were “so preoccupied with whether or not we could, we didn’t stop to think whether or not we should.” Businesses scrutinize every penny and I see a future of tight deadlines and budgetary concerns where we fudge user research and employ our own inner logic to advocate for our own crappy designs while we post inspirationals on Instagram, repeating that saying, “You are not your user.”

    But maybe we should be.

    References:

    “Chapter 5: Structured Findings” in Saffer, D. (2010). Designing for interaction: Creating innovative applications and devices (2nd ed.). Berkeley, CA: New Riders.

    MacLean, A., Young, R. M., Bellotti, V. M. E., & Moran, T. P. (1991). Questions, options, and criteria: Elements of design space analysis. Human-Computer Interaction, 6(3–4), 201–220. (through section 2)

    Case Study: http://vesperapp.co/blog/how-to-make-a-vesper/

    “Chapter 5: Picking the Right Tool” in Warfel, T. Z. (2009). Prototyping: A Practitioner’s Guide. Brooklyn, NY: Rosenfeld Media.

    Chapters 6–11 in Warfel, T. Z. (2009). Prototyping: A Practitioner’s Guide. Brooklyn, NY: Rosenfeld Media.

    GUI Prototyping Tools: http://c2.com/cgi/wiki?GuiPrototypingTools


    Originally published at http://mtthwx.com on November 15, 2019.

  • Resy and OpenTable: a comparative case study

    The goal of this report is threefold: to compare and evaluate two competing websites in terms of human-computer interaction, with a focus on the information processing model, which likens our cognitive processes to the workings of a computer; to propose a new design based on this critique; and to justify why that design is an improvement over both sites (Wickens, Hollands, Banbury & Parasuraman, 2015, pp. 3–5).

    image source: https://dataworks-ed.com/blog/2014/07/the-information-processing-model/

    Compare:

    For comparison, this report will consider two users with individual differences on Jakob Nielsen’s (1993) user cube, as shown below.

    Janet is a cohort 2 Baby Boomer and domain expert with minimal computer expertise (Norman, 2008). She’s made her career in hospitality and marketing, beginning her first restaurant position as a hostess in high school. She was responsible for taking reservations the “old-fashioned” way, by phone.

    The “user cube”. J. Nielsen, Usability Engineering

    Janet worked her way up to restaurant manager by her early 30s, and now in her late 50s, she’s the regional manager of a franchise bar and grill. She’s comfortable enough on the computer to complete her tasks, mostly related to work, but spends little time online. Her task is to book a reservation for her and the eight General Managers in her region to celebrate a great quarter. We can imagine she is looking for a reservation for 10/26/2019 from 6–9 pm at a restaurant that serves alcohol and accommodates vegetarian and gluten-free options.

    Earl is a high school senior, Gen Z, preparing for his first date with his new girlfriend on 10/19/2019 for Sweetest Day. As such, he’s ignorant about the domain (both making reservations and dating) but has relatively extensive computer experience. Earl hopes the website will show him a good recommendation for a romantic evening at a restaurant in a teenager’s price range. As we compare these two websites, consider Janet and Earl and their tasks at hand. For them, how do OpenTable and Resy compare?

    We find Janet more confused by Resy than by OpenTable. As she arrives on the homepage, she understands she can click “Detroit” and “Guests” to select her options. There is a downward-facing caret to suggest a dropdown menu once clicked. As she hovers over these menus, she notes that the cursor turns into a hand, which provides immediate feedback and causes her to begin building a mental model of how the site works.

    She is puzzled by how to select her specific date; the cursor doesn’t change when she hovers over “Today,” and there is no caret to suggest she should click on the word. This is inconsistent with the internal model she is building, since colored words, together with a hand cursor on hover, should signal “clickable.” Here we find a missed opportunity to exploit redundancy, resulting in a design that doesn’t immediately support the maximization of automaticity and unitization (Lee, Wickens, Liu & Boyle, p. 170). And while this also slowed Earl down, his level of computer expertise and habituation from other websites informs his decision to click anyway and see what, if anything, happens (Johnson, p. 5).

    When Janet goes to “View all Detroit Restaurants” via the search menu, the long list of locations appears in no discernible order. The screen is split between the restaurant list and a map marking all the participating restaurants in the area, but with no corresponding information. Even hovering over points on the map yields nothing new. Only by clicking on a point will the user see movement in their peripheral vision as the list on the left moves the selected restaurant to the top of the panel.

    Simultaneously, a pop-out displays the selected restaurant’s information in a box over the map pinpoint the user just clicked. Still, Janet initially misses this pop-out, overshadowed as it is by the movement in her peripheral vision (Ware, pp. 27–35). Additionally, the scroll bar on the far right of the screen is mapped to the restaurant list on the left, with the map separating the two: a clear failure to design for stimulus-response compatibility (Ritter, Baxter & Churchill, 2014).

    By comparison, Janet has a much easier time figuring out how to navigate OpenTable. The center of OpenTable’s homepage is consumed by the main feature, making a reservation. She can immediately see how to select her chosen date, time, and the number of guests. The “Let’s Go” button is easily recognizable as a button, signifying clickability combined with what Saffer refers to as feed-forward; the button’s label tells the user what will happen before clicking the button (Saffer, 2010, p. 133).

    Clicking on the “Let’s Go” button, she is presented with a long list of restaurants as well as a “Map” button and a variety of options chunked on the left side of the screen, creating meaningful sequences that she can select in order to narrow down her search (Lee et al., p. 177).

    However, upon clicking the button and being taken to the next screen, we find a box featured in the center of the screen labeled “Restaurants with Bonus Points.” What are bonus points? On the top right of the box we see a link labeled “About Bonus Points,” but even after clicking it, it is not clear what bonus points are or how they work, as we are taken to a new page with a list of articles to sift through. This disrupts users and distracts them from making a reservation; their working memory now fills with information about bonus points instead (Johnson, pp. 90–94).

    Overall, OpenTable is more consistent in applying the appropriate interactive features to the tasks the user wishes to perform. It offers the map-level view as an option but improves the design by placing the scroll bar directly beside the list of restaurants it controls. Earl’s expertise with computers gives him an edge in that he can eventually figure out both sites, though he was initially confused by the Resy interface and found it less intuitive and harder to navigate, given his model of how website navigation typically works, in line with stimulus-response compatibility (Ritter et al., 2014).

    OpenTable’s design draws the user’s eyes to the center of the screen and keeps them there. It strategically arranges supporting information around the periphery in an easily understandable format, allowing users to quickly perform a visual search that supports pattern building from the bottom up, while their top-down processes reinforce relevant information (Ware, pp. 8–17).

    Resy is arranged to be viewed left to right and top to bottom, but the layout doesn’t guide the user clearly: the eyes scan over a menu of cities to select from, even though the website has already detected the user’s location. The elements for initiating a search and booking a reservation are less distinct from the rest of the page and blend in with the white space across the header (Lee et al., p. 109).

    Viewing all restaurants focuses the eyes on the map, which presents no information beyond an array of pinpoints, while the more relevant information sits around the screen’s periphery.

    How does the user determine what pinpoint they should bother clicking on? If they have clicked on a few points already, how can they tell which points were already clicked? Rather than supporting user recognition of where they’ve already clicked, Resy forces the users to recall it for themselves, which humans tend to struggle with (Johnson, pp. 121–129). Overall, the map is distracting to the user and impedes bottom-up pattern building as more attention is required from top-down processes to scan for relevant information (Ware).

    Design:

    Figure 1 Improved landing page design | “Dinner Reservation” by Rafael Farias Leão is licensed under CC BY 3.0

    Explain:

    In Figure 1, the design brings the user’s attention to focus on their primary task: making a reservation. This is accomplished by bringing all the necessary elements center stage (Esser, 2017). The selections are clearly labeled and contrasted with the surrounding whitespace to allow features to be more easily detected. The stacked positioning of the selection and search boxes improves the speed and accuracy of moving from box to box per Fitts’s Law (Johnson, pp. 187–191).
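    To make the Fitts’s Law point concrete, here is a small sketch using the Shannon formulation of the law. The coefficients are illustrative placeholders, not measured values; in practice a and b are fit empirically per device and user population:

    ```python
    import math

    def fitts_mt(distance, width, a=0.1, b=0.15):
        """Predicted movement time (seconds) via the Shannon formulation:
        MT = a + b * log2(D / W + 1).
        a and b are device/user-specific constants fit from data; the
        defaults here are illustrative placeholders, not empirical values."""
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty

    # Stacking the boxes shrinks the pointer's travel distance,
    # which lowers the predicted movement time between them:
    far_apart = fitts_mt(distance=600, width=40)  # widely spaced controls
    stacked = fitts_mt(distance=60, width=40)     # stacked controls
    assert stacked < far_apart
    ```

    The takeaway matches the design rationale above: for targets of the same size, reducing travel distance between controls reduces predicted acquisition time.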

    This also improves the experience for users who prefer to navigate the website with the keyboard using Tab targeting, and helps keep more of the initial reservation selections from falling outside the user’s focus into the periphery (Johnson, p. 56). This design places “Top Rated” and “Popular Cities” around the periphery of the homepage to support the needs of users like Earl, who are interested in browsing recommendations, without obfuscating the primary task of making a reservation.

    Finally, this design maintains the labeling of the “Let’s Go” button from OpenTable. Still, it increases the size for improved targeting and prominently displays the button within the user’s detection field (Ware, pp. 37–42). We changed the button’s color from red to green to take advantage of the greater contrast between surrounding colors (Johnson, p. 39). This has the added benefit of utilizing the socio-cultural schema in American society between green and the word “Go” (Marcus, 2000).

    Figure 2 Improved search page design | “Restaurant Food Icons” by macrovector_official. This image has been designed using resources from Freepik.com

    Explain:

    The design presented in Figure 2 shifts the center stage contents of Figure 1 to the header of the page while maintaining the relative size of the boxes. The map button is moved next to the city search for more consistent “chunking” during the visual search and pattern building as the user constructs a model of how the page flows (Lee et al., p. 177).

    Graphical icons of popular cuisine options are prominently displayed across the top of the page to immediately draw the user’s attention to cuisine options and allow them to begin refining their search. A scroll bar is placed just underneath the icons to convey that more options are currently off-screen. These icons serve two purposes, as noted by Johnson. In Ch. 7 of Designing with the Mind in Mind, food will quickly get a user’s attention even when they are well-fed (Johnson, p. 93). Since the user is visiting the page to select a restaurant at which, presumably, they will eat, getting them thinking about the food they want sooner rather than later will help match them with their ideal restaurant. But these icons also use graphic images to convey function, as explained in Ch. 9: a user can click the pizza icon, for example, and immediately refine their search to notable restaurants that serve pizza (Johnson, p. 126).

    We also employ numerous data-specific controls that exploit chunking through a visual hierarchy along the left panel to allow users to select their chosen neighborhood, cuisine options, etc. (Moran, 2016). This provides even more structure and allows users to focus more on the relevant information specific to them and their tasks (Johnson, pp. 33–34).

    Conclusion:

    While both Resy and OpenTable provide a similar service, OpenTable offers a better user interface in terms of the information processing model, particularly for older users. Resy’s layout appears to be more focused on a clean aesthetic with a minimalist approach. Still, it falls short in broad usability and appeal for consumers compared to OpenTable. Some interactive features feel awkward because they don’t conform well to stimulus-response compatibility, and the map feature feels cumbersome, causing the user to spend time figuring out how to narrow their selection rather than simply making it.

    OpenTable takes advantage of the center-stage approach to interface design. It provides usability features for a broader range of users, presenting the map and list views as different modes that users can select. Overall, the OpenTable design is superior but has its own issues. Date, time, and guest selections are too spread out on the homepage and cause the user to lose track of some of the information they entered, or missed entering, as their eyes focus on the “Let’s Go” button (Johnson, p. 56). We also see some dark UX in use as OpenTable makes a concerted effort to funnel users to promoted restaurants via “Bonus Points” (Brignall, 2019).

    Our design takes advantage of the center-stage approach to homepage design, utilizing chunking, visual hierarchy, and stimulus-response compatibility to provide an even easier-to-use interface that appeals both to our users as positioned on Nielsen’s user cube and to those in between.

    References:

    Alleydog.com. (2019, 10 09). Information Processing Model. Retrieved from Alleydog.com’s Online Glossary: https://www.alleydog.com/glossary/definition-cit.php?term=Information+Processing+Model

    Brignall, H. (2019, 10 10). What are Dark Patterns? Retrieved from darkpatterns.org: https://www.darkpatterns.org/

    Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2015). Engineering Psychology and Human Performance. London: Taylor and Francis.

    Esser, P. (2017, 10 1). Center Stage — Help the User Focus on What’s Important. Retrieved from Interaction Design Foundation: https://www.interaction-design.org/literature/article/center-stage-help-the-user-focus-on-what-s-important

    Ritter, F. E., Baxter, G. D., & Churchill, E. F. (2014). Foundations for Designing User-Centered Systems. London: Springer.

    Lee, J. D., Wickens, C. D., Liu, Y., & Boyle, L. N. (2017). Designing for People: An Introduction to Human Factors Engineering (3rd ed.). Charleston, SC: CreateSpace.

    Johnson, J. (2014). Designing with the Mind in Mind. Waltham: Elsevier.

    Marcus, A. (2000). International and Intercultural User Interfaces. In C. Stephanidis, User Interfaces for All (p. 56). Mahwah, NJ: Lawrence Erlbaum Associates.

    Moran, K. (2016, 03 20). How Chunking Helps Content Processing. Retrieved from NN/g Nielsen Norman Group: https://www.nngroup.com/articles/chunking/

    Nielsen, J. (1993). Usability Engineering. Cambridge: AP Professional.

    Norman, K. L. (2008). Individual Differences. New York: Cambridge University Press.

    Saffer, D. (2010). Refinement. Berkeley: New Riders.

    Ware, C. (2008). What We Can Easily See. Burlington: Elsevier.

  • Neural Networks for Cultural Transmission

    For a while now, I’ve been mulling over an idea: what if artificial intelligence could develop and transmit its own culture? While AI excels at recognizing patterns and optimizing processes, it’s missing something profoundly human—an algorithm for cultural dynamics. The idea sat on the back burner for years, but after being admitted to UMSI and committing to a UX research track, it feels like the right time to start exploring it in earnest.

    The Seed of the Idea

    Back in my undergrad days at Wayne State, I didn’t even realize there was an anthropologist on campus, Dr. Robert G. Reynolds, working on what he called cultural algorithms. His lab wasn’t in the anthropology department—it was in computer science, tied to engineering. When I stumbled across his work, I was fascinated. His paper, “Cultural Algorithms: Computational Modeling of How Cultures Learn to Solve Problems”, details how cultural algorithms are used to simulate and understand how cultures adapt to challenges.

    It turns out Dr. Reynolds is now a visiting research scientist at the University of Michigan Museum of Anthropological Archaeology. He’s working on developing digital simulations to help the public explore how cultures evolve—a perfect example of blending anthropology, technology, and public engagement.

    My idea is more speculative and rooted in science fiction: to create a kind of cultural algorithm that allows AI to not just simulate human cultures but to develop its own. It’s the concept of an AI with a distinct, evolving cultural identity.

    A Summer of Learning

    When I first came up with this idea, I had no real understanding of the technical challenges it posed. I’ve since started to bridge that gap. Over the summer, I dove into Python basics through Dr. Chuck’s “Python for Everybody” course, a fantastic resource hosted by a UMSI professor. Whether you’re a beginner or someone just curious, I highly recommend it. Even if you copy/paste the code at first, it’s an excellent introduction to programming concepts.

    As I’ve gained more technical literacy, I’ve come to realize that “cultural algorithm” might not be the right term for what I’m envisioning. Instead, I’ve started thinking about neural networks for cultural transmission. Neural networks are AI systems that process inputs and generate outputs by passing information through multiple “hidden layers.” Those hidden layers—where the magic happens—feel like a good analogy for the complexities of cultural dynamics.
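    To make “hidden layers” concrete, here is a minimal sketch of a feedforward network in plain NumPy. The weights are random and untrained; this is purely to illustrate the structure of inputs passing through a hidden layer before producing outputs, not a working model of cultural transmission:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # A common hidden-layer activation: zero out negative values.
        return np.maximum(0.0, x)

    # A tiny feedforward network: 4 inputs -> 8 hidden units -> 3 outputs.
    # Training would tune these weights to a task; here they stay random.
    W1 = rng.normal(size=(4, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 3))
    b2 = np.zeros(3)

    def forward(x):
        hidden = relu(x @ W1 + b1)  # the "hidden layer" transformation
        return hidden @ W2 + b2     # the output layer

    x = rng.normal(size=4)  # one input example
    y = forward(x)          # y has shape (3,)
    ```

    Everything interesting in a trained network happens in how those hidden-layer weights come to encode structure in the data, which is what makes them an appealing (if loose) analogy for the hidden workings of cultural dynamics.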

    The Challenge of Cultural Transmission

    Cultural transmission is a messy, human process. Teach the same lesson to ten students, and you might end up with ten different interpretations. Learning isn’t just about inputs and outputs; it’s about how individuals filter information through their personal experiences, biases, and social contexts.

    This variability is key to what makes culture so rich—and it’s what makes modeling cultural transmission in AI so challenging. If AI could replicate this variability, it might not just mimic culture but participate in it.

    Fortunately, the study of cultural transmission already has a foundation in anthropology and related fields. Researchers are exploring topics like the cultural evolution of communication and the mechanisms of intergenerational knowledge transfer. For example, if one of those ten students misunderstands the lesson, they might refine their understanding by learning from a peer who grasped it more accurately. Could AI replicate this peer-to-peer refinement process?
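    As a toy illustration of that peer-to-peer refinement idea (my own sketch, not a model from the literature): represent each learner’s understanding as a vector, teach with noise, then let each learner move a step toward the peer whose understanding is closest to the lesson:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    lesson = np.array([1.0, 0.0, 1.0])  # the "true" content being taught

    # Ten students each receive a noisy copy of the lesson (imperfect learning).
    students = lesson + rng.normal(scale=0.5, size=(10, 3))

    def refine(students, lesson, step=0.5):
        """Each student moves partway toward the peer whose understanding
        is closest to the lesson (the student who 'got it')."""
        errors = np.linalg.norm(students - lesson, axis=1)
        best_peer = students[errors.argmin()]
        return students + step * (best_peer - students)

    before = np.linalg.norm(students - lesson, axis=1).mean()
    after = np.linalg.norm(refine(students, lesson) - lesson, axis=1).mean()
    assert after < before  # peer refinement reduced the group's average error
    ```

    Real cultural transmission is obviously far messier than nearest-to-truth averaging, but even a toy like this raises the interesting questions: who counts as the “best” peer, and what happens when learners can’t tell?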

    Building the Foundations

    To start exploring this, I’m setting up an environment for developing neural networks using Keras with TensorFlow. I’m not an expert, but the internet is an incredible resource. One series I’m starting with is Tech With Tim’s tutorials.

    My approach is hands-on and iterative: experiment, fail, and learn from those failures. The hardest part will be designing hidden layers that simulate the nuances of cultural variation and transmission. But with a mix of anthropology, programming, and determination, I believe it’s worth trying.

    Why It Matters

    Why bother with something as abstract as cultural transmission in AI? Because it’s about more than just AI. It’s about understanding humanity. By teaching AI to “learn” culture, we could gain new insights into how humans create, share, and adapt knowledge. It’s not about replacing human culture but expanding our understanding of it.

    And who knows? Maybe one day, we’ll create an AI that isn’t just functional but truly cultural—an AI that learns, grows, and connects like we do.

    If you’re intrigued by these intersections of anthropology, AI, and UX, I’d love to hear your thoughts. Let’s explore this frontier together.

  • Critical Issues in Information

    The most critical issues in the field of information seem to stem from the fact that we’re awash in it, information that is. Making sense of this information and making it accessible, or at least useful, to the public can only be accomplished through adaptive technology and the adaptation of that technology through the culture.

    However, both technology and culture are prone to high degrees of variation throughout both time and space.

    To adapt technology to the people intended to use it, developers need good information on user needs, values, and patterns of behavior. With today’s technological consumer base more varied and diverse than ever before, it follows that the field of information requires a workforce that reflects the varied and diverse nature of a truly interconnected planet.

    Additionally, something we need to keep in mind is that Big Data and the innumerable metrics by which to measure and analyze it are creating a faster rate of change than society has ever seen. Our technological and material culture evolves more rapidly than our cultural values or indeed, our biology. Take for example the rate of automation, combined with the Protestant work ethic so ingrained into the moral fabric of the United States, and you can begin to see the core causes of the geopolitical tension regarding industries like manufacturing and energy as well as the conversations and policies surrounding social welfare, unemployment, and the economy.

    If the questions to answer are what people need to improve their lives and how user-centered design can deliver it, then the strategy for answering them must be a shift from the etic (outsider) to the emic (insider) perspective, and an analysis that blends the two. The analysis of Big Data leaves significant gaps that can be filled with “thick data,” or ethnography.

    For some time, products have been designed to sell, and so profit was the center of the design. Now we see that the best way to be disruptive with new technology is to put the actual user front and center in the design process.

    According to a Gartner survey, a lot of companies are talking about and investing in Big Data, but only about 8% can do anything transformational with it (Wang, 2013).

    image source: Big Data Dashboard Dizziness — A Trendy Tool with Little Utilization

    While a trained analyst can uncover useful insights about a population using Big Data, if you really want to know what’s going on you ask the locals. Harvard marketing professor Theodore Levitt once declared, “People don’t want to buy a quarter-inch drill. They want a quarter-inch hole!” This was a brilliant assessment from a marketing standpoint at the time and was much lauded. However, in his seminal work, “Design of Everyday Things,” Don Norman took it a couple steps further when he countered with:

    “Once you realize that they don’t really want the drill, you realize that they don’t really want the hole either, they want to install their bookshelves. Why not develop bookshelves that don’t require holes? Or perhaps books that don’t require bookshelves? (i.e. eBooks)” (Norman, 2013)

    Norman, D. (2013). The Design of Everyday Things. Philadelphia: Basic Books.

    Wang, T. (2013, 5 13). Why Big Data Needs Thick Data. Retrieved from ethnography matters: https://medium.com/ethnography-matters/why-big-data-needs-thick-data-b4b3e75e3d7


    Originally published at mtthwx.com/ on March 21, 2019.

  • Examples of Good and Bad UX/UI in World of Warcraft

    Initially, I was going to just discuss Spotify and Snapchat as examples of good and bad UX. Then it dawned on me to discuss the game World of Warcraft as an example of both.

    The standard UI for the game has a classic feel to it, but it is rather clunky and difficult to use for a game of this interactive complexity.

    [Screenshot: the standard World of Warcraft UI]

    However, the game allows for third-party addons, or mods, which modify the UI to augment gameplay and the overall user experience. I think this is just brilliant. And while mods are increasingly common amongst big online games, I’m not much of a gamer; I’m really just a childhood fan of Lord of the Rings who always wanted to play Dungeons and Dragons but lived on a farm in the boondocks and couldn’t get a group together.

    This actually brings me to another point: as a casual player who isn’t a gamer, I just log in from time to time to scratch an itch, as do many other people. People like me would be completely lost without these addons. So in that sense, they really do improve accessibility, letting us enjoy the game and even be competitive.

    One example is an addon called GTFO (Get The F*** Out). This addon sounds an alarm whenever I’m standing in fire, acid, or something else that damages my character. This happens a lot, and with everything else that is going on at the same time,


    many players will just stand there and either die or become a nuisance to the player(s) charged with healing them.

    Another downside of the standard UI is navigation. I don’t mean navigating through the interface; I mean using the interface to navigate this mind-bogglingly massive digital universe. I say universe because the game takes place on multiple worlds, at different times and in different dimensions, and is ever expanding.

    One of the major components of the gameplay is exploring this universe by completing quests. While the standard UI does provide some tools, such as marking the map and listing quest objectives on the side of the screen as a HUD, or Heads Up Display, it can leave you confused, wandering around as a ghost trying to find your body. So a player who is also a developer created an addon called TomTom that acts as a navigation arrow in the same vein as GPS navigation, pointing the way to your desired destination. You can set your destination by coordinates, by CTRL + Right Click on the map, and so on. It tells you how many “yards” you are from your destination and how long it will take to reach it at your current speed and direction. It also lets you save points on the map so you can navigate back to interesting or important places not otherwise notated.
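    The arithmetic behind that arrow is simple. As a rough sketch (my own illustration, not TomTom’s actual code, which is written in Lua like all WoW addons), distance and ETA from 2-D map coordinates:

    ```python
    import math

    def distance_and_eta(pos, dest, speed):
        """Straight-line distance (yards) and ETA (seconds) to a waypoint,
        given current speed in yards/second. Illustrative only."""
        dx, dy = dest[0] - pos[0], dest[1] - pos[1]
        dist = math.hypot(dx, dy)  # Euclidean distance
        return dist, dist / speed

    dist, eta = distance_and_eta(pos=(0, 0), dest=(300, 400), speed=10)
    # dist == 500.0 yards, eta == 50.0 seconds
    ```

    The real addon also recomputes the arrow’s bearing each frame as you move, but the core of it is just this distance calculation repeated continuously.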

    These are just two examples of literally thousands of addons developed by the players themselves.

    While I find the standard UI rather lacking and indicative of a poor UX overall, I also think the game is brilliant for the control it gives users to modify their entire interface.

    This last example elaborates on my post and demonstrates how players use addons to augment their gameplay. Just for reference, I use just over 100 addons for my basic UI setup, many of which only activate when I am in a certain zone of the game geographically or playing one of the mini games.

    [Screenshot: my annotated World of Warcraft UI, showing my addon setup]

    World of Warcraft addons can be found on various websites. Among the most popular are Twitch, which bought Curse, and Tukui, home of ElvUI.