NASA medical system foundation redesign
overview
The Exploration Medical Capability (ExMC) Element of the Human Research Program faced the challenge of presenting medical system foundation content from the model to non-modelers: people with little to no training in MBSE or the Systems Modeling Language (SysML), and without requiring them to use the modeling tool. I led the redesign in 2020. (See the user research section below.)
What I did
Led UX: Drove clarity on the product's system of features and functionality by meeting regularly with PM and Eng leads to comb through every feature and identify what we would migrate, deprecate, or evolve.
Updated architecture: Created an updated architecture that considered the previous system, ensuring all features worked in concert to deliver clear navigation.
Respected the past: Audited and itemized all of the UI components, their states and usage; then worked with Eng and PM teams to ensure all pieces of the system were accounted for.
Responsive design: The Material grid did not account for the required two-column layout, so I created a responsive grid that scaled down to tablet size and documented how all components responded.


impact
This User Interface (UI) Modeling Design System specifies a standard for all ExMC model reports, including the landing page layout and associated sub-pages with uniform fonts, colors, and icons. For example, this incorporated the standard practice of using blue text for hyperlinks to make the interface more navigable to non-modelers. Applying these standards to the model report produced a more positive user experience.
Initially, ExMC did not impose a common model of organization across all model-based projects. This resulted in difficulty for team members and stakeholders to find similar model elements across these projects as their locations varied. To mitigate this challenge, the ExMC Systems Engineering team adopted a similar model organization for all model-based projects. This commonality makes it easier for ExMC systems engineers to move across projects because each model groups similar artifacts using the same package structure.
The UI Modeling Design System also assists model stakeholders, who review content across multiple models and may not have SysML experience. Providing these stakeholders with easily accessible model reports, with content arrangement like that in a document-based ConOps, eases the reviews and is a key element in the ExMC’s transition to model-based ConOps.
This approach has numerous advantages, including centralizing all relevant project information and documentation; improving ease and reliability of tracing from end-to-end; facilitating a more efficient understanding of project information; enabling quicker decision-making; and improving communication with stakeholders regarding medical system requirements and content.
Additionally, key lessons learned during this transition process were outlined, such as user interaction with the model information, user acceptance of the transformation, and the agile workflow established to develop the model and its integration into the project plan.
A model-based systems engineering journey to developing a concept of operations (ConOps). (Cohen, J., et al. 2022, in 2022 IEEE Aerospace Conference.)



Publications
- Presenting Model-Based Systems Engineering Information to Non-Modelers. (Cohen, J. R., et al. 2021, in 2021 IEEE Aerospace Conference.)
- A model-based systems engineering journey to developing a concept of operations (ConOps). (Cohen, J., et al. 2022, in 2022 IEEE Aerospace Conference.)
user research
Initially, the client was only interested in implementing a new wireframe produced after a prior round of usability testing, and then testing that implementation. However, as the project progressed, I learned that new system models were expected to ramp up significantly. The project's goals therefore shifted toward developing DesignOps and reusable assets in the form of a design system.
These goals included creating more efficient workflows, improving the quality and impact of design outputs, evaluating the interface to comply with usability principles, and assessing how stakeholders use the model interface to surface top pain points.
RESEARCH METHODOLOGY
For this project, I utilized four different research methods to fit the timeline and particular constraints of the project. These methods were design ethnography, RITE (Rapid Iterative Testing and Evaluation), heuristic evaluation, and usability testing.
DESIGN ETHNOGRAPHY
The first month of the project focused on immersing myself in the context of model-based systems engineering. As I was unfamiliar with MBSE practices, I completed a certification in MBSE from the University at Buffalo (SUNY) on Coursera as part of my desk research.
However, due to difficulty in obtaining permission to use the MBSE software, MagicDraw, from NASA IT, I adopted design ethnography to better understand MBSE practices, team composition, and what was required from me in this project. The design ethnography method combines participant observation with participatory action and design research, making it useful in generating insights into users and teams and improving their design operations.
Engaging in such a context comes with its own challenges, such as working with a distributed team of busy, highly technical people who communicate mostly through email and various Microsoft Teams groups. To navigate this, I observed interactions between team members during biweekly sprint-planning and element meetings, as well as weekly standups. I also arranged a number of interviews during this period, selecting participants based on relevance to the representative user profiles, including
- 5 SE/MBSE practitioners
- 6 clinicians (physicians, nurses, pharmacists)
- 1 element scientist.
During the interviews, I asked participants about their review process and gathered insights into their varying information needs, pain points about milestone reviews, and common interaction patterns across user groups. These insights were used to derive several specific questions of interest related to how different users access and use the model.
ANALYSIS AND SYNTHESIS
A thematic analysis was performed on transcriptions of participant responses to extract key themes. The analysis focused on three criteria related to potential weaknesses/serious issues in the user interface:
- What are user groups' primary interests when conducting milestone reviews of medical system foundations?
- What are user groups' primary pain points with regard to milestone reviews?
- How can we promote consistency in the UI among MBSE practitioners?
FINDINGS
Among our top findings were:
- Varying information needs: Different user groups prioritize different information, making it important that all information is easily accessible.
- Single “source of truth”: The model serves as a single source of truth for stakeholders to align on and promote consistency in presentation.
- Overview first, zoom and filter, details on demand: Shneiderman's classic mantra proved the simplest data visualization pattern to satisfy all groups.
From these findings, I derived several specific questions of interest for fine-tuning the interface across user groups using the RITE method. These questions addressed the different needs of clinicians, engineers, and mission management.
RECOMMENDATIONS
As both the researcher and designer, I aimed to apply human-centered design principles through rapid prototyping and evaluation with user data. Knowing that achieving a 100% correct design is impossible, I leaned on the Pareto Principle, also known as the 80/20 rule, which suggests that 20% of the effort generates 80% of the results. In simpler terms, I wanted feedback on my initial design before investing significant time and effort in the wrong direction.
RAPID ITERATIVE TESTING AND EVALUATION
Although I could not access the actual model initially, I had access to a trial version of MagicDraw, which allowed me to develop a designer’s workflow and set up a sandbox to design in. I discovered that the original interface designed by the MBSE lead was built using a content diagram instead of MagicDraw’s UI tools for designing the HTML report.
Therefore, I had to produce step-by-step documentation on unlocking the UI design capabilities within MagicDraw. Initially, I was tasked with implementing an updated design based on a wireframe from a previous contractor.

However, I proposed a simpler design that utilized common interaction patterns to improve usability, such as a document-style interface.
To demonstrate the value of this approach, I backed up my design rationale with user data, which showed reduced errors and task failures over time as changes to the interface were made using the RITE method.

Utilizing Success Criteria Scoring (SCS), I tracked errors, failures, and fixes in the interface.
| Total problems found | Problems receiving a fix | Impact ratio |
| --- | --- | --- |
| 28 | 28 | 100% |
This allowed me to calculate the impact and re-fix ratios, which showed the effectiveness of the RITE approach in finding and fixing problems.
| Total fixes (including "re-fixes") | Changes that needed "re-fixing" | Re-fix ratio |
| --- | --- | --- |
| 34 | 6 | 18% |
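As a minimal sketch, both ratios fall straight out of these counts; the snippet below uses the totals from the tables above, and the function names are illustrative only.

```python
# Sketch: computing the RITE impact and re-fix ratios from raw counts.
# The counts come from the tables above; the function names are illustrative.

def impact_ratio(problems_found: int, problems_fixed: int) -> float:
    """Share of observed problems that received at least one fix."""
    return problems_fixed / problems_found

def refix_ratio(total_fixes: int, refixes: int) -> float:
    """Share of fixes that later needed to be fixed again."""
    return refixes / total_fixes

print(f"Impact ratio: {impact_ratio(28, 28):.0%}")  # 100%
print(f"Re-fix ratio: {refix_ratio(34, 6):.0%}")    # 18%
```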
For example, participants had difficulty navigating to specific documentation within the model, but implementing a Home button and breadcrumb UI pattern helped solve this issue. However, it became clear that the breadcrumb pattern would not be sustainable if engineers creating the interface had to manually recreate it. Therefore, we turned to Shneiderman's mantra: Overview first, zoom and filter, and details on demand.
HEURISTIC EVALUATION
The heuristic evaluation technique is a quick and effective usability method that enables design teams to identify and fix easily observable usability issues, allowing for a more efficient use of time with real users during usability testing. Jakob Nielsen’s 10 Usability Heuristics for UI Design serve as a useful set of general design principles that can be incorporated into good design practice. Cognitive walkthroughs can complement heuristic evaluations by providing evaluators with a user perspective, particularly useful when combined with personas.

When performing a heuristic evaluation, it is typical to use 3-5 evaluators to synthesize findings and recommendations. However, it is important to keep in mind that evaluators may misrate the severity of an issue or identify issues that do not actually exist, making it necessary to follow up with usability testing to validate findings.
| No usability problem | Cosmetic problem | Minor usability problem | Major usability problem | Usability catastrophe |
| --- | --- | --- | --- | --- |
| 0 | 1 | 2 | 3 | 4 |
FINDINGS
In this evaluation, the only remaining violations were limitations of the tool, such as the lack of support to help users recognize, diagnose, and recover from errors, which violated Nielsen's 9th heuristic. It is important to note that being too close to the design may lead to biased opinions. However, this can be remedied by testing with real users to validate that they can actually use the design.
USABILITY TESTING
As my time on the project came to a close, I conducted a round of usability testing on a single, final iteration of the interface using five participants from the same stakeholder groups who had not participated in an earlier iteration.
TECHNICAL PLAN
The testing took place over the course of a week using Microsoft Teams to host and record the sessions. Each session was scheduled for 1.5 hours, with thirty minutes as a buffer for discussion or technical issues.
TOOLS AND INSTRUMENTS
To prepare for the test, I drafted a usability test checklist from the tasks and steps defined in the heuristic evaluation and created a document for participants that outlined these tasks and provided short scenarios for context. During the test, I asked participants to adopt the “think aloud” protocol and gently reminded them of this if they were quiet for a long time.
I used the Success Criteria Scoring (SCS) metric to score each step of the overarching task within the test as either a pass (1), struggle (0), or fail (-1). After completing all the tests, I used this data to calculate the success rate with the formula (S + (P × 0.5)) / O, where S is the number of passes, P is the number of struggles, and O is the total number of scored attempts. To identify where participants had the most trouble with the interface, I calculated each step's differential (the sum of its scores minus the count of its scores).
| Pass | Struggle | Fail |
| --- | --- | --- |
| 1 | 0 | -1 |
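A minimal sketch of these calculations under the scoring scheme above; the step names and scores below are illustrative placeholders, not actual session data.

```python
# Sketch: Success Criteria Scoring (SCS) calculations as described above.
# Each step attempt is scored pass = 1, struggle = 0, fail = -1.
# The step names and scores are illustrative placeholders.

step_scores = {
    "find landing page":  [1, 1, 0, 1, 1],
    "open documentation": [1, 0, 0, 1, -1],
    "trace requirement":  [1, 1, 1, 1, 1],
}

all_scores = [s for scores in step_scores.values() for s in scores]

# Success rate = (S + 0.5 * P) / O, where S = passes, P = struggles,
# and O = total number of scored attempts.
S = all_scores.count(1)
P = all_scores.count(0)
O = len(all_scores)
success_rate = (S + 0.5 * P) / O

# Per-step differential = sum of the step's scores minus their count;
# more negative values flag the steps where participants struggled most.
differentials = {step: sum(scores) - len(scores) for step, scores in step_scores.items()}

print(f"Success rate: {success_rate:.0%}")
for step, diff in sorted(differentials.items(), key=lambda kv: kv[1]):
    print(f"{step}: differential {diff}")
```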
I also used the Single Ease Question (SEQ) and the Usability Metric for User Experience (UMUX-Lite) metrics to gather quantitative data on the ease of use and overall user experience, respectively. The usability testing also collected qualitative data through recording participants’ thoughts and feelings and conducting follow-up questions and debriefs.
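For context, a rough sketch of how these questionnaire scores are typically computed, assuming the standard 7-point SEQ item and the published two-item UMUX-Lite scoring formula; the responses below are placeholder values, not actual participant data.

```python
# Sketch: scoring the SEQ and UMUX-Lite questionnaires.
# Assumes the standard 7-point scales; all responses are placeholders.
from statistics import mean

# SEQ: a single 7-point ease rating collected after each task.
seq_responses = [6, 7, 5, 6, 7]
seq_score = mean(seq_responses)  # usually reported as the mean rating

# UMUX-Lite: two 7-point items (capabilities meet requirements; easy to use),
# rescaled to 0-100 with ((item1 - 1) + (item2 - 1)) / 12 * 100.
umux_lite_responses = [(6, 7), (5, 6), (7, 7), (6, 6), (5, 7)]
umux_lite_scores = [((a - 1) + (b - 1)) / 12 * 100 for a, b in umux_lite_responses]

print(f"Mean SEQ: {seq_score:.1f} / 7")
print(f"Mean UMUX-Lite: {mean(umux_lite_scores):.1f} / 100")
```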
FINDINGS
The usability testing revealed virtually no usability issues with the interface itself, although some minor confusion over jargon persists across user groups.
The only major issue that remained related to the permissions required to use the tool and access the interface and linked documents, which was beyond my control and was part of the constraints of the MagicDraw and Cameo Collaborator software.