AI / UX Research · 2025 · Team Project

Health Translator:
AI That Earns Its Place

Secondary Research · Primary Research · Interviews · Surveys · Trust-Centered Design · Information Architecture · Multimodal Interaction · Plain Language

The Problem

Medical records and patient portal content are written at a graduate reading level. The average American adult reads at an 8th-grade level. That gap is not a technicality. It means that millions of patients receive test results, diagnosis summaries, and care instructions they genuinely cannot parse. They log into portals, see clinical language, and either guess at meaning, give up, or wait to ask a doctor at a follow-up that may be weeks away.
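The size of that gap can be made concrete with a standard readability formula. Below is a minimal sketch, not part of the project's tooling, that estimates Flesch-Kincaid grade level using a crude vowel-group syllable counter; a real product would use a vetted readability library, and the two sample sentences are invented for illustration.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels; close enough for an estimate.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59

clinical = ("Echocardiogram demonstrates moderate mitral regurgitation "
            "with preserved left ventricular ejection fraction.")
plain = ("Your heart test shows one valve is leaking a bit. "
         "Your heart is still pumping well.")

print(f"Clinical: grade {flesch_kincaid_grade(clinical):.1f}")
print(f"Plain:    grade {flesch_kincaid_grade(plain):.1f}")
```

Even with a crude syllable counter, the clinical sentence scores far above a graduate reading level while the plain rewrite lands in the early grades, which is the whole distance the translation layer has to cover.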

Health literacy is a social determinant of health. Poor health literacy correlates with worse outcomes, lower medication adherence, and higher rates of preventable hospitalization. The portal, the technology that was supposed to democratize access to personal health information, often makes this worse, not better. It moves the record closer while keeping it just as opaque.

Our team set out to design an AI-powered plain-language translation layer for healthcare portals: something that converts clinical language to accessible language on demand, meets patients where their actual literacy level is, and does not require them to do anything except ask.

The Landscape

The US patient portal market is projected to grow from $3.5 billion to over $19 billion by 2033, with 90% of hospitals already offering portals. Despite that scale, adoption barriers persist. Our secondary research identified three critical UX gaps: low health literacy drives avoidance due to medical jargon and complex interfaces; the 50+ age group faces low adoption from technical challenges and poor internet access; and chronic illness patients cite physician unresponsiveness and interface complexity as primary barriers.

The regulatory environment is tightening in ways that make this work urgent. The HTI-1 final rule is driving AI integration requirements across healthcare IT, and HIPAA constraints demand that any AI layer maintain strict privacy compliance. Industry leaders like Epic Systems are already moving toward proactive AI-driven workflow automation. Meanwhile, consumer expectations shaped by Apple and Netflix mean patients expect personalized, seamless experiences from their healthcare tools too.

The paradox at the center of all this: 95% of patients prefer immediate test result access through portals, yet the barriers to actually using them remain high. That gap between desire and usability is where this project lives.

The Research

We grounded this project in primary research (interviews and surveys with patients and healthcare users) because the design space here is genuinely different from what the literature suggests. People do not talk to researchers the same way they talk to their doctors. We wanted to hear directly what was frustrating, what was confusing, and what would actually feel safe to use.

What came back from the research was clarifying. Participants were not confused about what they did not understand. They were frustrated that understanding felt inaccessible. The distinction matters. They knew the language was technical. They did not lack intelligence. They lacked a key. When given one, they used it.

We also heard consistent anxiety about the portal itself: about what records they might find, about whether they would be notified before their doctor called, about reading a value without knowing if it was normal. The AI translation concept landed well in research sessions, but only when participants understood what it was doing and felt like they could override or question it. That nuance became the entire design problem.

The Design Challenge: Trust

Healthcare is high stakes. That sounds obvious until you try to design an AI feature for it and realize how thoroughly the stakes change every assumption you might bring from consumer product design.

In low-stakes contexts like music recommendations, search autocomplete, or email drafts, AI errors are annoying. In healthcare, an AI that mistranslates a lab result or softens language about a serious finding is not a bad user experience. It is a potential harm. Participants in our research understood this intuitively. When we probed their skepticism about AI in healthcare, they were not being irrational or technophobic. They were doing exactly the right thing: calibrating their trust to the stakes of the domain.

That meant the design could not lead with capability. It had to lead with honesty. The system needed to be visibly clear about what it was doing, consistently transparent about its limitations, and architecturally committed to one principle above all others: the patient is always in control, and they can always reach a human.

Trust in this context is not a feature to add. It is the medium the design swims in. Every interaction pattern, every piece of UI copy, every decision about what the AI says and how it says it is either building or eroding it.

The Human Escalation Decision

One of the most important design decisions in this project was also the simplest: always make the path to a human visible, present, and easy. Not buried in a menu. Not available only when the AI fails. Always there.

This was not just an accessibility or safety feature. It was a communication. An always-visible escalation path tells the patient something before they ever need to use it: this system knows it has limits, and it respects yours. That message, sent through structure and presence rather than words, is foundational to the kind of trust we needed to build.

There is a design pattern in healthcare technology that hides human support behind chat flows, FAQ walls, and cost-reduction logic. That pattern may be operationally efficient. It is consistently trust-destroying. We went the other direction: the human option is prominent not because it will be used most of the time, but because its presence changes how the whole system feels to use.

A patient who never clicks the escalation button still benefits from seeing it there. That matters.

What We Built

The design concept centered on a translation layer that sits within the patient portal interface without replacing or obscuring the clinical record. Patients can see the original clinical text and trigger a plain-language translation on demand. It does not auto-replace, because some patients want the original and providers care about patients seeing the actual record. The choice is the patient's.

Our six-domain UX strategy guided every design decision: transparency through plain-language explanations of AI reasoning; user control with override capabilities; smart error handling with clear recovery paths; balanced personalization that avoids overwhelming; universal accessibility through WCAG compliance and mobile-first design; and trust and privacy through HIPAA compliance with AI positioned as a helpful assistant, not a replacement.

  • On-demand plain-language translation: converts clinical text to accessible language when requested, without overwriting or hiding the source record
  • Audio reading: translated content can be read aloud, extending access to patients with low literacy, visual impairment, or who simply prefer listening
  • Persistent human escalation: a visible, always-available path to a care coordinator or support staff member, present in every state of the interface
  • Transparency about AI limitations: the system is explicit that translations are AI-generated, that clinical decisions should involve a provider, and that accuracy cannot be guaranteed for every term
  • Reading level indicator: patients can see what level the plain-language output targets, making the translation itself legible
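The "translate on demand, never overwrite" principle can be expressed as a small state model. The sketch below is purely illustrative, with every name invented for this example rather than taken from the project: the clinical source text is never replaced, the translation is additive and optional, and the human escalation path exists in every state of the view.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class RecordView:
    """Hypothetical view model for one portal record entry."""
    clinical_text: str                 # the original record; never replaced
    translation: Optional[str] = None  # filled only when the patient asks
    target_grade_level: int = 8        # reading level the translation targets
    ai_disclaimer: str = ("This plain-language version was generated by AI "
                          "and may miss nuance. Your care team can help.")
    escalation_label: str = "Message a care coordinator"  # present in every state

    def request_translation(self, translate: Callable[[str], str]) -> None:
        # On demand: the patient triggers it, and the source stays intact.
        self.translation = translate(self.clinical_text)

view = RecordView(clinical_text="Mild hepatic steatosis noted on ultrasound.")
view.request_translation(
    lambda t: "The scan shows a small amount of extra fat in your liver."
)
print(view.clinical_text)   # original record still fully visible
print(view.translation)     # plain-language layer added alongside it
```

The design choice the structure encodes is that translation is a layer, not a replacement: deleting the `translation` field returns the view to a valid state, while the clinical record and the escalation path can never be removed.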

The multimodal approach, text plus audio, was not an add-on. Health literacy and auditory preference are not the same thing, but they often intersect. Offering both from the start means the system serves a broader range of users without requiring anyone to self-identify as needing a different format.

What We Learned

Skepticism is a design signal, not an obstacle

Every moment in our research when a participant expressed doubt about AI in healthcare was a design brief. The question was not how to overcome that skepticism. It was how to design a system that, over time, deserves to have that skepticism answered with trust. Those are very different projects, and most AI product design in healthcare mistakes the first for the second.

The gap is linguistic, not cognitive

Users who struggled to understand clinical language were not struggling to understand their health situation. They were struggling with the code the institution had chosen to write in. That reframe is important. A translation tool that treats users as capable adults navigating an unnecessarily opaque system is a different product than one that treats them as needing help because they are not smart enough. Different tone, different copy, different assumptions. We tried hard to build the former.

Team design surfaces assumptions faster

Working collaboratively on this project meant that assumptions I did not know I was making were surfaced early. When another team member pushed back on a design decision about when to show the escalation option, and turned out to be right, it taught me more about the value of diverse perspectives in design than any reading could have. Solo work is faster. Not better.

On AI in High-Stakes Domains

This project clarified something for me professionally. There is a version of AI product design that chases capability: what is the most impressive thing you can make it do? And there is a version that chases fit: what is the right thing for this system to do in this context for these users, and how do we build that honestly?

In high-stakes domains like healthcare, legal, financial, and mental health, capability without fit is dangerous. The pressure to automate, to impress, to reduce cost through AI intervention runs directly against what users in those domains actually need, which is accuracy, transparency, and control. The design decisions that matter most in those contexts are the conservative ones: what the system does not do, what it defers, what it makes visible rather than hiding.

I want to continue working in this space, not because it is easier (it is not), but because the design problems are the right size and the stakes make the work matter.

Skills Demonstrated

Primary research design (interviews + surveys) · Research synthesis and insight development · Trust-centered design frameworks · AI product design for high-stakes contexts · Information architecture · Plain language content strategy · Multimodal interaction design (text + audio) · Human escalation path design · Ethics of AI in healthcare · Collaborative design and team research