AI / UX Research · 2025 · UX 60521 - AI in UX/HCI Design · Kent State MS UX · Team Project

Health Translator:
AI That Earns Its Place

Primary Research · Interviews · Surveys · Trust-Centered Design · Information Architecture · Multimodal Interaction · Plain Language

The Problem

Medical records and patient portal content are written at a graduate reading level. The average American adult reads at an 8th-grade level. That gap isn't a technicality - it means that millions of patients receive test results, diagnosis summaries, and care instructions they genuinely cannot parse. They log into portals, see clinical language, and either guess at meaning, give up, or wait to ask a doctor at a follow-up that may be weeks away.
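For context on how those levels are measured: reading grade is commonly estimated with formulas such as Flesch-Kincaid, which score text on sentence length and syllable density. A minimal sketch - the syllable counter here is a crude heuristic for illustration, not a clinical-grade implementation:

```ts
// Rough Flesch-Kincaid grade-level estimate. The syllable counter is a
// vowel-group heuristic - fine for illustration, not production use.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return groups ? groups.length : 1;
}

function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.split(/\s+/).filter(Boolean);
  if (words.length === 0) return 0;
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  // Standard Flesch-Kincaid grade-level formula.
  return 0.39 * (words.length / sentences)
       + 11.8 * (syllables / words.length)
       - 15.59;
}

// "Patient exhibits bilateral lower-extremity edema" scores several
// grades above "both of your legs are swollen" - the same fact,
// written at very different reading levels.
```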

Health literacy is a social determinant of health. Poor health literacy correlates with worse outcomes, lower medication adherence, and higher rates of preventable hospitalization. The portal - the technology that was supposed to democratize access to personal health information - often makes this worse, not better. It moves the record closer while keeping it just as opaque.

Our team set out to design an AI-powered plain-language translation layer for healthcare portals: something that converts clinical language to accessible language on demand, meets patients where their actual literacy level is, and doesn't require them to do anything except ask.

The Research

We grounded this project in primary research - interviews and surveys with patients and healthcare users - because the design space here is genuinely different from what the literature alone would suggest. People don't talk to researchers the same way they talk to their doctors. We wanted to hear directly what was frustrating, what was confusing, and - most importantly - what would actually feel safe to use.

What came back from the research was clarifying. Participants weren't confused about what they didn't understand. They were frustrated that understanding felt inaccessible. The distinction matters. They knew the language was technical. They didn't lack intelligence - they lacked a key. When given one, they used it.

We also heard consistent anxiety about the portal itself: about what records they might find, about whether they'd be notified before their doctor called, about reading a value without knowing if it was normal. The AI translation concept landed well in research sessions - but only when participants understood what it was doing and felt like they could override or question it. That nuance became the entire design problem.

The Design Challenge: Trust

Healthcare is high stakes. That sounds obvious until you try to design an AI feature for it and realize how thoroughly the stakes change every assumption you might bring from consumer product design.

In low-stakes contexts - music recommendations, search autocomplete, email drafts - AI errors are annoying. In healthcare, an AI that mistranslates a lab result or softens language about a serious finding isn't a bad user experience. It's a potential harm. Participants in our research understood this intuitively. When we probed their skepticism about AI in healthcare, they weren't being irrational or technophobic. They were doing exactly the right thing: calibrating their trust to the stakes of the domain.

That meant the design couldn't lead with capability. It had to lead with honesty. The system needed to be visibly clear about what it was doing, consistently transparent about its limitations, and architecturally committed to one principle above all others: the patient is always in control, and they can always reach a human.

Trust in this context isn't a feature to be added. It's the medium the design swims in. Every interaction pattern, every piece of UI copy, every decision about what the AI says and how it says it is either building or eroding it.

The Human Escalation Decision

One of the most important design decisions in this project was also the simplest: always make the path to a human visible, present, and easy. Not buried in a menu. Not available only when the AI fails. Always there.

This wasn't just an accessibility or safety feature. It was a communication. An always-visible escalation path tells the patient something before they ever need to use it: this system knows it has limits, and it respects yours. That message - sent through structure and presence rather than words - is foundational to the kind of trust we needed to build.

There's a design pattern in healthcare technology that hides human support behind chat flows, FAQ walls, and cost-reduction logic. That pattern may be operationally efficient. It is consistently trust-destroying. We went the other direction: the human option is prominent not because it will be used most of the time, but because its presence changes how the whole system feels to use.

The patient who never clicks the escalation button still benefits from seeing it there. That matters.

What We Built

The design concept centered on a translation layer that sits within the patient portal interface without replacing or obscuring the clinical record. Patients can see the original clinical text and trigger a plain-language translation on demand - it doesn't auto-replace, because some patients want the original and some providers care about patients seeing the actual record. The choice is the patient's.

  • On-demand plain-language translation - converts clinical text to accessible language when requested, without overwriting or hiding the source record
  • Audio reading - translated content can be read aloud, extending access to patients with low literacy or visual impairment, and to anyone who simply prefers listening
  • Persistent human escalation - a visible, always-available path to a care coordinator or support staff member, present in every state of the interface
  • Transparency about AI limitations - the system is explicit that translations are AI-generated, that clinical decisions should involve a provider, and that accuracy cannot be guaranteed for every term
  • Reading level indicator - patients can see what level the plain-language output targets, making the translation itself legible (see the state sketch after this list)
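One way to make those principles concrete is to encode them in the interface's state model. The sketch below is illustrative - every name in it (PanelState, EscalationPath, and so on) is hypothetical, not code from the project - but it shows how the constraints above can be made structural rather than aspirational:

```ts
// Hypothetical state model for the translation panel. The invariants in
// the comments encode the design principles: the source record is never
// replaced, the human escalation path exists in every state, and every
// AI output carries its disclosure and target reading level.

type PanelState =
  | { kind: "original" }                      // clinical text only (default)
  | { kind: "translating" }                   // request in flight
  | { kind: "translated"; translation: Translation }
  | { kind: "error"; message: string };       // AI failed; original still shown

interface Translation {
  plainText: string;
  targetGradeLevel: number;  // surfaced to the patient, e.g. "about 6th grade"
  aiDisclosure: string;      // always rendered with the output, never optional
  audioAvailable: boolean;   // translated text can be read aloud
}

interface EscalationPath {
  label: string;             // e.g. "Talk to a care coordinator"
  alwaysVisible: true;       // type-level nudge: no state may hide it
}

interface TranslationPanel {
  sourceRecord: string;      // the clinical record; present in every state
  state: PanelState;
  escalation: EscalationPath; // not state-dependent: visible before, during,
                              // and after any translation attempt
}
```

Typing alwaysVisible as the literal true is a small move with the same intent as the design decision itself: a developer who wants to hide the escalation path has to change the contract to do it.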

The multimodal approach - text plus audio - wasn't an add-on. Health literacy and auditory preference are not the same thing, but they often intersect. Offering both from the start means the system serves a broader range of users without requiring anyone to self-identify as needing a different format.
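The audio mode is also cheap to prototype in a browser-based portal. A minimal sketch using the standard Web Speech API - the readAloud name and the rate choice are assumptions; real voice and pacing settings would need testing with patients:

```ts
// Read the plain-language translation aloud with the browser's built-in
// speech synthesis. A production version would add voice selection,
// highlighting of the sentence being read, and a visible stop control.
function readAloud(plainText: string): void {
  window.speechSynthesis.cancel();           // stop any previous playback
  const utterance = new SpeechSynthesisUtterance(plainText);
  utterance.rate = 0.9;                      // slightly slower than default
  utterance.onerror = () => {
    // Audio failing should degrade gracefully: the text stays on screen.
  };
  window.speechSynthesis.speak(utterance);
}
```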

What We Learned

Skepticism is a design signal, not an obstacle

Every moment in our research when a participant expressed doubt about AI in healthcare was a design brief. The question wasn't how to overcome that skepticism. It was how to design a system that deserves trust and earns it over time. Those are very different projects, and most AI product design in healthcare mistakes the first for the second.

The gap is linguistic, not cognitive

Users who struggled to understand clinical language were not struggling to understand their health situation. They were struggling with the code the institution had chosen to write in. That reframe is important. A translation tool that treats users as capable adults navigating an unnecessarily opaque system is a different product - with different tone, different copy, different assumptions - than one that treats them as needing help because they're not smart enough. We tried hard to build the former.

Team design surfaces assumptions faster

Working collaboratively on this project meant that assumptions I didn't know I was making got surfaced early. Another team member pushing back on a design decision about when to show the escalation option - and being right - was a better education in the value of diverse perspectives in design than any reading could have been. The same work done solo would have been faster. It wouldn't have been better.

On AI in High-Stakes Domains

This project settled something for me professionally. There's a version of AI product design that chases capability - what's the most impressive thing we can make it do? And there's a version that chases fit - what is the right thing for this system to do in this context for these users, and how do we build that honestly?

In high-stakes domains - healthcare, legal, financial, mental health - capability without fit is dangerous. The pressure to automate, to impress, to reduce cost through AI intervention runs directly against what users in those domains actually need, which is accuracy, transparency, and control. The design decisions that matter most in those contexts are the conservative ones: what the system doesn't do, what it defers, what it makes visible rather than hiding.

I want to keep working in this space. Not because it's easier - it isn't - but because the design problems are the right size and the stakes make the work matter.

Skills Demonstrated

Primary research design (interviews + surveys) · Research synthesis and insight development · Trust-centered design frameworks · AI product design for high-stakes contexts · Information architecture · Plain language content strategy · Multimodal interaction design (text + audio) · Human escalation path design · Ethics of AI in healthcare · Collaborative design and team research