UX Research · 2024–2026 · Kent State MS UX (UX 60502, UX 60503, UX 60541)

Usability Testing
Methods

Moderated Usability Testing · Usability Brief Design · Unmoderated Remote Testing · Loop11 · Test Plan Design · Think-Aloud Protocol · Task Completion Analysis · Mixed Methods

Why This Collection Exists

Usability testing is not one skill. Moderated and unmoderated sessions surface different kinds of data. Writing a usability brief requires a different kind of thinking than running a session. Planning a test for a prototype that does not exist yet is a different problem than evaluating a live product.

These four projects span three courses in Kent State's MS UX program. Each one applied a different usability method to a different product. Together, they demonstrate range across the core skill set that usability work actually requires: facilitating sessions, designing studies, choosing metrics, working with remote testing platforms, and planning evaluations for prototypes still in development.

Moderated Usability Testing: Papa Johns

Course: UX 60541, Evaluation Fundamentals (Spring 2026)

A moderated usability test on papajohns.com with a participant named Alex. The session covered standard e-commerce tasks: finding menu items, signing up for an email list, locating customer service. The most important thing that happened was not what Alex did. It was what I chose not to do.

The Key Moment: Not Correcting the Participant

One task asked Alex to sign up for the Papa Johns email list. The option was visible on screen. Alex did not sign up. I actively resisted the urge to correct him or point him toward the right element. Later in the session, Alex missed the “Customer Service” link in a similar way.

By not correcting the first miss, I was able to identify a consistent pattern in how Alex interprets certain types of UI elements. Had I corrected him early, that pattern would have been obscured. The second miss confirmed it was not random. It was a reliable behavior that pointed to something specific about how this user reads and prioritizes link-style elements on the page.

What This Taught Me About Facilitation

The Handbook of Usability Testing emphasizes mindfulness as a facilitation skill: the ability to avoid subconsciously leading participants through body language, tone, or timing cues. This is a learnable skill that many practitioners do not realize they need. Running this session made the concept concrete. The instinct to help is strong. Resisting it is where the data lives.

Even well-implemented, well-established sites have edge cases that moderated testing can illuminate. Alex approached multiple tasks differently than expected, which reinforced the value of testing from all angles and treating moderated usability as valuable throughout a product's lifecycle, not just at launch.

Usability Brief Design: Chipotle

Course: UX 60541, Evaluation Fundamentals (Spring 2026)

For this project, I ordered from Chipotle.com for the first time and found the experience seamless and well designed. That created an interesting challenge: when a product works well, building a study is harder. The usability brief had to identify what was worth measuring even in the absence of obvious friction.

Reframing Around ROI

The key question I worked through: what matters most to the business? Completion rate maps directly to sales; ease of use alone does not. Even a perfectly usable site that does not convert is not serving its purpose. This reframing shaped the entire study design. The primary metric was not satisfaction or time on task. It was whether the user completed the order.

Mixed-Methods Recommendation

Quantitative: Success Rate Analysis. The measure: did the user complete the task and place an order? Objective, directly quantifiable, and stakeholder-friendly, because it maps straight to revenue.

Qualitative: Think-Aloud Study. Explains why the data looks the way it does. Reveals mental models and friction points invisible to analytics alone.

The recommended approach: have users order a specific combination (quantitative measurement), then run think-aloud sessions with different users (qualitative depth). The quantitative data tells you what is happening. The qualitative data tells you why.
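To make the quantitative half concrete, here is a minimal sketch of how completion-rate data from a study like this might be summarized. The adjusted-Wald interval shown is a common recommendation for the small sample sizes typical of usability testing; the counts are invented for illustration, not results from the Chipotle study.

```python
import math

def completion_rate_ci(successes: int, n: int, z: float = 1.96):
    """Adjusted-Wald confidence interval for a task completion rate.

    Suited to small usability samples: add z^2/2 successes and z^2
    trials before computing the standard Wald interval.
    """
    adj_n = n + z ** 2
    adj_p = (successes + z ** 2 / 2) / adj_n
    margin = z * math.sqrt(adj_p * (1 - adj_p) / adj_n)
    return adj_p, max(0.0, adj_p - margin), min(1.0, adj_p + margin)

# Hypothetical data: 7 of 10 participants placed the order.
rate, lo, hi = completion_rate_ci(7, 10)
print(f"Completion rate ~{rate:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

The wide interval a small sample produces is itself useful in a brief: it tells stakeholders how much certainty the study can actually buy at a given participant count.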

Unmoderated Remote Testing: Loop11

Course: UX 60502, Usability (Fall 2024) and UX 60541, Evaluation Fundamentals (Spring 2026)

An unmoderated usability test designed and executed in Loop11, comparing Apple.com and BestBuy.com. The task: find the 13-inch M2 MacBook Air with 16GB RAM on each site. Success criteria were defined by target URLs containing specific product identifiers for each retailer.

Study Design

The test used objective success criteria: participants either landed on the correct product page or they did not. This removed subjective judgment from task completion assessment and made the data clean. The same product across two different information architectures provided a natural comparison condition.
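The URL-based success rule described above can be sketched as a simple check. The product identifiers below are invented placeholders standing in for the real target-URL fragments configured in Loop11, not the actual study configuration.

```python
# Hypothetical target-URL fragments per retailer (invented placeholders).
TARGET_IDS = {
    "apple.com": ("macbook-air", "13-inch"),  # assumed path fragments
    "bestbuy.com": ("6509650",),              # assumed SKU-style identifier
}

def task_succeeded(final_url: str) -> bool:
    """A participant succeeds only if their final URL contains every
    identifier defined for that retailer's target product page."""
    url = final_url.lower()
    for site, identifiers in TARGET_IDS.items():
        if site in url:
            return all(token in url for token in identifiers)
    return False  # session ended on neither retailer's product page

print(task_succeeded("https://www.apple.com/shop/buy-mac/macbook-air/13-inch-m2"))
```

Because the rule is binary and mechanical, two analysts scoring the same session logs will always agree, which is what makes the completion data "clean."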

The Familiarity Bias Lesson

I was highly familiar with both Apple.com and BestBuy.com before designing this test. Running it revealed that average users do not navigate these sites the way I expected. The gap between expert familiarity and actual user behavior is exactly what unmoderated testing is designed to surface: real navigation patterns from real users, uninfluenced by a moderator's presence or an expert's assumptions about how a site “should” be navigated.

Unmoderated Testing at Scale

The Module 7 deliverable in UX 60541 extended this work into a substantial unmoderated testing project covering test design, task scenario creation, quantitative metrics (task completion, time on task, error rates), qualitative analysis of participant behavior, and synthesis of findings into recommendations.
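The three quantitative metrics named above roll up from per-session logs in a straightforward way. This is a hedged sketch with invented field names and data, not the Module 7 analysis itself; one deliberate choice shown is reporting time on task as a median over successful attempts, since timing data is right-skewed and failed attempts inflate it.

```python
from statistics import mean, median

# Hypothetical per-participant session records (field names invented).
sessions = [
    {"completed": True,  "seconds": 48.0,  "errors": 0},
    {"completed": True,  "seconds": 95.5,  "errors": 2},
    {"completed": False, "seconds": 180.0, "errors": 4},
    {"completed": True,  "seconds": 62.3,  "errors": 1},
]

def summarize(records):
    """Roll session logs up into the three headline metrics."""
    completion = sum(r["completed"] for r in records) / len(records)
    # Median time on task, successful attempts only.
    times = [r["seconds"] for r in records if r["completed"]]
    return {
        "completion_rate": completion,
        "median_time_on_task": median(times),
        "mean_errors": mean(r["errors"] for r in records),
    }

print(summarize(sessions))
```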

Usability Test Plan: Adobe Express Prototype

Course: UX 60503, Fundamentals of Interaction Design (Spring 2025)

Earlier in UX 60503, I had selected Adobe Express as a redesign target because its iOS app UX was cumbersome and uninspiring despite extensive personal use. I developed paper prototypes covering core tasks: uploading a photo and adding an empty layer, selecting and placing fonts, and adding arcing text. The prototype process led to scrapping unnecessary buttons and iterating through multiple interface refinements.

The final assignment was planning a usability test for that digital prototype. Testing a prototype that does not yet exist as a shipped product requires different thinking than evaluating a live site. The test plan had to account for prototype fidelity limitations, define tasks that were achievable within the prototype's scope, and anticipate where participants might hit the boundaries of what the prototype could simulate.

What I Learned Across Methods

Each method reveals something the others cannot. Running all four across different products and contexts made the distinctions concrete rather than theoretical.

Moderated Testing Reveals Patterns

The Papa Johns session demonstrated that moderated testing's primary value is not task completion data. It is the ability to observe consistent behaviors across tasks in real time. The facilitator's discipline, specifically the decision not to intervene, is what makes those patterns visible.

Usability Briefs Force Strategic Thinking

The Chipotle brief required thinking about what is worth measuring before any data is collected. The shift from “what can we test” to “what matters most to the business” changed the entire study design. That reframing, from usability metrics to business outcomes, is where study design becomes strategic.

Unmoderated Testing Corrects Expert Assumptions

The Loop11 study showed that expert familiarity with a product is a liability when designing tests. Unmoderated testing removes the moderator's influence entirely, which surfaces navigation behaviors that the test designer might never have predicted. The method is strongest when the goal is behavioral data at scale rather than deep qualitative insight.

Prototype Testing Requires Different Constraints

Planning a test for a prototype that does not fully exist yet forces you to think about what the test can and cannot evaluate. The Adobe Express test plan had to work within the boundaries of the prototype's fidelity while still generating useful signal about the redesign's viability.

Skills Demonstrated

Moderated usability session facilitation · Unmoderated remote test design (Loop11) · Usability brief and study design · Think-aloud protocol · Task completion analysis · Success rate measurement · Mixed-methods study design · Test plan writing for digital prototypes · Participant observation without intervention · ROI-oriented metric selection · Comparative site evaluation · Paper prototyping

Original Submissions