UX Research · 2024 · UX 60501 - Foundations of UX · Kent State MS UX

Card Sort Analysis:
Methodology in Non-IA Contexts

Card Sorting · Information Architecture · Navigation Taxonomy · Lyssna · Similarity Matrix · UX Audit

The Study

Card sorting is a foundational UX research method for information architecture work. It reveals how users mentally organize content and surfaces the vocabulary they use to describe it. The canonical use case is navigation menu design: give participants content cards, ask them to group similar items, and use the resulting clusters to inform your IA.

This study applied an open card sort to the AmeriCorps navigation taxonomy using Lyssna. I selected 30 terms drawn from AmeriCorps.gov content and recruited 10 participants through Lyssna's panel, all based in the United States. Participants created their own category labels and sorted all 30 cards into groups of their choosing. The study also included a follow-up question: "How would you help your community if you could?"

The goal was to surface how potential volunteers and applicants mentally structure the AmeriCorps mission, programs, and services: what belongs together, and under what conceptual umbrella.

Methodology

The 30 card terms were generated through analysis of AmeriCorps.gov's content and navigation structure: Community, Health, Education, Disaster, Environment, Outreach, Service, Program, Volunteer, Skill, National, Support, Development, Youth, Training, Mentor, Housing, Relief, Initiative, Full-time, Part-time, Veterans, Security, Engagement, Inclusion, Team, Employment, Social, Impact, and Connection.

Because this was an open sort, participants created their own category names. This produced significant lexical variation: 48 unique category labels across 10 participants. Only one label, "Work," appeared as an exact match across multiple participants. That divergence is the point. Open sorts trade clean agreement for honest mental models. The vocabulary participants choose tells you as much as the groupings they create.

I performed a physical card sort first using sticky notes to develop my own baseline taxonomy, then compared those clusters against the Lyssna results. The comparison between my four-category model and the 48 participant-generated labels became the core analytical challenge of the study.

Similarity Matrix Results

Lyssna's similarity matrix revealed where participants agreed most strongly, regardless of the labels they used. The strongest pairings:

  • Full-time, Part-time, and Employment grouped together at 100% agreement. Every participant placed these three cards in the same category.
  • Skill and Education paired at 90% agreement, a surprisingly strong signal given how differently participants labeled their categories.
  • At 80% agreement: Engagement/Support, Health/Housing, Health/Disaster, and Disaster/Environment.
  • The Community cluster (Community, Inclusion, Engagement, Support) showed 60-80% internal similarity, indicating strong conceptual cohesion around social infrastructure.
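
For readers who haven't built one: a card sort similarity matrix is pairwise co-occurrence. For each pair of cards, count the fraction of participants who placed both cards in the same group, whatever they named it. Below is a minimal sketch in Python; the sort data and the similarity() helper are illustrative assumptions, not Lyssna's export format or internal implementation.

    from itertools import combinations

    # Illustrative data: each participant's sort is a dict of
    # {category_label: [cards]}. Labels vary freely, as in an open sort.
    sorts = [
        {"Work": ["Full-time", "Part-time", "Employment"],
         "Learning": ["Skill", "Education", "Training"]},
        {"Jobs": ["Full-time", "Part-time", "Employment"],
         "Growth": ["Skill", "Education"],
         "Prep": ["Training"]},
    ]

    def similarity(card_a, card_b, sorts):
        """Fraction of participants who placed both cards in the same group."""
        together = sum(
            any(card_a in cards and card_b in cards for cards in sort.values())
            for sort in sorts
        )
        return together / len(sorts)

    all_cards = sorted({c for sort in sorts for cards in sort.values() for c in cards})
    for a, b in combinations(all_cards, 2):
        print(f"{a} / {b}: {similarity(a, b, sorts):.0%}")

Note that category labels never enter the calculation: "Work" and "Jobs" count as agreement because the same cards co-occur, which is exactly why the matrix can show 100% agreement underneath 48 divergent labels.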

Three cards were statistical outliers with low, inconsistent similarity scores: Impact, Initiative, and Team. These terms have multiple valid meanings depending on context. "Impact" could refer to organizational outcomes or community effects. "Initiative" could mean a specific program or a personal quality. "Team" could describe something you join or something you build. Ambiguous cards generate noisy data. That finding itself is useful: it flags terminology that would perform poorly in navigation labels.
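
Those outliers can be surfaced programmatically as well: a card that never settles into one cluster has a low average similarity to every other card. A sketch building on the similarity() helper above; the mean_similarity() name and the 0.3 cutoff are illustrative choices, not values from the study or from Lyssna.

    import statistics

    def mean_similarity(card, all_cards, sorts):
        """Average pairwise similarity of one card to every other card."""
        return statistics.mean(
            similarity(card, other, sorts) for other in all_cards if other != card
        )

    # Cards with uniformly low co-occurrence are candidate ambiguous terms.
    CUTOFF = 0.3  # illustrative threshold, not from the study
    flagged = [c for c in all_cards if mean_similarity(c, all_cards, sorts) < CUTOFF]
    print("Candidate ambiguous cards:", flagged)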

What the Clusters Revealed

Synthesizing the similarity matrix with the participant-generated categories, I consolidated the results into four clusters. These emerged from both the quantitative similarity data and thematic analysis of the 48 category labels participants created, as sketched below.
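
The clusters here came from thematic judgment, but a quantitative analogue is worth noting: convert the similarity matrix to distances and cut a hierarchical clustering at a fixed cluster count. A sketch using SciPy on a small illustrative matrix, not the study's data; it works as a sanity check on a manual grouping rather than a replacement for it.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    cards = ["Full-time", "Part-time", "Employment", "Skill", "Education", "Disaster"]
    # sim[i][j] = fraction of participants who co-grouped cards i and j.
    # Values are illustrative stand-ins, not the study's matrix.
    sim = np.array([
        [1.0, 1.0, 1.0, 0.2, 0.1, 0.0],
        [1.0, 1.0, 1.0, 0.2, 0.1, 0.0],
        [1.0, 1.0, 1.0, 0.3, 0.1, 0.0],
        [0.2, 0.2, 0.3, 1.0, 0.9, 0.1],
        [0.1, 0.1, 0.1, 0.9, 1.0, 0.1],
        [0.0, 0.0, 0.0, 0.1, 0.1, 1.0],
    ])

    distance = 1.0 - sim              # high agreement -> small distance
    condensed = squareform(distance)  # condensed vector form for linkage()
    tree = linkage(condensed, method="average")
    # Cut into 3 clusters for these 6 cards; the study used 4 over 30 cards.
    labels = fcluster(tree, t=3, criterion="maxclust")

    for cluster, card in sorted(zip(labels, cards)):
        print(cluster, card)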

What We Help (6 cards)

Environment, Disaster, Veterans, Social, National, Youth: the causes AmeriCorps addresses. Participants consistently separated these from organizational mechanics, treating them as the primary identity layer of the brand. The Disaster/Environment pair showed 80% similarity, and Youth associated with the Demographics meta-category at 50%.

What We Build (6 cards)

Development, Impact, Program, Skill, Initiative, Team: the outputs of service. This cluster captured how participants understood AmeriCorps's value proposition to the volunteer: you develop skills, you build something, you have measurable impact. Skill and Education paired at 90%, the second-highest similarity score in the entire matrix. Impact, Initiative, and Team were the weakest cards here, with low similarity scores and inconsistent placement across participants.

What We Need (12 cards)

Relief, Security, Community, Education, Support, Housing, Health, Training, Engagement, Inclusion, Employment, Connection: the largest cluster, and the most telling. Participants grouped these as the needs of the communities served, not as organizational priorities. Health/Housing paired at 80%. Engagement/Support paired at 80%. The cluster functions as an empathy map of the populations AmeriCorps works with. When I narrowed the 48 participant categories into consolidated groups, the Needs, Community, and Education meta-categories all fed into this cluster.

How We Work (6 cards)

Volunteer, Outreach, Mentor, Service, Full-time, Part-time: the mechanics of participation. Full-time/Part-time/Employment hit 100% similarity, the only perfect agreement in the study. Participants distinguished how AmeriCorps operates from what it addresses or builds, treating process and structure as a separate conceptual layer. The "Work" label was the only exact category match across participants, reinforcing how clearly this cluster stood apart.

What This Reveals About AmeriCorps.gov's IA

The card sort exposed a structural mismatch between how AmeriCorps.gov organizes its content and how users think about it. Participants consistently organized terms around a purpose-first mental model: what does this organization address, what does it build, what do communities need, and how do people participate? That is four conceptual layers, cleanly separated.

AmeriCorps.gov does not reflect this structure. The site mixes program types, causes, and participation mechanics into overlapping navigation paths. A user trying to find disaster relief volunteering has to navigate organizational taxonomy (NCCC, VISTA, State and National) rather than cause-based taxonomy (Disaster, Environment, Veterans). The card sort says users think in causes first and program structures second. The site is built the opposite way.

The ambiguous-card problem also has IA implications. Terms like "Impact," "Initiative," and "Team" scattered across participant categories because they carry no inherent specificity. If these terms appear in navigation labels, they will mean different things to different users. That is not a vocabulary preference; it is a navigational dead end.

Where Navigation Research Ends

Card sorting produced a clear, actionable taxonomy. But while working through AmeriCorps.gov to gather cards and validate the navigation context, a different problem became impossible to ignore: the site's search UX is deeply broken.

  • You cannot search by keyword.
  • Location search is limited to certain metro areas, excluding anyone in rural or smaller metro regions.
  • Search results don't surface where programs are located, a foundational piece of information for anyone deciding whether to apply.
  • The site regularly times out during search.
  • There are two separate search flows, a program matcher and a program directory, with inconsistent results and no clear guidance on which to use.

These aren't edge-case issues. They sit at the critical conversion point: someone interested in serving has found AmeriCorps, decided they want to apply, and is trying to locate a program. Every friction point in that flow is a lost applicant, and these failures almost certainly cause people to abandon their program search before completing it.

The card sort told us how to organize the navigation. The site audit told us that the navigation isn't the problem.

What This Taught Me About Card Sorting

Card sorting is a focused instrument. It answers a specific question well: how should this content be organized? It doesn't tell you whether the navigation is the site's primary usability problem, and it doesn't tell you what happens after the user makes a selection.

The open sort format produced 48 unique categories from 10 participants. That is not a failure of the method. It is the method working correctly: surfacing genuine variation in how people conceptualize a domain. The analytical challenge is collapsing that variation into actionable structure without discarding the signal in the noise. Thematic grouping of participant categories revealed strong conceptual agreement even where exact labels diverged.

Applied to AmeriCorps, the method produced actionable, valid findings about taxonomy. But a practitioner who delivered a navigation recommendation here without flagging the search dysfunction would be handing a client a polished door in front of a collapsing wall. Research methods have scope boundaries. Part of doing them well is knowing when you've found something outside that scope that matters more.

Skills Demonstrated

Open card sort study design · Lyssna (remote research platform) · Similarity matrix analysis · Cluster analysis and taxonomy development · Information architecture · Physical card sort (sticky note method) · UX audit alongside formal research method · Ambiguous-term identification · Research scope awareness · Actionable findings communication
