What using a UX metric actually looks like

When we talk about design impact, we are really talking about alignment. I hear this consistently in my exchanges with you: you want your work to matter to the business.

An initiative touches product, design, engineering, risk, and leadership, and the question becomes simple: did this work help the business move forward, and how do we know?

In the next year, this is going to get messier with AI:

  1. Agents act.
  2. People intervene.
  3. Decisions blur together.
  4. Ownership gets fuzzy.

So UX metrics give us a fast, consistent way to evaluate our outputs and make sense of this chaos.

What is not confusing is this: people align with people who help the business succeed. If you can consistently show how your work connects to outcomes, people listen. Over time, they trust your judgment.

That is where UX metrics come in.

Start with a concrete example

Let’s look at notification settings in the financial app Robinhood.

Before jumping to metrics, start with a hunch grounded in user need.

For a trader managing notifications, some needs tend to show up quickly. Not all of them need to be explicit, and you may only focus on one or two. But you do need to agree on what matters.

For example:

Notification options should be easy to understand, easy to toggle, and easy to customize without confusion.

In Glare terms, that maps to Usable.

Pick the UX metrics that help you decide

Now we choose metrics that help us understand whether the experience actually supports that need.

For Robinhood’s notification center, the goal is not to measure everything. It is to get a clear picture that helps us decide what to fix, keep, or revisit.

You could choose one metric. You could choose five. The point is usefulness, not coverage.

For this example, we start with Usability.

Usability asks a simple question: can people operate this without friction?

Users should be able to browse, filter, and manage notifications smoothly. The metric tells us whether they can navigate the structure and controls without hesitation or confusion.

Turn the hunch into testable questions

The hunch here is that the full-screen takeover shown when entering notification settings may feel abrupt or unclear. Users might expect a detailed settings page, not a binary prompt.

To test that, we need questions that surface behavior and confidence, not opinions in the abstract.

Examples:

  • How clear was it why this notification prompt appeared at this moment?
  • How realistic or trustworthy did the example notifications feel?
  • How comfortable did you feel choosing between the options on this screen?

These questions help us understand whether users know where they are, what they are being asked to do, and what to do next.
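
If it helps to see those as structured survey items, here is a minimal sketch in Python. The 1-to-5 rating scale and the field names are illustrative assumptions, not Helio’s actual survey format.

```python
# Minimal sketch of the questions as structured rating items.
# The 1-5 scale and field names are assumptions, not Helio's format.
from dataclasses import dataclass

@dataclass
class RatingItem:
    prompt: str
    low_label: str   # what a 1 rating means
    high_label: str  # what a 5 rating means

QUESTIONS = [
    RatingItem("How clear was it why this notification prompt appeared at this moment?",
               "Not at all clear", "Extremely clear"),
    RatingItem("How realistic or trustworthy did the example notifications feel?",
               "Not at all trustworthy", "Extremely trustworthy"),
    RatingItem("How comfortable did you feel choosing between the options on this screen?",
               "Very uncomfortable", "Very comfortable"),
]

def mean_rating(responses: list[int]) -> float:
    """Average a list of 1-5 ratings for one question."""
    return sum(responses) / len(responses)
```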

Measure behavior, not just sentiment

For usability, we used a multi-task click test.

Participants were asked things like:

  • Where would you tap to turn off marketing notifications?
  • Where would you tap to manage push alerts for portfolio updates?

Success is averaged across tasks to produce a usability score.
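
As a rough illustration of that arithmetic, here is a sketch. The response data and the Good/Okay/Poor cutoffs are invented for the example; they are not Helio’s actual scoring model.

```python
# Sketch of averaging click-test success into a usability score.
# Data and band thresholds are invented for illustration.

# 1 = participant tapped the correct target, 0 = they did not.
task_results = {
    "turn off marketing notifications":         [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    "manage push alerts for portfolio updates": [1, 0, 1, 1, 1, 1, 1, 0, 1, 1],
}

# Success rate per task, then the average across tasks.
task_rates = {task: sum(hits) / len(hits) for task, hits in task_results.items()}
usability_score = sum(task_rates.values()) / len(task_rates)

def band(score: float) -> str:
    """Map a score to a qualitative band (cutoffs assumed for illustration)."""
    return "Good" if score >= 0.80 else "Okay" if score >= 0.60 else "Poor"

print(f"Usability: {usability_score:.0%} ({band(usability_score)})")
# -> Usability: 80% (Good)
```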

In this case, Helio was used to collect the data with its built-in Usability metric. The raw survey and UX for Robinhood’s notification center are available if you want to inspect the details.

Findings: What the metric showed

Usability: 84% (Good)

Participants found the layout clear and easy to operate. The structure of push, email, and message notifications felt intuitive. Most users could locate the settings they needed with minimal hesitation.

On its own, that sounds like a win.

Why one metric is not enough

When we look at usability alongside the other metrics, a more nuanced picture emerges. Robinhood built a notification framework that users understand and want to personalize. Comprehension is high. Intent is high.

But lower success and sentiment scores point to friction under the surface. Nested menus, unclear category labels, or a weak hierarchy between push, email, and message alerts may be slowing people down.

This creates a useful tension.

Users feel aligned with the system at a mental level, but small interaction issues get in the way of smooth execution. That tension is exactly where good design decisions live.
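
To make that tension concrete, here is a toy comparison. The metric names follow the discussion above, but every number except the 84% usability score is invented for the example.

```python
# Toy sketch of surfacing the tension between understanding and execution.
# All scores except usability (84%, above) are invented for illustration.
scores = {
    "usability":     0.84,  # from the click test above
    "comprehension": 0.88,  # assumed: users get the notification framework
    "success":       0.66,  # assumed: lower task completion under the surface
    "sentiment":     0.61,  # assumed: lower reported confidence
}

mental_model = (scores["usability"] + scores["comprehension"]) / 2
execution = (scores["success"] + scores["sentiment"]) / 2

# A wide gap between the two is the tension worth designing against.
if mental_model - execution > 0.15:
    print("Signal: users understand the system but stumble on the interactions.")
```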

And that is the real value of a UX metric. It does not declare success or failure. It gives you evidence to decide what to do next.

What does it mean? Design Signals

Users know how to manage notifications, but they are less clear on why they should receive certain alerts and what impact those alerts will have. The interface gives control, but little guidance.

Design Signal: Notification settings should help users decide what deserves attention.

The system is configurable. The meaning is thinner.

Participants expressed their thoughts about the notifications:

“Have to read through details to understand, not easy.”

“It looks like there are two different ways that each item is showing, the toggle switch and then the arrow option to go to another screen. Shouldn’t they all look the same for it to be more uniform and clear?”


What to do instead

This is where a good team designs notification centers around priority and consequence, not just flexibility.

They:

  • Explain why an alert exists before asking users to enable it.
  • Clarify what action, if any, a notification supports.
  • Group alerts by importance, not just category.
  • Reinforce which notifications protect users versus which simply inform.

Instead of asking, “Can users manage notifications?” we should be asking:

  • “Do users understand which alerts matter most?”
  • “Does this setup reduce anxiety or increase it?”
  • “Are we helping users stay informed, or just stay interrupted?”

You can now make more informed recommendations based on the UX metrics you collected!

3 Likes

I think a powerful spin-off here could be aligning power questions for routine design calls. What are the things people hear often, and what do they really mean?

This might be cool for people to get their eyeballs on.

1 Like

I mean, I’m already blaming AI for bugs :squinting_face_with_tongue:

Continuing to explore how to make UX metrics easier to work with in practice. I think we can agree that this can be intimidating. The ramp-up looks overwhelming. But the benefits of this approach create significant returns in decision-making.

So in Glare:


  • We should frame metrics as early warnings, not scorecards.
    They exist to surface weak decisions before speed locks them in. AI is wreaking havoc on decisions.

  • We only keep metrics that can be tested quickly.
    If a metric cannot inform a near-term decision, it does not belong yet. Our intelligent metrics still need to be figured out.

  • We look for early signals, not statistical certainty.
    The goal is to reduce risk with UX metrics before teams over-invest time, code, or politics.

  • We should be sharing evidence to replace opinion-driven alignment.
    UX metrics give teams something concrete to react to instead of debating.

  • We define metrics as constraints that protect quality.
    They set boundaries for movement, rather than slowing teams down.

1 Like

Great stuff here @Bryan - it takes a lot of work to get to the good stuff.

I’d be curious: which of these do folks find most valuable?

  1. We should frame metrics as early warnings, not scorecards.
  2. We only keep metrics that can be tested quickly.
  3. We look for early signals, not statistical certainty.
  4. We should be sharing evidence to replace opinion-driven alignment.
  5. We define metrics as constraints that protect quality.

A lot of strong, dense points here.

“We only keep metrics that can be tested quickly.
If a metric cannot inform a near-term decision, it does not belong yet.”

@Bryan long-term metrics seem inherently important, so when would they start to belong?

2 Likes

I also want to know the answer to this question!

I believe with performance metrics, yes, they might be more interesting to keep around for longer cycles.

But my sense is that it’s the product and business metrics that have more longevity as lagging indicators. There’s no reason you can’t hold on to a sentiment metric for a while, but my guess is that you’re better off creating new leading indicators in more frequent cycles.

1 Like

Yeah, we’ve seen attitudinal metrics change week over week, especially if the business value is in any way tied to current events (which it is for most businesses). Leading UX metrics are still valuable to refer back to as past benchmarks, but agreed that the value in these ‘fast’ metrics is being able to consistently measure them as the work gets done.

1 Like

While Glare helps teams organize decisions around UX metrics using design signals, I thought it would be interesting to see the activities involved in collecting and interpreting UX metrics.

3 Likes

Interesting and useful!

1 Like

This is a great list. I’m finding it hard to add to or take away from any of these pieces. :birthday_cake:

A great visual. I’d ask: where does this break? What is the most fragile piece that teams should be aware of?

My sense is that for many teams, the challenge is figuring out how to collect the data needed to capture a UX metric. But I’d love to hear from others.

I recently spoke with a customer who was stuck trying to test a prototype with their users. They managed to talk to just three people over three weeks.

Most teams don’t realize they should be making smaller asks of more people, paired with rewards that actually match user interests. Instead, they often do the opposite: long, moderated sessions with tiny sample sizes, all centered on the team’s problems, not the customer’s.

Ya, I agree. When you look at all the pieces that need to be in alignment to get that impactful UX metric, your org has to be in lockstep. I’d be curious, too, what people feel might be the most challenging piece, based on their experience.