I’ve watched product teams build entire analytics dashboards that told them nothing useful. Page views up. DAU stable. Everyone nods in standup. Meanwhile, users can’t complete the one task that matters — and nobody notices because “engagement” is green.

This happens more than you’d think.

At Zendesk, I used Google’s HEART framework to structure UX metrics for trust and compliance surfaces. My instinct was the same as everyone else’s: track engagement. But engagement wasn’t the question. The question was task success — could a user actually complete a compliance action correctly, without getting confused halfway through? Once I reframed around that, the metrics changed. And then the roadmap changed.

What HEART actually measures (pick two)

The HEART framework was created at Google by Kerry Rodden, Hilary Hutchinson, and Xin Fu. It gives you five categories of user-centred metrics, paired with a Goals-Signals-Metrics (GSM) process for deciding which ones matter. The full methodology is on heartframework.com.

| Category | What It Measures | Example Signals |
| --- | --- | --- |
| Happiness | User attitudes, satisfaction, perceived ease of use | NPS, satisfaction surveys, CSAT |
| Engagement | Depth of involvement | Visits per user per week, features used per session |
| Adoption | New users picking up the product or feature | New signups, upgrades, first-time feature usage |
| Retention | Existing users coming back | Active users over time, churn rate, renewal rate |
| Task Success | Efficiency and effectiveness | Completion rate, time to complete, error rate |

You don’t need all five. Honestly, trying to track all five is how you end up with a dashboard nobody looks at. Pick one or two that matter for where your product is right now.
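To make the Task Success row concrete: its example signals are all computable from an event log. Here's a minimal sketch in Python, using a hypothetical log of task attempts (the field names are mine, not anything HEART prescribes):

```python
from statistics import median

# Hypothetical event log: one record per task attempt.
events = [
    {"user": "a", "completed": True,  "seconds": 42,  "errors": 0},
    {"user": "b", "completed": False, "seconds": 180, "errors": 2},
    {"user": "c", "completed": True,  "seconds": 65,  "errors": 1},
    {"user": "d", "completed": True,  "seconds": 51,  "errors": 0},
]

def task_success_metrics(events):
    """Completion rate, median time for completed attempts, error rate."""
    completed = [e for e in events if e["completed"]]
    return {
        "completion_rate": len(completed) / len(events),
        "median_seconds": median(e["seconds"] for e in completed),
        "error_rate": sum(e["errors"] > 0 for e in events) / len(events),
    }

print(task_success_metrics(events))
# → {'completion_rate': 0.75, 'median_seconds': 51, 'error_rate': 0.5}
```

The point isn't the arithmetic; it's that you can't write this function until you've decided what counts as "completed" and what counts as an "error" for your task, which is exactly the thinking the dashboard usually skips.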

Work backward from goals, not forward from dashboards

The HEART categories only become useful when you run them through the GSM process:

  1. Goals — What’s the team actually trying to achieve? Not “increase pageviews” — that’s a metric pretending to be a goal. A real goal sounds like: “help users find relevant content faster.”
  2. Signals — What user behaviour would tell you whether you’re getting closer? What can you actually observe happening (or not happening) in the product?
  3. Metrics — Turn those signals into something you can track. Decide what number you’ll watch and — this is the part people skip — what size of change is worth acting on.

The mistake I see constantly: teams jump straight to metrics. They end up with dashboards full of numbers that don’t connect to any decision anyone’s going to make. GSM makes you work backward from the thing you care about. It’s annoying but it works.

The categories that matter change as the product matures

For Meitheal, the most important categories right now are Task Success (can you capture and complete a task without friction?) and Retention (do people keep coming back, or does it join the graveyard of abandoned productivity tools?). Happiness metrics? Not useful yet. Users will forgive rough edges if the core workflow actually works.
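Retention, in the sense used here, is observable from activity timestamps alone. A minimal sketch (hypothetical data) of week-over-week retention: of the users active in one week, what fraction came back the next?

```python
# Hypothetical activity log: user -> set of week numbers they were active in.
activity = {
    "a": {1, 2, 3},
    "b": {1, 3},
    "c": {1},
    "d": {2, 3},
}

def week_over_week_retention(activity, week):
    """Fraction of users active in `week` who return in `week + 1`."""
    active = [u for u, weeks in activity.items() if week in weeks]
    returned = [u for u in active if week + 1 in activity[u]]
    return len(returned) / len(active) if active else 0.0

print(week_over_week_retention(activity, 1))  # only "a" returns in week 2 → 1/3
print(week_over_week_retention(activity, 2))  # "a" and "d" both return → 1.0
```

Note that "b" skipped a week and came back; whether that counts as retained depends on how you define the window, which is a GSM decision, not an instrumentation one.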

A few things I’ve learned the hard way:

Pick two categories. Not five. Not three. Two. If everything is a priority, nothing is. Two HEART categories per product area gives you enough focus for a planning cycle without drowning in data.

Signals before metrics. If you can’t describe the user behaviour you’re looking for — like, actually describe it in a sentence — you’re not ready to build a dashboard. You’ll just end up instrumenting everything and still not knowing what changed.

Task Success is the most underrated category. Teams over-index on Engagement and Adoption because they’re easy to count. Task Success is harder to instrument. It’s also the one that tells you whether the product actually works, which is sort of the whole point.

Revisit quarterly. The right categories shift as the product matures. Early-stage? Adoption. Growth stage? Retention and Happiness. Don’t lock in and forget — I’ve seen teams measure the wrong thing for two quarters because nobody thought to revisit.

When your user base is too small

HEART assumes you’ve got enough users to measure meaningful patterns. For internal tools with 10 users? Just instrument Task Success and skip the rest. Seriously. You’ll learn more from watching someone use the product for 5 minutes than from any dashboard.

Also worth noting: B2B enterprise products often have totally misleading Engagement numbers. Usage is driven by mandate, not preference. Someone using your tool 8 hours a day doesn’t mean they like it — it means their job requires it. In those contexts, Happiness (satisfaction surveys) and Task Success are more honest measures than engagement will ever be.

The framework doesn’t matter nearly as much as the discipline of working backward from goals. If you’re measuring everything, you’re measuring nothing. Pick two. Connect them to behaviours you can actually observe. Build a dashboard that answers one question: is this product helping people do their job, or is it just generating metrics?

Want to discuss how I set up metrics for trust and compliance surfaces? Get in touch.

  • I start with the Signal Scorecard to decide which customer signals to act on
  • AI Production Readiness includes adoption instrumentation that produces the data HEART measures
  • RICE/DRICE decides which features to invest in measuring with HEART in the first place

Further Reading