Feedback Loops and Metrics That Actually Matter

Vanity Metrics Will Destroy Your Credibility

Not all metrics are created equal. *Vanity metrics* — numbers that look impressive but don't help you make decisions — are one of the most common traps in product management.

How many users do you have? How many page views did you get? How many features did you ship? These numbers look great in a slide deck. They do not tell you whether your product is actually creating value. If a metric only ever goes up and never causes you to change your behavior, it's a vanity metric.

Key Insight: An actionable metric forces a decision. A vanity metric just fills space on a dashboard.

The Physical PM's Feedback Reality

Physical product managers are structurally disadvantaged when it comes to feedback. Their channels are:

  • Market research (pre-launch): Surveys, focus groups, concept testing. Valuable, but based on what people *say* they'll do, not what they actually do.
  • Sales and sell-through data (post-launch): You learn what sold, but not *why*. A drop in units could mean the product is wrong, the price is wrong, or a competitor just launched something better.
  • Customer service data: You learn about what breaks or frustrates users — but only from the minority who bother to complain. Most unhappy customers just leave.
  • Returns and warranty claims: A strong but lagging signal that something was fundamentally wrong.

Physical PMs compensate for this feedback deficit by investing heavily in research *before* they commit to a production run. Win/loss interviews, ethnographic observation, prototype testing — these are your data sources, because behavioral data from the market is too slow and too indirect to guide decisions in real time.

The Digital PM's Feedback Superpower (and Trap)

Digital products generate data constantly. Every user action creates a data point. But it comes with a trap: data abundance without a clear question is just noise.

The most effective digital PMs do three things:

  • Define the question before looking at the data. "Is our onboarding flow working?" is not a question. "What percentage of users who start account creation complete it within 10 minutes, and where do they drop off?" is a question (see the sketch after this list).
  • Build a metrics hierarchy. You need a *North Star metric* (the one number that best represents your product's value to users), a handful of *driver metrics* (the behaviors that drive the North Star), and *guardrail metrics* (ones that tell you if you're causing harm while chasing the main goal).
  • Connect behavioral data to business outcomes. Engagement is not a business outcome. Revenue, retention, and reduced churn are business outcomes.
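
To make the first of these concrete, here is a minimal sketch of the onboarding drop-off question in Python. The event names, timestamps, and the 10-minute window are illustrative assumptions rather than a real schema; in practice the same logic would run as a query against your analytics store.

```python
from datetime import datetime, timedelta

# Hypothetical event log as (user_id, step, timestamp) tuples. The step
# names and the 10-minute completion window are illustrative assumptions.
events = [
    ("u1", "signup_started",   datetime(2024, 5, 1, 9, 0)),
    ("u1", "signup_completed", datetime(2024, 5, 1, 9, 6)),
    ("u2", "signup_started",   datetime(2024, 5, 1, 9, 2)),   # never finished
    ("u3", "signup_started",   datetime(2024, 5, 1, 9, 5)),
    ("u3", "signup_completed", datetime(2024, 5, 1, 9, 40)),  # too slow
]

WINDOW = timedelta(minutes=10)

# Timestamp per user for each funnel step.
started = {u: t for u, step, t in events if step == "signup_started"}
completed = {u: t for u, step, t in events if step == "signup_completed"}

# Users who completed within the window; everyone else is a drop-off.
in_window = [u for u in started
             if u in completed and completed[u] - started[u] <= WINDOW]
dropped = sorted(set(started) - set(in_window))

print(f"Completion within 10 min: {len(in_window) / len(started):.0%}")  # 33%
print(f"Drop-offs to investigate: {dropped}")                            # u2, u3
```

Note how the output forces a decision: a 33% completion rate with named drop-offs points you at specific sessions to investigate, which is exactly what separates this from a vanity number.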

The Two Frameworks Worth Knowing

Pirate Metrics (AARRR) — Created by Dave McClure, this framework maps the lifecycle of a user through your product:

  • Acquisition — How do users find you?
  • Activation — Do they have a great first experience?
  • Retention — Do they come back?
  • Referral — Do they tell others?
  • Revenue — Do they pay?

For an insurance mobile app, your "activation" might be: did the user successfully file a claim or view their policy details in their first session? If only 20% of users who download the app complete activation, that's a critical problem to solve before worrying about retention.
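
As a rough illustration, here is that funnel in code. The counts are invented (the 20% activation rate mirrors the figure above); the point is that each stage is read as a conversion from the stage before it, not as a standalone total.

```python
# Made-up AARRR funnel counts for the insurance app example. Only the 20%
# activation rate matches the figure in the text; everything else is invented.
funnel = [
    ("Acquisition", 10_000),  # app downloads
    ("Activation",   2_000),  # filed a claim or viewed policy, first session
    ("Retention",      900),  # returned within 30 days
    ("Referral",       120),  # shared or invited someone
    ("Revenue",         80),  # purchased an add-on
]

# Report each stage as a percentage of the stage immediately before it.
for (stage, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{stage:<12}{count:>7}  ({count / prev:.0%} of previous stage)")
```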

HEART Metrics — Developed at Google, HEART measures:

  • Happiness — User satisfaction (often via survey or NPS)
  • Engagement — How much are users interacting with the product?
  • Adoption — Are new users starting to use the core features?
  • Retention — Are existing users continuing to use the product?
  • Task Success — Can users complete their key tasks efficiently?

HEART is particularly useful when you're measuring a specific feature or product area, not just the overall product.
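
One lightweight way to apply HEART to a feature is a scorecard that pairs each dimension with a concrete signal and a target. The feature, signals, targets, and observed values below are illustrative assumptions for an insurance app's policy-details screen, not definitions from Google.

```python
# A hypothetical HEART scorecard for a single feature (a policy-details
# screen). Signals, targets, and observed values are all invented.
heart = {
    "Happiness":    {"signal": "post-task CSAT (1-5)",           "target": 4.2},
    "Engagement":   {"signal": "screen views per user per week", "target": 3.0},
    "Adoption":     {"signal": "share of new users opening it",  "target": 0.60},
    "Retention":    {"signal": "share returning within 28 days", "target": 0.40},
    "Task Success": {"signal": "share of lookups done in <30 s", "target": 0.85},
}

observed = {"Happiness": 4.4, "Engagement": 2.1, "Adoption": 0.55,
            "Retention": 0.46, "Task Success": 0.91}

# Flag any dimension that is below target.
for dim, spec in heart.items():
    flag = "ok " if observed[dim] >= spec["target"] else "LOW"
    print(f"[{flag}] {dim:<13}{observed[dim]:>5} vs {spec['target']}"
          f"  ({spec['signal']})")
```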

Applying This to a Hybrid Context

If you're managing something like a telematics insurance product, you're operating in both worlds simultaneously. The physical device has physical PM feedback patterns: field returns, installation complaints, hardware failure rates. The app component has digital feedback: session length, feature usage, NPS, crash rates.

Your job is to track both, but to track them differently. Don't try to apply a 2-week sprint cadence to your hardware iteration. Don't apply an 18-month planning horizon to your app features.
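
One simple way to enforce that separation is to make the review cadence explicit in how the metrics are organized. The product, metric names, and periods below are illustrative assumptions, not a prescribed set.

```python
# A sketch of keeping hardware and app metrics on separate review cadences
# for a hypothetical telematics product. All names and periods are invented.
metric_cadences = {
    "quarterly": [   # hardware signals: slow-moving and lagging
        "field return rate",
        "installation complaint rate",
        "hardware failure rate per 1k devices",
    ],
    "weekly": [      # app signals: fast-moving and behavioral
        "crash-free session rate",
        "median session length",
        "trip-logging feature usage",
    ],
}

for cadence, metrics in metric_cadences.items():
    print(f"Review {cadence}:")
    for metric in metrics:
        print(f"  - {metric}")
```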

Your immediate action: List the three most important metrics for your current product. For each one, ask: Does this metric *cause* me to make a decision, or does it just make me feel good? If it's the latter, replace it with something sharper.