Product Strategy · 6 min read

Metrics Theater: Measuring Everything, Learning Nothing

I walked into a product review once. The PM pulled up a dashboard. Forty-seven metrics. Cohort retention, feature adoption, week-over-week growth, DAU trend, NPS by segment, feature-specific funnels, time-to-value curves.

Forty-seven things to look at.

I asked a simple question: "What do you need to do differently based on this data?"

Long pause.

"Um... let me think about it."

That's metrics theater. You've got all the data. You're not learning anything.

The Trap

Measuring things feels like progress. Every metric you add is another way you're getting smarter. Another angle on the business. Another decision point.

Except it's not. It's noise.

Here's what happens: You start with three core metrics. Revenue. Retention. Engagement. You measure them. You get insights. You make decisions. Great.

Then somebody says "but what about feature adoption?" So you add that. Useful. Now it's four metrics.

Then sales wants to know about customer cohort performance. Five metrics. Then engagement by user segment. Six. Then time-to-value. Then funnel analysis. Then—

Now you've got forty-seven. Your dashboard is a work of art. It looks like something a serious company would have. It's definitely not helping you make decisions.

Why More Data Makes You Dumber

There's a phenomenon in statistics called the multiple comparisons problem (p-hacking is its deliberate cousin): run enough analyses and you'll find statistically significant results purely by chance. And you'll believe them.

With 47 metrics, something is always going up. Something is always going down. You can tell a story about anything.
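A quick simulation makes the point. Assume every metric is pure noise with roughly 5% week-over-week variance and no real trend (the specific numbers here are illustrative, not from any real dashboard), then count how often a metric "moves notably":

```python
import random

random.seed(42)

METRICS = 47      # dashboard size from the anecdote
WEEKS = 12        # one quarter of weekly reviews
SIGMA = 0.05      # weekly noise: ~5% standard deviation, no real trend
ALERT = 0.10      # a "notable" move: 10%+ week-over-week (about 2 sigma)

false_alarms = 0
for _ in range(METRICS):
    for _ in range(WEEKS):
        change = random.gauss(0, SIGMA)   # pure noise; nothing actually happened
        if abs(change) > ALERT:
            false_alarms += 1

print(f"Metrics 'moving notably' on pure noise: {false_alarms}")
```

With 47 metrics over 12 weeks, you get a few dozen "notable" movements per quarter out of pure randomness. Every one of them looks like a story.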

Feature adoption is up? Proof the feature is successful. Feature adoption is down? Proof you launched it wrong, so you need a different approach. Either way, you've got data to support your gut instinct.

That's not learning. That's confirmation bias with dashboards.

I watched a team optimize for "time-to-value"—how long it took a new user to take their first action. They got obsessed with the metric. Shaved 45 seconds off. They felt great. Revenue was flat. Retention was flat. Didn't matter. The metric improved, so the feature was a success.

Six months later, they realized they'd been optimizing for the wrong thing. The thing that actually mattered was whether users got to their second action, not their first. But nobody was looking at that metric because it wasn't on the dashboard.

The Inversion

Here's the inversion: your best metric should be what you're willing to make a decision on.

Not every metric. The three to five things where you've said "if this moves, we change direction."

For a B2B product, maybe it's:

- Net revenue retention (NRR)
- Customer acquisition cost (CAC)
- Support ticket volume

That's it. Three numbers. Everything else is secondary.

When NRR drops, you're fixing retention. Period. You're not debating whether feature adoption is correlated. You're not looking at segment-specific metrics to see if it's really a problem. NRR is down. Fix it.

This forces prioritization. CAC went up 15%? You investigate and fix sales efficiency. The rest of the dashboard stays quiet because you're focused.

The discipline isn't having more metrics. It's having fewer and caring about them more.

What To Actually Track

Pick your north star. For most products, it's some variant of:

- Growth (new customers, revenue, usage)
- Retention (how many come back)
- Economics (how much you make vs. spend)

Depending on your business, one of these matters most. Growth companies optimize for growth. Mature products optimize for retention and economics.

Pick that one. Put everything else in service of it.

Then track 3-5 supporting metrics that you actually influence. Not metrics that move when the north star moves—those are lagging indicators, not levers. Track the things you control that drive the metric that matters.

For a SaaS company, maybe:

- North star: NRR
- Levers: feature adoption by power users, onboarding completion rate, time-to-value for enterprise features, support response time

Now you've got a coherent picture. Feature adoption isn't just a vanity metric—it drives retention, which drives NRR.

Everything points somewhere. Everything has a reason.
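One way to make that linkage concrete is to write each metric down with its threshold and its action attached, so a number can't live on the dashboard without a decision rule. This is a sketch, and the metric names and thresholds below are illustrative, not a prescription:

```python
# Decision-linked metrics: every number carries the action you take
# when it crosses its threshold. All names and values are hypothetical.

NORTH_STAR_FLOOR = 1.10  # alert if NRR falls below 110%

LEVERS = {
    # metric                  (threshold, action when breached)
    "onboarding_completion": (0.60, "rework first-run flow"),       # floor
    "power_user_adoption":   (0.40, "revisit feature discoverability"),  # floor
    "support_response_hrs":  (24.0, "staff up support"),            # ceiling
}

def decisions(snapshot: dict) -> list[str]:
    """Return the actions this week's numbers demand. Empty list = stay the course."""
    actions = []
    if snapshot["nrr"] < NORTH_STAR_FLOOR:
        actions.append("all hands on retention")
    for name, (threshold, action) in LEVERS.items():
        # support response time is a ceiling; the rest are floors
        breached = (snapshot[name] > threshold) if name == "support_response_hrs" \
                   else (snapshot[name] < threshold)
        if breached:
            actions.append(action)
    return actions
```

If `decisions()` comes back empty for a metric week after week no matter what the number does, that metric has failed the test: it's not a lever, it's decoration.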

The Hardest Part

Deleting metrics is harder than adding them. Somebody spent months building the feature adoption dashboard. Somebody lobbied for cohort analysis. You're telling them their work doesn't matter.

But it does matter. It just isn't decision-relevant. And if it's not decision-relevant, it's noise, and noise makes you stupid.

The best thing I ever saw was a company delete 80% of their dashboards. They went from 6 dashboards with 50+ metrics total down to 1 dashboard with 4 metrics.

First two weeks: everyone panicked. "How do we see if X is working?"

After a month: nobody cared. They realized 80% of the metrics they were looking at every week didn't change how they spent time.

Three months later: velocity was up. Decision-making was faster. They weren't dithering about what the data meant because there was less data to interpret.

The Question

Before you add another metric to your dashboard, ask: "If this number moves 20% next month, what would I do differently?"

If the answer is "I don't know" or "probably nothing," delete it.

Build toward clarity, not comprehensiveness. Your dashboard should tell a story. Not give you a thousand facts that add up to confusion.

The best teams know their metrics by heart. They don't need a dashboard. They just know: "NRR is 130%, CAC is $12k, and we're converting at 3%." Those three numbers tell them everything.

Measure the things that matter. Delete the rest. Make decisions faster.