Measuring Success Is Easy, Understanding It Is Hard


The Problem: We See the Results, But Not the Cause

In many organisations, we have dashboards full of data, targets agreed upon, and a clear definition of success. We know when we’ve hit our numbers. But there’s a bigger question we often fail to answer:

What actually made the difference?

Far too often, we see a spike in performance but struggle to pinpoint why. Was it:

  • A seasonal shift in demand?
  • A well-timed marketing campaign?
  • A better customer experience driving retention?
  • A competitor making a costly public mistake?
  • A flash sale temporarily boosting numbers?

When businesses aren’t aligned on how to track impact, they can’t say for sure what caused the success—and that’s a huge problem.

If you don’t know why something worked, you can’t repeat it. Even worse, you might start undoing the good without realising it.

The Reality: We’re Always Doing and Undoing Without Knowing

Without a structured way to link actions to outcomes, organisations fall into a frustrating cycle:

  1. We try new initiatives – new features, new campaigns, new optimisations.
  2. Something works, but we don’t know what – we see a spike, but can’t isolate the cause.
  3. We keep adding more changes – without pausing to understand what’s driving impact.
  4. We accidentally undo the good – because we don’t realise which elements were responsible for success.

The result? A constant loop of effort without insight. Teams work hard, but without knowing what’s really making a difference, the business can’t scale its success.

How to Break the Cycle and Truly Understand Impact

To stop this cycle, businesses need to adopt a more disciplined approach to measuring causation, not just correlation. Here’s how:

1. Change One Variable at a Time

When too many changes happen at once, it’s impossible to know which one made the difference. Instead, structure your initiatives so you can isolate impact.

  • Example: Instead of launching three marketing campaigns at once, test one at a time.
  • Example: Instead of overhauling your entire customer journey, tweak a single stage and measure the result.
  • Example: Instead of discounting across all products, test different approaches in controlled segments.

If you don’t control your variables, your data tells you nothing.
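
One lightweight way to keep this discipline honest is to log every initiative with its measurement window and flag any that overlap. Here's a minimal sketch in Python; the initiative names, dates, and the simple overlap check are purely illustrative, not a prescription for how your team should record this.

```python
from datetime import date

# Hypothetical initiative log: each entry is (name, window start, window end).
# Names and dates are invented for illustration only.
initiatives = [
    ("Email campaign refresh", date(2024, 3, 1), date(2024, 3, 21)),
    ("Checkout page redesign", date(2024, 3, 15), date(2024, 4, 5)),
    ("Loyalty discount trial", date(2024, 4, 10), date(2024, 4, 30)),
]

def windows_overlap(a_start, a_end, b_start, b_end):
    # Two windows overlap if neither one ends before the other starts.
    return a_end >= b_start and b_end >= a_start

# If two measurement windows overlap, a spike in that period
# can't be attributed to either initiative on its own.
for i in range(len(initiatives)):
    for j in range(i + 1, len(initiatives)):
        name_a, start_a, end_a = initiatives[i]
        name_b, start_b, end_b = initiatives[j]
        if windows_overlap(start_a, end_a, start_b, end_b):
            print(f"Overlap: '{name_a}' and '{name_b}' share a measurement window")
```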

2. Use A/B Testing as a Habit, Not an Afterthought

Most companies talk about A/B testing but don’t use it systematically. If you want to know what works, you need to test variations constantly, measure results rigorously, and make decisions based on data, not assumptions.

  • Marketing: Test different messages and track engagement.
  • Product: Roll out new features to a subset of users before scaling.
  • Operations: Trial new processes in a single department before full adoption.

A/B testing turns “we think this worked” into “we know this worked.”
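
For the "measure results rigorously" part, a worked example helps. The sketch below runs a standard two-proportion z-test on conversion rates for two variants; the visitor and conversion counts are invented, and in practice you'd fix your sample size and significance threshold before the test starts, not after.

```python
from math import sqrt, erf

# Illustrative numbers only: visitors and conversions for each variant.
visitors_a, conversions_a = 5_000, 400   # control
visitors_b, conversions_b = 5_000, 460   # new message or feature

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis (no real difference).
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))

z = (p_b - p_a) / se
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

With these made-up numbers the difference is significant at the usual 5% level; with a smaller sample or a smaller gap, it often isn't, which is exactly why the check matters.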

3. Establish Control Groups for Everything

Want to know if your new strategy actually moved the needle? Hold back a segment of customers, markets, or teams and compare results.

  • Promotion: Keep a portion of your audience excluded and compare their behaviour.
  • Customer Journey: Leave part of your site or app unchanged to see if the update made a real difference.
  • Customer Service: Keep one region as a baseline when investing in improvements.

If you don’t have a control group, you’re guessing, not measuring.
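
As a rough sketch of the promotion example: hold out part of the audience, then compare average spend between the promoted group and the control. The figures below are invented, and a real comparison would also need a large enough sample and a significance check like the A/B sketch above before you declare victory.

```python
# Minimal sketch: compare a promoted segment against a held-out control.
# Spend figures are invented; real data would come from your own systems.
promoted_spend = [132.0, 88.5, 150.0, 97.0, 210.0, 64.0, 123.5, 178.0]
control_spend  = [118.0, 92.0, 101.5, 85.0, 160.0, 70.0,  99.0, 140.0]

def mean(values):
    return sum(values) / len(values)

baseline = mean(control_spend)    # what happened without the promotion
treated = mean(promoted_spend)    # what happened with it
uplift = (treated - baseline) / baseline

print(f"Control: {baseline:.2f}  Promoted: {treated:.2f}  Uplift: {uplift:.1%}")
```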

4. Align the Organisation Around Cause and Effect

In too many businesses, different departments work in silos, launching initiatives without sharing insights. This creates a fragmented view of what’s driving performance.

  • Example: Marketing might think the ad campaign drove sales—while Product assumes it was a UX change.
  • Example: Sales might credit their new strategy—while Finance attributes the growth to seasonality.

To fix this:

  • Centralise Key Metrics: Everyone should work from the same dataset.
  • Cross-Functional Review Sessions: Discuss results together.
  • Define Success Upfront: Agree on how impact is measured before launching initiatives.

The Bottom Line: Metrics Mean Nothing Without Understanding

Tracking performance is step one. Understanding what’s driving it is step two—and that’s where many organisations fail.

Without a clear way to connect actions to outcomes, businesses waste time, resources, and opportunities. They see success, but they can’t replicate it, scale it, or sustain it. Worse, they risk undoing their own progress simply because they don’t know what really made the difference.

If you want a truly data-driven, high-performing organisation, don’t just measure results—measure impact. Because once you know what works, you can do it again. And again. And again.

Jack James

Like my better-looking co-founder, I’m originally from the UK, but Auckland has been home for nearly a decade, where I've had the pleasure of working across finance, media, telecommunications, government and an international airline. Newly married and planning a future here, I’m all about balance—starting the day with Allpress coffee and ending it with a Waiheke or Hawke’s Bay red. When I’m not working, you’ll find me on the tennis court with my wife, surfing, playing football, or pushing myself in group fitness sessions. And when I finally slow down, I dive into creative writing—whether blog posts or my ongoing pursuit of a sci-fi novel.