Let’s say you run campaign X, and your metrics improve by Y. How do I verify what else might be causing Y, and to what extent X actually impacted the metrics?
Isolating campaign impact is tricky, and there's rarely a perfect 1-to-1 correlation between what you did and what moved the needle. Here are some ways to navigate this.
Start by building a clear baseline before your campaign or launch by assessing:
Normal conversion rates
Typical sales cycles
Standard deal sizes and win rates
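That baseline can be a simple aggregate snapshot of historical deal data. A minimal sketch in Python, where the deal records and their fields are hypothetical stand-ins for whatever your CRM exports:

```python
from statistics import mean

# Hypothetical pre-campaign deal records (stand-in for a CRM export).
deals = [
    {"won": True,  "cycle_days": 42, "size": 18_000},
    {"won": False, "cycle_days": 60, "size": 25_000},
    {"won": True,  "cycle_days": 35, "size": 12_000},
    {"won": True,  "cycle_days": 50, "size": 30_000},
]

def baseline(deals):
    """Summarize win rate, typical sales cycle, and average deal size."""
    return {
        "win_rate": sum(d["won"] for d in deals) / len(deals),
        "avg_cycle_days": mean(d["cycle_days"] for d in deals),
        "avg_deal_size": mean(d["size"] for d in deals),
    }

print(baseline(deals))
```

Capture this snapshot for a stable pre-launch window so post-launch numbers have something concrete to be compared against.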
Then look for specific patterns or changes in behavior once you’ve launched. For example, when we created a new value selling framework, we didn't just track overall sales cycles. We compared deals that used the framework versus those that didn't to gauge impact.
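The with/without comparison above can be sketched the same way: tag each deal by whether it used the new asset, then compare the cohorts. A toy illustration (the `used_framework` flag and all numbers are hypothetical):

```python
from statistics import mean

# Hypothetical post-launch deals, tagged by whether reps used the framework.
deals = [
    {"used_framework": True,  "won": True,  "cycle_days": 30},
    {"used_framework": True,  "won": True,  "cycle_days": 35},
    {"used_framework": True,  "won": False, "cycle_days": 55},
    {"used_framework": False, "won": True,  "cycle_days": 48},
    {"used_framework": False, "won": False, "cycle_days": 62},
    {"used_framework": False, "won": False, "cycle_days": 58},
]

def cohort_stats(deals, flag):
    """Win rate and average cycle length for one cohort."""
    cohort = [d for d in deals if d["used_framework"] == flag]
    return {
        "n": len(cohort),
        "win_rate": sum(d["won"] for d in cohort) / len(cohort),
        "avg_cycle_days": mean(d["cycle_days"] for d in cohort),
    }

print("framework:   ", cohort_stats(deals, True))
print("no framework:", cohort_stats(deals, False))
```

With cohorts this small the gap could easily be noise; in practice you'd want a meaningful sample size before reading much into the difference.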
If your win rate or deal velocity jumps right after launching a new sales approach or a sales play focused on a competitor, and Sales is specifically citing those initiatives in their feedback, you've probably found a real connection. Perfect attribution is tough, but consistent patterns help confirm that you’re on the right track.
The simplest answer is to stop running campaign X and see whether Y still increases without it. But let's say you don't want to do that because it's a critical campaign and you don't want to risk losing the improvement in Y. Instead of isolating that variable, you might try one of two things:
Adding more fuel to campaign X to see if it continues to improve your Y output, which could be an indicator of correlation
Or spinning up a campaign adjacent to X (let's call it Z) and seeing whether Z is able to increase Y even further.
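Either way, what you're looking for is a rough dose-response signal: when spend on X ramps, does Y move with it? A toy sketch in Python (all numbers hypothetical; correlation here is a hint, not proof of causation):

```python
from math import sqrt

# Hypothetical weekly data: campaign spend (X) and the metric it may drive (Y).
spend = [10, 12, 15, 18, 22, 25]   # e.g. $k per week as budget ramps up
metric = [40, 43, 44, 49, 55, 58]  # e.g. weekly sign-ups

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(spend, metric)
print(f"spend vs metric correlation: r = {r:.2f}")
```

A strongly positive r as you add fuel (or as Z spins up) supports the connection; a flat or noisy relationship suggests something else is driving Y.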
Either way, I fully endorse and encourage experimentation when it comes to PMM and marketing at large. This is the fun part: seeing which messages work and which don't, iterating, and coming up with new creative campaigns.