For instance, it's easy to attribute performance to a one-sheeter that was sent in the course of a deal, but I'm having trouble finding ways to measure performance for intangible efforts that improve sales outcomes but aren't easily attributable to revenue.
Pallavi Vanacharla
Head of Marketing, IoT + Sr. Director of Product Marketing, Industries, Twilio | May 4

Not everything we do is measurable and this is especially true for PMMs. 

Unless there is a compelling reason to measure every aspect of sales enablement, I would not suggest you do so.  

But if you must, then for most things a simple survey to the sales org is best. Ask them how useful a specific task was in their sales cycle, and whether you should stop, continue, or improve it. Remember to give them an incentive to complete the survey. 

You can also try other indirect means, like measuring the sales success rate of someone who took a training vs. not, but none of these methods are reliable because a lot goes into sales success. 

Justin Graci
Principal Marketing Manager - Product GTM & Enablement, HubSpot | November 22

Great question.

When it comes to measuring objection handling:

  • Leverage a conversation intelligence tool such as Gong, where you can report on keywords within sales calls and attribute those conversations to deal pipelines.

For training:

  • The best way to measure the impact of a training is to run 'workshops' in addition to a training such as an eLearning. For example, I'd recommend reps take an eLearning that walks through the 'what and the why', then organize a workshop where they put it into practice to understand the 'how'. With the workshop, create a simple matrix scorecard to grade their effectiveness in putting it into practice. This could be a role play, or the rep could record a mock pitch and submit it to their manager to grade.
  • I'd also recommend looking at 'before' and 'after' snapshots within your reporting. What was the run rate before they took the training, and what did it look like 30, 60, 90 days after?
  • In some cases, depending on the training, you'll be able to track things more directly. For example, if you lead a training on 'improving win rates against X competitor by leveraging new objection handling techniques', you can look at deals that involved that competitor and whether you saw a meaningful increase in win rates. 

At the end of the day, not everything is going to have a direct line to tangible results. But there are more and more solutions on the market to help today, and creative ways to think about it. 

Harsha Kalapala
Vice President, Product Marketing, AlertMedia | Formerly TrustRadius, Levelset, Walmart | November 2

Qualitative measures require qualitative assessment. I don’t see a way around spending time listening to calls (at 1.75x speed of course) :)

The best way to measure performance is to select a random sample of sales conversations (enough to be quantitative, 10+) and assess the intangibles on a score sheet: objection handling, trap setting, etc. I would try to make the scoring as objective as possible: not how "ideally" a technique was delivered, but how the prospect reacted to it.

You can learn a lot from winning conversations, but there is also a ton to learn from lost ones. Reducing the incidence of losses can be an effective way to measure performance and course-correct.

Sarah Din
VP of Product Marketing, Quickbase | November 30

The best way is to use sales feedback to show improvement over time. You can do that by surveying the sales team on measures like confidence and tracking how they change. You can also run post-training surveys to capture feedback on the training itself.

Other outcomes you can more directly attribute to sales enablement include shorter sales cycles over time, improved win rates, and improved competitive win rates.

Molly Friederich
Director of Product Marketing | Formerly Twilio, SendGrid | April 27

A mix of qualitative and quantitative data is always the gold standard. 

For training, take time to connect with sellers both before and after trainings to ask what questions they have, what they took away, or what they might still need. Mix up who you go to so you hear from both your most and least engaged teammates. Then, for consistent quantitative data, make simple post-training surveys a standard (required!) part of training so your teams give you consistent feedback and you can learn as you experiment with different formats. 

For call best practices, build opportunities for role playing to create consistency across teammates. It's a great way to give sellers visibility into what's working well so others can model it. When possible, join calls to hear the real deal, and, if you have access to it, use tooling like Gong to scale your visibility into calls. You can search for specific call topics, products, sellers, etc. to zero in on priority motions.