What is your approach to testing different messaging and your criteria for success?
As a rule of thumb, you should always test your messaging. The level of rigor and the criteria for success should scale with how high you are in the messaging hierarchy.
For this answer, I’ll use a pretty simple messaging hierarchy:
- Company-level: Who are you and what do you do?
- Use case-level: What problems do you solve?
- Buyer-level: How do you deliver value to a specific buyer (e.g. CMO)?
- Capability-level: How do you enable the buyer to get that value?
In every case, you will never regret testing the message with a customer on a call or email. You will always learn something, and if they love it you will gain conviction and ammo to defend that messaging.
- Company-level
- Rigor: Go super high. Use qualitative and quantitative research. Test with customers, analysts, and prospects. Spend the time to align with a broad set of internal stakeholders in GTM and Product.
- Success: Strong positive signals from user research. Alignment to company strategy and differentiation. If your messaging is a key input when the company makes big decisions, it's a huge success.
- Use case-level/Buyer-level
- Rigor: High. Quantitative can help you narrow the use case list and qualitative can drive the clarity you need to understand the ‘why.’ It’s most critical that this aligns with your executive team and GTM teams (these need to be the use cases that they actually see in the market).
- Success: Strong positive signals in marketing and sales channels. Is your pitch resonating with people? Is the content focused on these use cases driving conversions? If your messaging helps people connect with your solution faster, it's a winner.
- Capability-level
- Rigor: Medium (can be low for small features). You really are aiming for clear and concise messaging at this level. Don’t go for flourish, go for clarity.
- Success: People just get it. If a capability is well explained, your messaging becomes the de facto way that sales, product, and marketing describe it.
This really depends on the channel. For websites and demand gen, you can always use A/B testing to determine what works. For messaging further down the funnel, it helps to track how people interact with different content on your website; further down still are customer presentations and demo scripts. Here it's valuable to have a good relationship with Sales so you can ask for constant feedback on what is resonating with customers and what isn't. Keeping track of win/loss rates can also help you gauge effectiveness.
Lastly, for new features or products aimed at current customers, track both attendance at launch webinars and adoption rates to determine whether messaging about value, impact, and performance is resonating with customers.
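If you track win/loss and adoption by message, even a rough tally surfaces which story is landing. Below is a minimal sketch, assuming a hypothetical export where each opportunity or account is tagged with the messaging variant it saw (the field names and variant labels are made up for illustration):

```python
from collections import defaultdict

# Hypothetical CRM/product-analytics export: one row per opportunity or account,
# tagged with the messaging variant the prospect saw. Field names are illustrative.
records = [
    {"variant": "roi-led", "stage": "closed_won", "adopted_feature": True},
    {"variant": "roi-led", "stage": "closed_lost", "adopted_feature": False},
    {"variant": "workflow-led", "stage": "closed_won", "adopted_feature": True},
    {"variant": "workflow-led", "stage": "closed_won", "adopted_feature": False},
]

wins, adopters, totals = defaultdict(int), defaultdict(int), defaultdict(int)
for r in records:
    totals[r["variant"]] += 1
    wins[r["variant"]] += r["stage"] == "closed_won"
    adopters[r["variant"]] += r["adopted_feature"]

for variant in totals:
    print(
        f"{variant}: win rate {wins[variant] / totals[variant]:.0%}, "
        f"adoption rate {adopters[variant] / totals[variant]:.0%}"
    )
```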
We use various external forums for testing messaging. We do a lot of informal conversations with customers, but we also have formal customer and product advisory boards where we preview messaging. I also like to get analyst feedback before launching anything. These forums usually provide useful insight that improves the messaging substantially.
There are both qualitative and quantitative ways to test messaging, and in my opinion both are equally important in the long run.
The qualitative approach involves directly validating messaging with your target audience by:
1. Joining sales calls, pitching the product yourself, and seeing how they respond.
2. Interviewing your sales and customer success team to understand which aspects of the messaging you've drafted for them are resonating.
3. Listening to recorded customer conversations. At Modern Treasury, we're huge fans of Gong. I spend about 2 hours every week listening to Gong calls.
4. In-person events are a great place to test messaging. Some of the best signals I've received in the past have come from conversations at events.
5. Trade publications, newsletters, and other industry media that your target audience consumes are great places to find new messaging ideas!
The quantitative approach involves:
1. Buying LinkedIn or display ads targeted at the roles that make up your ICP and running different messages against the same CTA to measure relative performance. This can get expensive, though, because you need to reach a large enough audience to achieve statistical significance.
2. A/B testing copy on your website. Make sure you can get enough unique views over your test period to achieve significance (see the sketch below).
3. Similar to 2, email campaigns are also a great way to test different copy.
If you're at a startup that's just found product-market fit, quantitative testing can be harder to do because your sample sizes might be too small. In that case, I recommend starting with the qualitative approach first!
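To make the "enough volume for significance" point concrete, here's a minimal sketch of a two-proportion z-test you could run on the results of an ad or landing-page copy test. It uses only the standard library, and the conversion counts are hypothetical:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error under H0
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                        # two-sided p-value
    return z, p_value

# Hypothetical result: headline A converted 120 of 4,000 visitors, headline B 155 of 4,000.
z, p = two_proportion_z_test(120, 4000, 155, 4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 suggests the difference probably isn't noise
```

If p stays well above 0.05 at the traffic you can realistically get, that's a sign to lean on the qualitative channels above instead.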
My messaging testing strategy utilizes a combination of quantitative and qualitative methods. I leverage A/B testing frameworks and advanced analytics tools to optimize website experiences and demand gen campaigns. For deeper insights, I conduct user research through surveys and customer interviews, track user interactions with content on our website and social media platforms, and collaborate with sales teams to capture their feedback and analyze win-loss data.
I evaluate success through statistically significant improvements in key metrics such as click-through rate (CTR), conversion rate (CVR), and lead volume. Additionally, I consider positive customer sentiment and strong NPS scores to gain a holistic view of messaging effectiveness.