How does experimentation fit into improving messaging? And if you find something that works in one channel, can you translate it across channels?
I'm a big fan of A/B testing, but it can take a lot of time, and you need to be careful not to change things up-funnel if you want a clean result, which can be paralyzing. The best approach is to get an initial signal on your messaging through other means (like a Message Testing solution), then move on to in-market A/B testing if the study indicates multiple possible winning messages (as a validation exercise). You're also likely to get better segmentation data this way, possibly resulting in unique messages for some of your target audiences.
As a general rule, I think it's good to present a consistent message across channels that target generally the same audience. Your prospects are likely to encounter you in several places during their awareness and consideration journey, and one touchpoint should reinforce the rest. There may be cases where you need to alter your voice to communicate authentically (for example, typical Facebook or LinkedIn ad copy would feel out of place on Reddit or Twitter), but this shouldn't completely alter your core value statement(s) or brand guidelines.
I’m a big believer in experimentation in any marketing activity. And messaging should be included in that. I see a lot of enterprise product marketers get caught up in taking on a huge messaging and positioning project/revamp that typically cuts across multiple teams and stakeholders, which can feel difficult to pull across the line. These projects take time, but this is where experimentation comes in... it’s easy to test messaging in your digital channels and that's a great place to start. I suggest working with your digital team members or an agency to start out with banner ads and landing pages.
- Define what you’re testing, your target audience, and how long you’ll run your experiment
- Ensure you have the proper tools in place to track results. For digital channels, you’d want to work with that colleague mentioned above; the metrics would likely be click-through rate and conversion rate
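The two metrics named above can be tracked with very little tooling. A minimal sketch, with hypothetical event counts standing in for the numbers your ad platform or analytics tool would report:

```python
def rate(numerator: int, denominator: int) -> float:
    """Return a ratio, guarding against division by zero."""
    return numerator / denominator if denominator else 0.0

# Hypothetical counts per message variant; in practice these come
# from your ad platform or landing-page analytics.
variants = {
    "message_a": {"impressions": 10_000, "clicks": 320, "conversions": 24},
    "message_b": {"impressions": 10_000, "clicks": 410, "conversions": 19},
}

for name, counts in variants.items():
    ctr = rate(counts["clicks"], counts["impressions"])        # click-through rate
    cvr = rate(counts["conversions"], counts["clicks"])        # conversion rate
    print(f"{name}: CTR={ctr:.2%}, conversion={cvr:.2%}")
```

Note that the two rates can disagree (here message_b wins on clicks but loses on conversions), which is exactly why you want both in place before the experiment starts.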
This will give you solid data from which you can make a case to use your learnings and improve messaging across other channels. But I wouldn’t stop there. I really like using my sales team as a way to experiment with messaging. Note that this can get tricky if your company uses a slide-sharing tool like Clearslide. I typically tap two reps and equip them with two decks. (The narrative is what’s different here; the product doesn’t change, but how we tee up the landscape, problem statement, and so on does.) I usually have them use their respective decks for their first five calls each, then gather qualitative feedback from them.
One thing to always be mindful of: the channel you’re using will likely dictate how you tweak your messaging. Just because something works in an ad doesn’t mean you should use those exact same words in your sales deck.
Experimentation is a key driver of messaging improvement. It's relatively inexpensive to test messaging through ads, which can get you a quick read on which messages resonate most strongly with the audiences you're targeting. This is a fairly scientific method, but you can also experiment with messaging through a sales team, assuming there is one. More qualitative than quantitative, you can test different messages through different flavors of pitch decks that reps deliver. With tools like Gong and Chorus, you can then listen in to determine how prospects receive the messages, and what type of response reps receive.
I think that experimentation results translate across channels, as long as the audience is the same.
I’m a big believer in experimentation before your messaging hits customers. Typically in the product development and go-to-market planning cycles you have at least two, if not more, opportunities to test different messaging. The first is to sneak it into usability research. If a UI designer is going to run a study on how customers comprehend a product flow, the messaging naturally comes up. When possible, attend the first interview or get detailed notes to see how the customer or prospect is talking about the flow and the words they use to describe it. If the first interviewee doesn’t hit the value prop, give your designer new messaging for the next interview to see if there’s improvement.
The second is to run some light messaging research, be that via a customer panel or even a quick couple of calls with some of your target audience. Try different messaging to see what resonates.
And typically, yes, what works in one channel can work in others, though often with tweaks. In-product messaging is typically heavily word-count constrained, relying more on the UI and visuals. With email, perhaps you have more room; with blogs and collateral, even more.
Experimentation is a critical factor in the success of your story. Because the pace of change is accelerating, user needs and mental models today may greatly differ from just a few months ago. This will impact how successfully your message is being received. What worked a few months back may no longer work today. That’s why you should always be testing, measuring and experimenting.
I like to experiment with messages as early as the research phase, through customer focus groups. The way I would conduct the experiment during this phase is by defining a series of hypotheses based on some internal assumptions or data points. For example, the assumption could be that when travelling for work, Gen Z and Millennials enjoy spending an extra day for leisure. I would then test a few value propositions in customer focus groups that represent these two segments and measure their effectiveness.
Experimentation shouldn’t stop here. Once you have a good value proposition defined from your research, you now want to test different keywords and calls-to-action (CTAs) - basically A/B testing.
You can do this by dividing your audience into three or more test and control groups. For example, let’s assume that your value proposition is that “a business traveler can earn points while travelling for work and redeem them for leisure”. You can now run an experiment and divide your audience into three groups. Group A (33%): “Earn more points when travelling for work - learn more”; Group B (33%): “Higher rewards, best perks - get started”; Group C (33%) is the control group: you don’t target them with any messaging. The results should give you an indication of what worked and what didn’t for that particular channel.
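The three-way split above can be sketched in a few lines. This is a hypothetical illustration (the user IDs and message strings are placeholders): hashing each user ID makes the assignment deterministic, so the same person always lands in the same group, and the groups come out roughly 33/33/33.

```python
import hashlib

GROUPS = ["A", "B", "control"]

# Messages from the example above; the control group gets nothing.
MESSAGES = {
    "A": "Earn more points when travelling for work - learn more",
    "B": "Higher rewards, best perks - get started",
    "control": None,
}

def assign_group(user_id: str) -> str:
    """Deterministically assign a user to one of the three groups."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return GROUPS[int(digest, 16) % len(GROUPS)]

group = assign_group("user-1234")   # stable across runs
message = MESSAGES[group]           # None if the user is in the control group
```

Hashing rather than random assignment matters here: if a user is re-targeted later in the campaign, they must see the same variant, or the results for that channel are contaminated.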
This does not mean that your results can be easily translated to all channels. You will need to run simultaneous experiments on all channels. Never assume because one message drove higher engagement in one channel that it will have similar success in other channels.
I am personally guilty of making this assumption in the past. One time, after I received very positive results from an email experiment, I decided to use the same CTA for app push notifications. The results were negative: not only was I unable to replicate the success of the email experiment, but I ended up observing significant churn. While these users might behave the same way and share the same values, the mindset when reading an email vs. getting a push notification is entirely different. You want to be informed when you read emails, and you want to be alerted when you get push notifications.
So, my recommendation is to never assume your message will fit all channels the same way. Experiment and test in all channels even if the channels have a similar function (e.g. Social Media) because the mindset of your users on Facebook will likely differ from their mindset on Twitter or LinkedIn.