What is your process for collecting user feedback?
We collect feedback in various ways. We obviously use Intercom ourselves, so we get user feedback through conversations, responses to our announcement messages, and conversation ratings, for example. We also run NPS surveys using survey apps integrated into our Messenger. Some of our product team recently recorded this podcast about how user feedback informs what we build, which may be of interest!
We also have an amazing research team at Intercom, who will run research interviews, concept testing, or surveys with customers if we're researching something specific or running a beta (you can read more about some of their approaches here). If we're looking to get feedback from non-customers and have a clear hypothesis, we often use usertesting.com. PMMs will often do this to test things like reactions to, and comprehension of, product messaging.
In general, my POV is that any feedback is good feedback and that you should look for it wherever you can get it. That said, some channels are better than others. Here are the main ways I look for feedback from self-serve customers:
- NPS surveys: The responses to this survey can be invaluable for product feedback, general customer sentiment, and identifying areas of friction or confusion. I use the positive responses to identify customers we can ask for a positive review on sites like G2. Customers who leave negative NPS feedback are great candidates for 1:1 conversations to gather further feedback and learn where you can improve.
- Customer surveys: Got a specific question about how your customers feel? Wondering what your most important features are in your customers’ eyes? Wondering how they’d feel about a potential change your team is considering? Send a survey! I love surveying customers and encourage you to set aside some budget for an incentive to increase the response rate. A $20 Amazon gift card is my go-to for small surveys; for larger initiatives, I give away five $100 Amazon gift cards to keep the budget from ballooning.
- Community and social engagement: I feel extremely lucky to have so many active members in the Airtable Community that we can lean on for thoughts, ideas, and feedback. Often, just reading the discussion in the community is enough to give me the insight I need, but if I’ve got a specific question, posting it in the community is a great way to get insight from our most engaged users. The same goes for the customers who engage with Airtable on our social channels; they’re often willing to provide quick feedback when asked.
- User interviews and user experience research: Airtable has an extremely broad user base, so when I really need to get insight from a specific group of users, I’ll look to our user experience research team as partners to find a few key folks in our target audience and get them set up in more formal research sessions.
This will depend on why you are collecting the feedback and what you hope to get out of it.
NPS is certainly a powerful tool to get a quick and consistent read on whether your customers are satisfied with your product. However, NPS surveys are generally only a few questions. You’ll find out what’s not working, but you may not get enough detail on how you should fix it.
In this case, I recommend following up on your NPS findings with an in-depth quantitative survey and qualitative interviews to really figure out how to optimize your product.
I use every resource I can to gather customer and user feedback, both internally and externally. First and foremost, I always prioritize talking directly to customers. Make time for customer conversations every month at a minimum. In addition, I leverage many of the following sources of feedback:
- Outreach to target audience about general needs (not product specific)
- Customer support tickets
- NPS and in-product surveys
- Comments from the sales team
- Regular check-ins with the support team
- Direct outreach to frustrated customers
- Online comments, especially on 3rd party sites
Each of these sources provides valuable, often directional insights. Two or three comments on a recent press release may just be noise, but combined with three lost sales deals and eight support tickets in the last week, a different picture may appear. I review NPS scores on a monthly basis. NPS may not drive our six-month product strategy, but it can highlight areas we need to monitor or validate moving forward. Often we will need to dive deep into the comments to understand the clusters of complaints.
In a previous role, I manually went through hundreds of return tickets to understand what was happening. Once I understood the pain points, we could create a survey with the correct categories. The results played a big role in future product strategy, but we needed the deep dive into the raw feedback before we could create that process. This is why it’s important to have several sources of feedback to validate and triangulate against.
(copying this question from a previous answer)
At Atlassian, we use many methods for understanding customers both qualitatively and quantitatively.
The most standardized, larger-scale tool we use across all of our cloud products is our Happiness Tracking Survey, known as HaTS (developed by Google). Our research team sends weekly emails to employees who subscribe, giving the overall customer satisfaction score and short clips of customer feedback, such as what customers find frustrating about our products or what they like best. This is a helpful way to keep customer feedback top of mind.
For more in-depth research on a particular audience, product, or feature, we use Qualtrics to send wider surveys to specific audiences.
On my new products team at Atlassian, we use the product-market fit score, which asks users "How would you feel if you could no longer use the product?" We usually aim for early products to reach at least 40% of their early adopters saying they would be very disappointed. You can read more about the survey here.
We also use a variety of analytics tools to measure our funnel and in-app engagement such as Segment, Amplitude, Redash, Tableau, etc.
On the qualitative side, we run customer interviews, user testing sessions, focus groups, customer advisory boards, etc.
I mentioned this in a previous answer, but I believe the best way to structure research is to develop hypotheses you are trying to prove or disprove before crafting your survey or research questions. You want to be clear about what you're trying to understand. Typically, you can start with customer interviews, a qualitative approach, develop your "hunch" and hypothesis, and then use wider surveys to validate the hypothesis.
There are a lot of great ways to collect user feedback:
- Focus Groups --> any time you can get a small handful of users (either individually or together) to give some of their time for an extended conversation, take advantage of it! This model works really well when you've got a meaty topic/workflow/problem to work through where you really want to get into the details and opinions.
- Usage Data --> usage data is great because all users vote with their actions. Getting access to how users are interacting with your product and the actions they take or don't take is invaluable.
- Reviews --> if you work on an app-based product you can often get some pretty blunt insights from the reviews that users leave. Monitoring trends here is important and can sometimes identify blind spots.
- Surveys --> surveys to your base of users (or out in the wild if needed) can be a really great way to quantify broader trends and validate existing assumptions.
- A/B Testing --> A/B testing is one of the best ways to collect feedback via user actions. Test UX, messaging, positioning, etc. with multiple versions in a well put together experiment to learn more about what resonates or about user preferences.
- In Product --> in-product surfaces are great places to get feedback. Mechanisms that allow users to provide feedback, leave a rating, or vote with their actions are really useful (and one of the most topical/contextual options).
I spoke about it in another question, but leveraging betas or early access programs is a great setting to collect user feedback with lower stakes than GA.
There are so many ways to collect user feedback these days. Between product surveys, customer interviews, and user testing, there are many different tools to wield. It's not always easy to use them all, let alone find the time to do so. As a product marketer, you're also not always across each of those channels, but my best advice is to do what you can to synthesize their input when building your product strategy, with an eye on extracting the specific feedback you and your product need. Pound for pound, I believe that is one of product marketing's many superpowers.
As I've onboarded myself into different roles over the years, the best way for me to quickly collect product feedback was to go outbound as well as inbound.
Outbound:
This is really my shorthand for saying that you should actively ask for customer feedback. Rolling out product surveys, scheduling customer interviews, or participating in those processes with your sister teams to help frame the questions that will be useful to your GTM strategy will be key. I've often seen survey questions that focus on company attributes vs. product attributes, and maybe that's because most survey work is done by brand teams in marketing. Nothing wrong with that, but those questions don't always help you get to the heart of whether your product is hitting the mark.
My favorite question to insert in a customer survey is taken from Rahul Vohra, the founder of Rapportive and now Superhuman. It goes as follows:
"How would you feel if you could no longer use Superhuman/[insert your product here]?"
Answers:
a. I’d be very disappointed
b. Somewhat disappointed
c. Not disappointed
Measure the percentage of "very disappointed" responses. If more than 40% of your responses are "very disappointed," you have product-market fit.
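That 40% benchmark is simple arithmetic, and it can help to see it spelled out. Here is a minimal sketch; the survey responses are made up purely for illustration:

```python
# Superhuman-style product-market fit check: the share of respondents
# who say they would be "very disappointed" without the product.
# These sample responses are fabricated for illustration only.
responses = [
    "very disappointed", "very disappointed", "somewhat disappointed",
    "not disappointed", "very disappointed", "somewhat disappointed",
    "very disappointed", "not disappointed", "very disappointed",
    "somewhat disappointed",
]

very = sum(1 for r in responses if r == "very disappointed")
pmf_score = very / len(responses)

print(f"PMF score: {pmf_score:.0%}")  # → PMF score: 50%
print("Signal:", "product-market fit" if pmf_score > 0.40 else "keep iterating")
```

The key detail is that only the "very disappointed" bucket counts toward the score; "somewhat disappointed" responses are ignored rather than given partial credit.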
Inbound:
This is really about getting your hands on the right signals and data that are already available to you and your product teams. So when you're getting up to speed on a new product or initiative, or if you've just started a new role, ask for the results and outputs from existing project work, whether it's user surveys, customer interviews, or pricing studies. Nothing will get you up to speed faster than plowing through those existing benchmarks. They can also help you build a quick baseline.
Another great resource is to speak to colleagues within each department your product touches, and ask them about what they're hearing about the product and its perception. Obtaining that internal feedback is great, because you get the good, bad and ugly very quickly.
Of course, there's no substitute for speaking with customers, but that's not always a luxury at your fingertips. The one hack I've turned to more and more over the years is searching Gong sales discovery calls for specific keywords and pulling verbatims. It saves you tons of time, and you're hearing from an objective source.
Finally, you touched on NPS. I think it's still a very powerful metric, but like many metrics, it can be manipulated or misinterpreted. You have to be very clear on what a suitable baseline is and what the bar should be, and be able to explain to stakeholders, including executives, why an NPS score is not relative, and why anything short of an 8 or a 9 is a net detracting result.
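For context on that bar: in the standard NPS formula, scores of 9-10 count as promoters, 0-6 as detractors, and 7-8 as passives, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch (with made-up scores) shows why a pile of 8s does nothing for the overall number:

```python
# Standard NPS calculation: % promoters (9-10) minus % detractors (0-6).
# Scores of 7-8 are passives: they don't hurt the score, but they don't
# help it either, which is why an 8 is not a "good" NPS response.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

print(nps([8, 8, 8, 8]))      # → 0 (all passives)
print(nps([9, 10, 8, 6, 3]))  # → 0 (2 promoters cancel 2 detractors)
print(nps([9, 9, 10, 7, 8]))  # → 60
```

This is also why NPS can swing hard on small samples: one respondent moving from 7 to 6 flips from neutral to detracting.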
I used to lead a Customer Insights team, so we have used all types of metrics for user feedback: NPS surveys, in-product happiness scores, customer satisfaction scores, etc. I would recommend aligning with Product and other departmental functions on how the user feedback is used, where you collect it, and when you collect it. I worked very closely with UX Research teams, and studies have shown that where you ask (e.g., in product vs. a random request outside the renewal conversation cycle) and when you ask (e.g., after completing a task vs. while struggling with an action) make the biggest difference.
I love using NPS! I'm also a big fan of getting product feedback in the form of star rating, thumbs up/down, etc.
Users have little patience for long surveys, so it's our job to be really strategic with WHEN we ask and HOW we ask (p.s. keep it simple!).
IMO, NPS should be owned by whoever is responsible for retention optimization; usually this falls to growth or product. Otherwise, you'll have a split between whoever collects NPS and whoever acts on it.
Then, you can utilize data from NPS responses and cross reference with usage data and churn indicators to see who you should reach out to as a follow-up.
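That cross-referencing step can be sketched in a few lines. This is only an illustration: the user IDs, the usage measure, and the thresholds below are all hypothetical stand-ins for your own schema, not a prescribed implementation:

```python
# Hypothetical triage of NPS follow-up candidates: detractors (score <= 6)
# who are still active in the product are the best people to reach out to,
# since they're unhappy but haven't churned yet. All data here is made up.
nps_responses = {  # user_id -> NPS score (0-10)
    "u1": 3, "u2": 9, "u3": 6, "u4": 10, "u5": 2,
}
usage = {  # user_id -> weekly active sessions (an assumed usage signal)
    "u1": 14, "u2": 20, "u3": 1, "u4": 18, "u5": 0,
}

ACTIVE_THRESHOLD = 5  # arbitrary cutoff for "still engaged"

follow_up = [
    uid for uid, score in nps_responses.items()
    if score <= 6 and usage.get(uid, 0) >= ACTIVE_THRESHOLD
]
print(follow_up)  # → ['u1']
```

Here u3 and u5 are also detractors, but their near-zero usage suggests they may already be gone, so a win-back motion (rather than a feedback call) might fit them better.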