What is the best cadence for gathering stakeholder feedback in preparation for a launch?
That is a hard one to answer because there is no one-size-fits-all. For launches where the product rarely evolves, checking in with stakeholders once every few months might suffice; at other times it could be daily, or even multiple times a day. You would also want to increase the check-ins as you get closer to milestones (e.g., finishing user experience, go-to-market, etc.).
For a lot of products, a milestone-based check-in works best. A weekly status update can supplement it.
If you believe that your partners are there to help you build the best experiences/products possible, then you should have continuous feedback loops with them, not just at the start and finish. The most effective teams include their internal partners in their sprint cadence, which looks something like this:
- You are involving them early in the discovery process to see the opportunities you are exploring and the outcomes you are trying to achieve. This is how you can make sure you are aligned from the beginning and get everyone committed to the path forward.
- You have a continuous cadence and are inviting them to sprint reviews and/or making recordings available, so you can prevent surprises about how the solution is evolving based on what you are learning. This step is probably the most critical, because if your internal partners are there for the whole journey, then launching a product is that much easier: 1) they have the context and details of what is being built, and 2) they can start their work in parallel and iterate with you along the way, instead of treating the end as a "handoff" to them.
- You are creating a culture of mutual accountability by creating space for them to share how their work is progressing alongside yours. The norm seems to be that the product team needs to be transparent and show their work and ask for feedback, but is this reciprocated by your internal partners? I don't think this all fits into a sprint demo, but I have used GTM demos as a place for partners like sales, operations, product marketing, etc. to show the progress they are making and for the product team to give feedback. Having these conversations and feedback loops is a way to ensure that the whole experience works well and has high quality.
- Lastly, consider inviting your internal partners to your retrospectives or holding a separate one.
It depends on the stakeholders' seniority and role in the initiative, but here are a few guidelines that I use. First, though, one GitLab cultural practice that is very helpful: for every major initiative we have internally public channels that anyone interested in the initiative can join (the only exceptions are projects that must remain confidential).
That way, as we make progress on the initiative anyone interested (all stakeholders and curious individuals) can join the channel and see the progress that is being made, the challenges, and can chime in with feedback at any point. This is huge for collaboration.
Outside of that, here are some specific things that I like to think about:
Executive sponsor: Early on, I communicate with them until we are both aligned and agree on the goal of the launch and the plan to achieve it. After that's done, I update them on any major changes around launch dates, feature scope, and depending on their interest in details, changes to the user experience. I don't think of these as "gathering feedback" as much as providing updates. If I run into any launch blockers, I'd immediately let them know.
Functions not involved in development but with good insight into users/customers (Sales, Marketing, Support, etc.): I think of them as needing to be heavily involved in two situations -- solution validation/testing and the GTM approach. If you did your job right upfront, you've already been in touch with them, and they know that you're developing a solution you'll be launching and what the scope of that solution is (i.e., what it will do and achieve). As you get closer to launch, you want to strategically choose moments in which you can validate that what you're building will be good for users/customers once launched. To do this, give demos or request testing/solution validation as the team builds -- when you have enough built that they can appreciate how things will work and provide feedback, but not so late that you can't change anything prior to the launch. For example, for a 3-month project, you'd probably want their input every other week from ~6 weeks in until the end of the project.
Other impacted functions that may need to plan around the launch (e.g., Finance updating financial models): For anyone else of interest, I make sure that they are aware of what's happening at least a month ahead of the launch, and again the week of the launch. For this I focus on announcing upcoming milestones in a public Slack channel dedicated to that launch, and make sure that interested folks are looped into that channel. The idea is that they know and can plan accordingly, and if they need to provide feedback, they can.
TLDR: get as much feedback as early as you can from your customers to validate the use case you're solving and your approach to solving it; validate the solution with product leadership, as well as with sales and marketing in terms of messaging; make sure the product can be built to the requirements customers need; and ensure all go-to-market functions sign off on the launch plan.
There are two big influences on the cadence of gathering feedback:
Which stakeholders: who are you referring to as a "stakeholder"? I assume we mean customers, sales (technical and non-technical), engineering, marketing, customer support/success, and pricing.
Stage of launch prep: how far out are you from launch?
This question could get really long - so I'm going to try to make it rather concise:
1. Business case development: conduct UX research, or your own customer interviews, to discover pain points around a use case. Synthesize the findings and use them to put together a PRFAQ or PRD, and socialize that with product peers, leaders, and engineering counterparts -- and with product marketing, to ensure the "PR" part of the PRFAQ makes sense. Once these internal stakeholders align on the business case, move into solution development.
2. Establish a product preview program with customers that have the use case requirements driving your approach, on a biweekly or monthly basis, as you make progress on solution development. Early on, show Figma mock-ups if you don't yet have code. Run the preview as long as you need, until the solution is ready. You may need to add customers over time, as requirements may evolve or be added once customers see the solution for themselves. I would focus on nailing the activation and onboarding elements of the product first, and on ensuring one critical use case can be completed by the end of the preview program.
3. In parallel to #2, run a series of sales feedback sessions on a monthly or quarterly basis (depending on how long the solution preview is) to get an understanding of how to go to market, what pitfalls to avoid, how to structure pricing, metering, etc. This feedback will probably inform some smaller elements of the product itself, but it will greatly influence field enablement and messaging.
4. Have checkpoints throughout the preview process with product marketing to nail the messaging. How often depends on how embedded product marketing is in the product development process; PMM should really be in those customer and sales feedback sessions. PMM should create a messaging brief based on customer feedback, and the brief should be complete before the end of the preview. All marketing content and sales enablement should be based on this messaging brief.
5. Have a bill of materials for launches -- a checklist of everything required before launch. It should define when each function needs to provide input or sign off.
6. Launch the product!
7. Continue getting feedback through a monthly or quarterly customer advisory board -- never stop getting feedback to iterate on and improve existing functionality, or to expand into new use cases.