What are different ways to set benchmark goals for launch, maximize demand generated, and continue optimizing post-launch?
It's all about the process. Don't start with "we have something to launch and here's all the stuff we're going to do for this launch and NOW let's set a launch goal." Instead, start with the strategic goals your company is trying to achieve. Okay, it's churn. So how will this new launch help with churn? Then build your plan based on that goal.
Sometimes it's not easy to set a goal using the numbers at your disposal. Meaning you can't come up with "I need to book 50 meetings for CS with this launch" by running a Salesforce report. Sometimes it's a guesstimate.
"Okay, I need to help the company reduce churn. This new feature is only available for customers on the Enterprise plan. I know if they talk to their CS team after they hear about this launch they are less likely to churn. So if I got 50 extra meetings with CS this month, that's 25% bump on meetings, that should put a dent in churn this month"
You can use internal and/or external benchmarks.
How did prior launches perform at your company? Are you choosing the right metrics? Use that as the benchmark to outperform by X% (see the sketch below).
How are launches for similar products/features measured at a peer company of yours? What does their performance look like (assuming you have a network or a friendly contact willing to share)? Use that as a benchmark. Do your investors have other portfolio companies willing to share?
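Either way, the mechanics of turning a benchmark into a launch goal are the same. Here's a minimal sketch, with made-up numbers standing in for whatever prior-launch or peer data you actually have:

```python
# Illustrative only: plug in your own prior-launch or peer-company numbers.
prior_launch_signups = 1_200   # e.g. signups in the first 30 days of your last comparable launch
target_uplift = 0.20           # "outperform by X%" -- here X = 20%, a placeholder

launch_goal = prior_launch_signups * (1 + target_uplift)
print(f"Goal: {launch_goal:.0f} signups in the first 30 days "
      f"(prior launch did {prior_launch_signups}, targeting +{target_uplift:.0%})")
# -> Goal: 1440 signups in the first 30 days (prior launch did 1200, targeting +20%)
```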
This depends on where you are in the launch cycle. I'm coming from a Software-as-a-Service (SaaS) context, which generally includes the following phases for launches: alpha, beta, progressive roll-out, and Generally Available (GA).
During the alpha phase, we look at metrics like requests to participate in the alpha, feedback during the alpha phase, and acceptance of feedback sessions (written requests, meetings to talk about the product, etc.).
During the beta phase, we look at metrics like waitlist signups, lighthouse customers (large customers who are willing to give us detailed feedback about pain-points and scaling), and adoption (ie: spreading across teams in beta accounts).
For progressive roll-out, we'd look at product adoption metrics, like Monthly Active Users (MAU), account growth, time-in-product, and actions taken in product, to confirm users are reaching value quickly.
In the GA phase, we'd look at more traditional marketing metrics, like sign-ups or free trials, conversion to paid tiers, account growth, and MAU.
We would also look at top-of-funnel metrics like press coverage, social media impressions and engagement, watch time on demo videos, referral traffic from various promotion channels, conversion from SEO articles, and the overall funnel health of conversion from traffic to product tour to sign-up/free trial.
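To make "funnel health" concrete, here's a minimal sketch of computing step-by-step conversion from traffic to product tour to sign-up/free trial; the stage names and counts are hypothetical and would come from your analytics tool:

```python
# Hypothetical funnel counts for one launch week; replace with data from your analytics tool.
funnel = [
    ("landing page visits", 50_000),
    ("product tour starts", 8_000),
    ("sign-ups / free trials", 1_600),
    ("converted to paid", 240),
]

# Print conversion between each adjacent stage plus conversion from the top of the funnel.
for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
    step_rate = next_count / count
    overall_rate = next_count / funnel[0][1]
    print(f"{stage} -> {next_stage}: {step_rate:.1%} step conversion, "
          f"{overall_rate:.2%} of top-of-funnel")
```

Tracking both the step-by-step and top-of-funnel rates over time shows you where the funnel is leaking after launch, which is where post-launch optimization effort should go.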
There's a mix of metrics that indicate a healthy go-to-market strategy (ie: creating and capturing demand) and a healthy adoption/upgrade strategy (ie: onboarding, fast time-to-value, stickiness + virality within an organization).
Continuing to publish demos, guides, and use cases helps keep SEO strong and also helps new and existing customers unlock the value in the product.
It’s a great question, and I answered this in part above. To elaborate on setting benchmarks, you’ll want to work with your cross-functional partners (across Product, Finance, Revenue, Marketing, etc.) to determine which historical performance is most relevant to compare success against. It’s much easier to do if your launch is relevant to current customers, because you should have a sense of how they’ve reacted to and adopted products in the past.
It’s much harder to do with a net new market, and you likely will pick a goal based on industry benchmarks or limited data that may be wildly off—either way too high, or so low that you hit it within hours of launch (a very pleasant surprise, but somewhat meaningless). In these cases, you simply need to recalibrate the goal with the data you have. I’d encourage you to dream big and set what you believe is an ambitious goal while overcommunicating the unknowns and caveating that the goal is subject to change if you get data or see performance that indicates that it’s completely off the mark.
The Reforge PMM course I mentioned in the first answer has an incredible module on this.