What key metrics do you track to measure post-launch success beyond just vanity numbers like downloads or sign-ups?
There are two major ones I’ve used over the years.
The first is pipeline generation: how much pipeline is generated by the campaigns or sales plays tied to the launch.
The second is use case adoption. There are different tools and ways to measure it, but ultimately it's usually a CRM report you can pull to analyze which use cases are being adopted and what customers are using your product for. If adoption is concentrated in the key use cases you promoted during the launch, and even better if there's historical data you can compare against and ideally see a lift, that means the launch was an impactful one.
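If you want a rough way to eyeball that lift from raw data, here's a minimal sketch, assuming a hypothetical CRM export (crm_export.csv) with one row per account and use case plus a pre- vs. post-launch flag; the file name, columns, and labels are placeholders, not any specific CRM's schema.

```python
import pandas as pd

# Hypothetical CRM export: one row per account/use case, with a period flag.
# Columns assumed: account_id, use_case, period ("pre_launch" or "post_launch").
df = pd.read_csv("crm_export.csv")

# Count distinct adopting accounts per use case, before and after the launch.
adoption = (
    df.groupby(["use_case", "period"])["account_id"]
      .nunique()
      .unstack(fill_value=0)
)

# Lift = relative change in adopting accounts post-launch vs. the pre-launch baseline.
adoption["lift_pct"] = (
    (adoption["post_launch"] - adoption["pre_launch"])
    / adoption["pre_launch"].replace(0, pd.NA) * 100
)

print(adoption.sort_values("lift_pct", ascending=False))
```

The point isn't the tooling; it's comparing the launch's promoted use cases against their own baseline so you're measuring change, not just raw volume.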
I also want to share my POV on so-called vanity metrics. I think there's a lot of value in them because they are important early signals of whether your messaging is resonating. So instead of waiting for the pipeline and use case adoption numbers, which usually take at least 1-2 quarters, I would encourage PMMs to check metrics like downloads and registrations regularly. Early post-launch, I recommend a monthly review or something even more frequent. Once things settle a bit, it can become a quarterly review.
Orient yourself and every other stakeholder to usage and adoption goals. Adoption takes time. When revenue targets are involved, adoption rates can and should serve as a leading indicator of success. How many, or what percentage, of customers have viewed the feature? How many have meaningfully engaged with it? Used it more than once? How you set the adoption goal will be unique to your product and objectives.
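As a rough sketch of those three questions, assuming a hypothetical feature-usage event log (feature_events.csv) and a known customer count — all names and thresholds here are placeholders, not a prescribed implementation:

```python
import pandas as pd

TOTAL_CUSTOMERS = 5000  # assumed size of the customer base (or segment)

# Hypothetical event log: one row per feature interaction.
# Columns assumed: customer_id, event ("viewed" or "used"), timestamp
events = pd.read_csv("feature_events.csv")

# Customers who viewed the feature at least once.
viewed = events.loc[events["event"] == "viewed", "customer_id"].nunique()

# Usage counts per customer, to separate one-time from repeat use.
use_counts = events[events["event"] == "used"].groupby("customer_id").size()
engaged = use_counts.size        # used the feature at least once
repeat = (use_counts > 1).sum()  # used it more than once

print(f"Viewed:  {viewed / TOTAL_CUSTOMERS:.1%}")
print(f"Engaged: {engaged / TOTAL_CUSTOMERS:.1%}")
print(f"Repeat:  {repeat / TOTAL_CUSTOMERS:.1%}")
```

The same cut can be run per customer segment rather than across the whole base, which connects to the next point.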
I'd also encourage you to set these goals at the customer segment level rather than for the entire customer base. For example, if you're giving the feature away to Enterprise customers but charging an add-on fee for customers on your entry-level product, you should set the adoption goal lower for the latter group than for the former.
I'd also strongly encourage you to engage customers who do and don't adopt the new feature to learn why or why not, then improve. For example, ask users who viewed the feature once but haven't revisited it why that was the case. Was the value prop not compelling? Was it hard to understand? Did they just forget about it? Use that information to optimize your communications and even the feature itself.