What are some of the *worst* KPIs for Product Managers to commit to achieving?
This can really vary based on the company, your users, your target goals, where you are in your business lifecycle, etc.
The most basic ones are: acquisition, activation, retention, revenue, referral.
You could also be measuring customer lifetime value.
As for the worst KPIs: honestly, it's those that cannot be discretely measured and tracked over a specific time period. Vanity metrics (e.g. the number of views of a marketing article or the number of shares of a post) really add no value.
The worst KPIs to commit to are the ones you can’t commit to at all. We can set targets and metrics and make dashboards, but that’s exactly what they are - targets. I recommend looking at past performance and trends within the data and setting a realistic yet aspirational target to work towards. After that, begin iterating on your target. Revisit the KPI, analyze, adjust, and communicate your findings.
KPIs around delight, unless delight is your key product differentiator (and proven to be compelling to customers). Focus on building an intuitive and effective product experience that users would want to recommend to their friends/colleagues. Final pieces of polish such as interactions, delight, and animations are fluff until you're really providing value to your customers. This is why keeping your KPIs or success metrics concise and essential will let you deliver the most impact to customers.
- Rates: To me, rates without absolute numbers may paint a false picture. Let me explain with an example (see the sketch after this list). Let's say you have a trial experience for your product and you are responsible for the cart experience, and thereby for the conversion rate, measured as the number of paid customers divided by the number of trialers. I would suggest that instead of the rate, the north star metrics should be a combination of the number of paid customers and Average Deal Size (ADS) per paid customer. A conversion rate is a good number to track, but it may lead to wrong hypotheses when you see abnormal trends in either your numerator or denominator. In this example, the conversion rate will swing whenever the number of trials significantly increases or decreases, regardless of what paid customers do. If you are responsible for rates, make sure you own both the denominator and the numerator of that equation in order to be truly able to influence a positive change.
- Fuzzy metrics: As the saying goes, "What can't be measured can't be managed." Metrics that are not explicit, like Customer Happiness, or proxies like NPS, are very hard to measure meaningfully. I'd go a step further and challenge why one should even measure those metrics at all, and ask whether there are core metrics that can be measured successfully instead. For example, instead of measuring Customer Happiness, measure Customer Expansion, because happy customers expand.
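To illustrate the rates point above, here is a minimal sketch (Python, with made-up numbers) of how a conversion rate can drop even while the absolute business result improves:

```python
# Hypothetical numbers for a trial -> paid funnel across two quarters.
quarters = {
    "Q1": {"trialers": 1_000, "paid": 100, "avg_deal_size": 500.0},
    "Q2": {"trialers": 4_000, "paid": 240, "avg_deal_size": 520.0},  # big marketing push
}

for name, q in quarters.items():
    conversion_rate = q["paid"] / q["trialers"]
    revenue = q["paid"] * q["avg_deal_size"]
    print(f'{name}: conversion={conversion_rate:.1%}, '
          f'paid={q["paid"]}, revenue=${revenue:,.0f}')

# Q1: conversion=10.0%, paid=100, revenue=$50,000
# Q2: conversion=6.0%, paid=240, revenue=$124,800
# The rate fell while paid customers and revenue more than doubled;
# owning only the rate would have you "fixing" a quarter that went well.
```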
Some of the worst KPIs in my opinion are:
- KPIs that cannot be measured correctly
- KPIs that do not give a sense of the goal you are tracking. You can use the AARRR (Acquisition, Activation, Retention, Revenue, Referral) framework to understand the best metrics you can choose to align with your outcome/goal.
- KPIs that are not achievable in the desired timeframe. Yes, there could be exceptions here, but generally these are not the best ones in my opinion.
- Any KPIs that do not really tell you the health of the business unless a holistic picture is presented, e.g. the number of app installs is not that meaningful without retention and engagement metrics
The worst KPIs are vanity metrics that have no ties back to actual adoption or business metrics. I once had a product manager commit to hitting a number of emails a notification system was supposed to send in a 30-day period. Without context, this seems like a great metric to track for volume, except the total count of emails tells you nothing about how many people are getting value, whether they are recurring users, or whether the emails are contributing to user satisfaction. This can certainly be a metric in the toolkit, but not a KPI for a product line.
Let's cover this in two ways: (1) how to think about KPIs, (2) examples of poor ones and how they can be better. I'll also approach the question a little more broadly than Product Managers alone.
Remember that Key Performance Indicators (KPIs) are used at all levels of a company (e.g. project, team, group, division, exec team) with different levels of fidelity and lag (e.g. daily active user vs. quarterly revenue). The appropriateness of standard KPIs will also differ by industry (e.g. commerce software will not rely on daily active users the way social networks do). Finally, many people use the term KPI when they actually just mean metrics (whether input, output, health, or otherwise). As the name suggests, only the metrics that are key to success should be elevated to KPIs, and there should be as few of them as possible. When I see more than 1 from a team, 3 from a group, or 5 from a division/exec team, there are good odds that some can be cut, amalgamated, or otherwise improved. KPIs are, after all, meant to drive decision making and accountability.
So what are the criteria for KPIs that stand to be improved, and what are some examples?
- Vanity metrics: these look impressive but don't actually measure the success of a product. Examples include the amount of traffic to a website, the number of sign-ups a product has, daily active users for marketplaces that monetize through purchases, or the number of likes across posts on a social network.
- Poorly instrumented metrics: these are not reliably measured, which can lead to incorrect or misleading conclusions about the effectiveness of a product. For example, if the first step of a conversion funnel (e.g. checkout) has many ingress pathways, and the user can transition in and out of that step before proceeding down funnel, how well your instrumentation deduplicates that first step is critical to your conversion calculations (see the first sketch after this list).
- Lack of attribution to effort: any metric whose fluctuations cannot be explained by the combination of efforts from the team/group using it as a KPI, plus seasonal and random variability, is going to be ineffective. For example, if a common funnel in the company has multiple teams trying to improve its conversion, each team needs to define a KPI that does not overlap the others, or they won't know whether their efforts produced an outcome versus another team's efforts. Note that if all those teams are in the same group (e.g. a growth org), then that group could effectively use the conversion rate as their KPI. When in doubt, or if you're unable to isolate your efforts with lower level metrics, run an A/B test against every major change by each team to get a better (but imperfect) indication of relative contribution (see the second sketch after this list). This criterion covers many grey areas as well. Revenue is a prototypically difficult KPI for individual teams to use because of attribution. However, you can find relatively small teams or groups that build add-on products that are directly monetized, and expansion revenue can be an excellent KPI for them (e.g. a payroll add-on to an accounting system).
- Unclear tie to the next level's KPI: companies are concentric circles of strategy, with each division, group, and team needing to fit its plans and focus into those of the level above. This includes KPIs, where you'd expect a well modeled connection between lower level KPIs driving higher level ones. For example, say a SaaS invoicing platform sets an X-in-Y goal as an activation hurdle to predict long term retained users (i.e. 2 invoices sent in the first 30 days). It would be reasonable to assume that onboarding will heavily influence this. But what about onboarding, specifically, will matter? If a team concocts a metric around how many settings are altered in the first 7 days (e.g. chose a template, added a logo, set automatic payment reminders) and wants to use that as their KPI, they'd need to have analyzed and modeled whether that matters at all to new users sending their first 2 invoices (see the third sketch after this list).
- Lagging metrics at low levels: the closer you get to the team level, the more you want to see KPIs defined by metrics that are leading indicators of success and can be measured without long time delays. Bad KPIs are ones that just can't be measured fast enough for a team to learn and take action. For example, many teams will work to increase retention in a company. But larger customers in SaaS may be on annual contracts. If features are being built to influence retention, it's better to find leading activity and usage metrics at the team level to drive behaviour and measure them weekly or monthly. These can tie into a higher level retention KPI for a group or division, and keep teams from getting nasty delayed surprises if their efforts weren't destined to be fruitful. The only caveat for this criterion is how platform and infrastructure teams measure themselves. Their KPIs are typically more lagging, and this topic is deserving of its own write-up.
- Compound or aggregate metrics: these are made up of multiple individual metrics that are combined using a formula in order to provide a more comprehensive view of the success of a product without needing to analyze many individual numbers. Examples include effectiveness scores, likelihood indicators, and satisfaction measures. Arguably, many high level KPIs behave this way, such as revenue and active users, which is why the attribution criterion above is important to keep in mind. However, it's the formulas that are particularly worrisome. They inject bias through how they're defined, which is hard for stakeholders to remember over time. You find yourself looking at a score that's gone down 5% QoQ and asking a series of questions to understand why. Then you realize it would have been simpler to look at the individual metrics to begin with (see the last sketch after this list). In my experience, these KPIs lead to more harm than good.
- Lacking health metrics or tripwires: having a KPI is important, but having it in isolation is dangerous. It's rare for lower level metrics to be improved without the possibility of doing harm elsewhere. For example, in commerce, we can make UX changes that increase the likelihood of conversion but decrease average order value or propensity for repeat purchases. Therefore, a KPI that does not consider tripwires or does not get paired with health metrics is waving a caution flag.
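To make the instrumentation criterion concrete, here is a minimal sketch (Python, with hypothetical event names and data) of how deduplicating funnel-entry events changes a conversion calculation:

```python
# Hypothetical raw analytics events: (user_id, event_name).
events = [
    ("u1", "checkout_started"), ("u1", "checkout_started"),  # u1 bounced in and out
    ("u1", "checkout_started"), ("u1", "purchase_completed"),
    ("u2", "checkout_started"), ("u2", "purchase_completed"),
    ("u3", "checkout_started"),
]

raw_starts = sum(1 for _, e in events if e == "checkout_started")
unique_starters = {u for u, e in events if e == "checkout_started"}
purchasers = {u for u, e in events if e == "purchase_completed"}

print(f"Naive conversion:   {len(purchasers) / raw_starts:.0%}")             # 2/5 = 40%
print(f"Deduped conversion: {len(purchasers) / len(unique_starters):.0%}")   # 2/3 = 67%
```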
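For the attribution criterion, where lower level metrics can't isolate a team's effort, a simple two-proportion z-test over an A/B test gives a better (but still imperfect) read on whether a specific change moved conversion. A minimal sketch with made-up counts:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided normal tail
    return p_a, p_b, z, p_value

# Control vs. the team's change (hypothetical numbers).
p_a, p_b, z, p = two_proportion_ztest(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"control={p_a:.1%}, variant={p_b:.1%}, z={z:.2f}, p={p:.3f}")
```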
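For the KPI-linkage criterion, the first pass at "does this onboarding metric matter?" can be as simple as comparing activation rates across cohorts of the proposed metric. A sketch with invented data (a real analysis would control for confounders):

```python
# Hypothetical per-user onboarding data: (settings_altered_in_first_7d, activated),
# where "activated" means the user sent 2 invoices in their first 30 days.
users = [
    (0, False), (0, False), (1, False), (1, True), (0, True),
    (2, True), (2, True), (3, True), (3, True), (4, True),
]

def activation_rate(cohort):
    return sum(activated for _, activated in cohort) / len(cohort)

low = [u for u in users if u[0] < 2]    # altered fewer than 2 settings
high = [u for u in users if u[0] >= 2]  # altered 2 or more settings

print(f"activation (<2 settings):  {activation_rate(low):.0%}")   # 40%
print(f"activation (>=2 settings): {activation_rate(high):.0%}")  # 100%
# A gap like this, on real data, is the evidence a team would need before
# adopting the settings metric as a KPI feeding the activation KPI above it.
```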
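And for compound metrics, a sketch of why a weighted score obscures its drivers; the score, weights, and inputs here are all hypothetical:

```python
# A hypothetical composite "engagement score": the weights are a design
# choice, and that choice is exactly the bias stakeholders forget later.
weights = {"sessions": 0.5, "features_used": 0.3, "invites_sent": 0.2}

last_q = {"sessions": 0.80, "features_used": 0.60, "invites_sent": 0.40}
this_q = {"sessions": 0.80, "features_used": 0.60, "invites_sent": 0.15}

def score(metrics):
    return sum(weights[k] * metrics[k] for k in weights)

print(f"score: {score(last_q):.2f} -> {score(this_q):.2f}")  # 0.66 -> 0.61

# To explain the drop you end up decomposing it into the parts anyway:
for k in weights:
    delta = weights[k] * (this_q[k] - last_q[k])
    print(f"  {k}: contributed {delta:+.2f}")
```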
Maybe this is controversial, but I believe KPIs should be hypothesis-oriented, with an eye towards learning what is and is not working to move the needle on an overarching business objective, rather than just a random number we think might mean...something?
Let me give an example here. Say we are launching an update that we believe will increase adoption of a feature most closely understood to drive retention (i.e. customers that adopt this tool tend to stay with us longer and spend more money).
I would not want to make my KPI "80% of customers use the new update." Why? Because I have no idea if that number is attainable. That number also does not indicate whether we are moving the needle on adoption of the feature this update is meant to serve.
Rather, I would make my KPI hypothesis driven and track the launch of the update against adoption of the existing feature (a minimal sketch of this appears at the end of this answer). If we see adoption increasing, we proved our hypothesis correct. It doesn't necessarily matter early on by how much adoption is increasing, just that our hypothesis seems to be on the right track. From there we have lots of options:
We can look to iterate on the update based on customer feedback
We can call more attention to it via in-app messaging
We can better incorporate it into the onboarding flow
We can work with PMM to call more attention to it in our docs and marketing announcements
Being hypothesis-oriented in KPI measurement ensures you are constantly learning from your product launches vs setting a number, achieving it and then moving on to the next thing.
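Here is a minimal sketch of what that hypothesis-driven tracking could look like (the weekly adoption data and launch week are hypothetical):

```python
# Hypothetical weekly adoption rates of the existing feature,
# before and after the update ships in week 4.
weekly_adoption = [0.120, 0.118, 0.122, 0.121,   # pre-launch
                   0.128, 0.135, 0.141, 0.150]   # post-launch
LAUNCH_WEEK = 4

pre = weekly_adoption[:LAUNCH_WEEK]
post = weekly_adoption[LAUNCH_WEEK:]
pre_avg, post_avg = sum(pre) / len(pre), sum(post) / len(post)

print(f"pre-launch avg adoption:  {pre_avg:.1%}")   # 12.0%
print(f"post-launch avg adoption: {post_avg:.1%}")  # 13.9%
# The KPI is directional ("adoption is increasing after launch"), not a
# guess like "80% of customers use the update"; if the trend holds, the
# hypothesis survives and we keep iterating.
```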
There are two sets of KPIs that can be the "worst" metrics:
- The set that we cannot commit to.
- The set that we can commit to and track tasks against, but whose outcomes we don't measure.
We need KPIs that cover the whole ground: the product, how customers are receiving the product, and the financial goals of the organisation.
A product manager should prioritize addressing customer problems to create long-term value and profitability for the business. That said, there are several ineffective KPIs that we often fall prey to:
Output-Focused KPIs
Deliver X Number of Features: Emphasizing quantity over quality can dilute the value provided to users.
Create Y PRDs, Processes, or Briefs: Focusing solely on documentation can lead to a lack of actionable outcomes.
Tactical KPIs
Cost-Cutting Metrics: Overemphasizing reductions in costs can hinder innovation and growth.
Short-Term Wins: Setting goals like "Do X and win this quarter's business" may prioritize immediate results at the expense of long-term strategy.
By shifting focus away from these metrics, product managers can better align their efforts with meaningful outcomes that drive sustainable success.
Since the question suggests “achievements”, I assume we want to reason about Key Results and not KPIs. See question: “What are good OKRs for product management?” for a general introduction to the topic.
Good Key Results are often hard to come by, and we often fall into the trap of bad Key Results. Here are the common troublemaker patterns.
The binary metric: Deliver [feature] by [date].
This metric is useless for understanding progress, since it is often 0% for the entirety of the execution and becomes 100% at the last minute before the target date. You can work around the problem by finding a relatively predictable breakdown of the feature into phases or sub-deliveries of ideally equal sizes. Then each completed phase can be reported as a percentage step towards completion, as in the sketch below.
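A tiny sketch of that phase-based workaround (the phases are hypothetical):

```python
# Hypothetical breakdown of "Deliver [feature] by [date]" into phases of
# roughly equal size, so progress moves in steps instead of 0% -> 100%.
phases = {
    "API contract agreed": True,
    "Backend endpoints live": True,
    "UI behind feature flag": False,
    "GA rollout": False,
}
progress = sum(phases.values()) / len(phases)
print(f"KR progress: {progress:.0%}")  # 50% instead of a flat 0%
```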
Opinion-based metric: Reach market-fit by [date].
This usually looks like a percentage that depends on a leader's best guess of progress towards a perceived state. It is a very bad situation, since there is no way to realistically challenge that perception. This defeats the purpose of a Key Result in its role of raising a flag when execution goes off the rails. It also disempowers teams and can lead to dramatic disengagement or misalignment. A workaround is to find a proxy metric that indirectly represents progress towards the target. In the example above it could be a Net Promoter Score (NPS) or a measurable customer segment adoption progress (see the sketch below).
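For the NPS proxy mentioned above, the standard calculation is the share of promoters (scores 9-10) minus the share of detractors (scores 0-6). A minimal sketch with made-up survey data:

```python
# Survey responses on the standard 0-10 "would you recommend us?" scale.
responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters = sum(1 for r in responses if r >= 9)    # scores 9-10
detractors = sum(1 for r in responses if r <= 6)   # scores 0-6
nps = (promoters - detractors) / len(responses) * 100

print(f"NPS: {nps:+.0f}")  # 5 promoters, 2 detractors -> NPS +30
```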
Vanity metric: Reach [X] monthly active users by [date].
A vanity metric is usually a number that visibly goes up or down but does not correlate to meaningful business outcomes. This is often what happens when KPIs (lagging metrics) are confused with KRs (leading metrics). The example above might be correct if the turn-around time during an execution period is so short that we can immediately observe a market result at each reporting milestone (think of a marketing campaign with immediately observable results). But often what happens is that the KPI we look at represents results from actions taken several execution periods in the past (think of delivering a feature in Q1 that impacts numbers in Q3). In that situation, the KR does not help track progress, might not even be directly correlatable to specific actions, and the execution team might already be committed to new goals by the time you observe a need for corrective action. There is no workaround; such a metric is not a useful Key Result. That said, it could instead be useful as an input to strategic planning (macro market trends and intelligence).
The (nearly) impossible to update number: Customers save [amount] by using [product feature].
Of course, this example might work in your case; it's hard to find an absolute example that would work for every product context. But by choosing this example I wanted to point at the usual challenge of choosing a data point we cannot observe directly, and for which collecting the data requires a third-party contribution or a significant effort like deep market research. In that situation we often observe that we get an update very early on, as we're freshly committed to the OKR, and progressively, as time goes on, we find many good and bad excuses to skip updating it. At that point, leadership and stakeholders will lose interest in the OKR updates and will request new data points, forcing you to abandon the initial KR you wanted to track.
Too many OKRs: "We're so data-driven we have more Key Results than employees!"
One last important rule to remember: keep the number of OKRs under control! Too many data points will muddy the reporting and waste data-collection resources without significantly improving execution. OKRs should be chosen by how meaningful they are in representing progress towards the business outcomes described in your approved business case. A good rule of thumb is 3 Objectives max for an organization, and 2-3 Key Results per Objective. It is OK and useful to create cascading OKRs depending on the size of the organization.
See question: “How do you approach setting crisp KPIs and targets for Engine features and linking them to your topline metrics?” for my step by step process for realistic OKRs.
I might contradict some of the PMs out there when I say this, but here we go, and I'll prove it:
1. The number of released features
I think this is a bad one to track because it encourages quantity over quality, where PMs ship features that add little or no real value to users.
2. DAU with no context
DAUs can be misleading if they aren't tied to meaningful engagement or conversion. A product might see a spike in users from a promotional campaign, but if those users don't convert or stick around, what's the point? It's better to focus on metrics that reflect the AARRR funnel for the most part.
3. Time spent on app
You're probably saying, "Dude, really?!" Yes, really.
This often sounds good in theory but can be disastrous if your product is designed to simplify or streamline a task.
Optimizing for more time in-app contradicts user value in many cases. For example, think of tools like Slack or Trello: users want efficiency, not prolonged interaction. I hope that delivers the point.
4. Reducing churn without knowing the 'why'
Many focus on reducing it (as they should) but without analyzing user behavior or feedback.
For example, you might be able to reduce churn temporarily through heavy discounts or aggressive tactics, etc, but without solving core product issues, you're just delaying the inevitable.
5. Meeting deadlines.
Let me explain.
Hitting a timeline sounds great, but at what cost? Forcing releases to meet arbitrary deadlines often leads to cutting corners, in my opinion, since it may affect product quality or usability.
In a nutshell, getting the right product out is far more crucial than getting the product out fast.
Hope that helps :)