
What are some of the *worst* KPIs for Product Managers to commit to achieving?

9 Answers
Virgilia Kaur Pruthi (she/her)
Expedia Group Senior Director of Product, Head of Trust and Safety · January 31

This really depends on the company, your users, your target goals, where you are in your business lifecycle, etc.

The most basic ones are acquisition, activation, retention, revenue, and referral.

You could also measure customer lifetime value.

As for the worst KPIs: honestly, those that cannot be discretely measured and tracked over a specific time period. Vanity metrics (e.g. the number of views on a marketing article or the number of shares of a post) really add no value.

Tasha Alfano
Twilio Staff Product Manager, SDKs and Libraries · February 10

The worst KPIs to commit to are the ones you can’t commit to at all. We can set targets and metrics and make dashboards, but that’s exactly what they are - targets. I recommend looking at past performance and trends within the data and setting a realistic yet aspirational target to work towards. After that, begin iterating on your target. Revisit the KPI, analyze, adjust, and communicate your findings.

Nico Rattazzi
DOZR VP of Products · February 22

KPIs around delight, unless delight is your key product differentiator (and proven to be compelling to customers). Focus on building an intuitive and effective product experience that users would want to recommend to their friends and colleagues. The final pieces of polish, such as interactions, delight, and animations, are fluff until you're really providing value to your customers. This is why keeping your KPIs or success metrics concise and essential will allow you to deliver the most impact to customers.

Farheen Noorie
Zendesk Senior Director of Product Management · April 20
  1. Rates: To me, without absolute numbers, rates may paint a false picture. Let me explain with an example. Let's say you have a trial experience for your product and you are responsible for the cart experience, and thereby for the conversion rate, which is measured as number of paid customers / number of trialers. I would suggest that instead of the rate, the north star metric should be a combination of the number of paid customers and Average Deal Size (ADS) per paid customer. A conversion rate is a good number to track, but it may lead to wrong hypotheses when you see abnormal trends in either your numerator or denominator. In this example, the conversion rate will swing whenever the number of trials significantly increases or decreases. If you are responsible for rates, make sure that you own both the denominator and the numerator of that equation in order to be truly able to influence a positive change.
  2. Fuzzy metrics: As the saying goes, "What can't be measured can't be managed." Metrics that are not explicit, like Customer Happiness and NPS, are not possible to measure reliably. I'd go a step further and challenge why one should even measure those metrics, and ask whether there are core metrics that can be measured successfully. For example, instead of measuring Customer Happiness, let's measure Customer Expansion, because happy customers expand.
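
To make the rates point concrete, here is a small sketch with hypothetical trial figures: the conversion rate falls while paid customers and revenue both grow, which is exactly the false picture a rate-only KPI can paint.

```python
# Hypothetical quarterly figures (not from the answer above) illustrating
# why a conversion rate alone can mislead when the denominator shifts.
quarters = [
    # (quarter, trials, paid_customers, avg_deal_size_usd)
    ("Q1", 1_000, 100, 500),
    ("Q2", 2_500, 150, 520),  # marketing spike: far more trials, more paid too
]

for name, trials, paid, ads in quarters:
    rate = paid / trials          # the rate-only view
    revenue = paid * ads          # the absolute-numbers view
    print(f"{name}: conversion {rate:.1%}, paid {paid}, revenue ${revenue:,}")

# Q1: conversion 10.0%, paid 100, revenue $50,000
# Q2: conversion 6.0%, paid 150, revenue $78,000
# The rate dropped 4 points, yet paid customers and revenue both grew.
```
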
Paresh Vakhariya
Atlassian Director of Product Management (Confluence) · May 10

Some of the worst KPIs, in my opinion, are:

  • KPIs that cannot be measured correctly
  • KPIs that do not give a sense of the goal you are tracking. You can use the AARRR (Acquisition, Activation, Retention, Revenue, Referral) framework to find the metrics that best align with your outcome or goal.
  • KPIs that are not achievable in the desired timeframe. Yes, there can be exceptions here, but generally these are not the best ones, in my opinion.
  • Any KPIs that do not really tell you the health of the business unless a holistic picture is presented, e.g. the number of app installs is not that meaningful without retention and engagement metrics
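
The last bullet can be illustrated with a quick sketch (all numbers hypothetical): installs climb month over month, yet pairing them with a day-7 retention figure shows the health of the business going the other way.

```python
# Hypothetical install cohorts: raw installs look healthy on their own,
# but pairing them with day-7 retention tells a different story.
cohorts = {
    # month: (installs, users_still_active_on_day_7)
    "Jan": (10_000, 3_000),
    "Feb": (14_000, 2_800),  # installs up 40%, retained users slightly down
}

for month, (installs, retained) in cohorts.items():
    print(f"{month}: {installs} installs, D7 retention {retained / installs:.0%}")

# Jan: 10000 installs, D7 retention 30%
# Feb: 14000 installs, D7 retention 20%
```
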
Jacqueline Porter
GitLab Director of Product Management · July 26

The worst KPIs are vanity metrics that have no ties back to actual adoption or business metrics. I once had a product manager commit to hitting a number of emails a notification system was supposed to send in a 30-day period. Without context, this seems like a great metric to track for volume, except that the total count of emails tells you nothing about how many people are getting value, whether they are getting value, whether they are recurring users, or whether the emails are contributing to user satisfaction. This can certainly be a metric in the toolkit, but not a KPI for a product line.

Mani Fazeli
Shopify Director of Product · December 14

Let's cover this in two ways: (1) how to think about KPIs, (2) examples of poor ones and how they can be better. I'll also approach the question a little more broadly than Product Managers alone.

Remember that Key Performance Indicators (KPIs) are used at all levels of a company (e.g. project, team, group, division, exec team) with different levels of fidelity and lag (e.g. daily active user vs. quarterly revenue). The appropriateness of standard KPIs will also differ by industry (e.g. commerce software will not rely on daily active users the way social networks do). Finally, many people use the term KPI when they actually just mean metrics (whether input, output, health, or otherwise). As the name suggests, only the metrics that are key to success should be elevated to KPIs, and there should be as few of them as possible. When I see more than 1 from a team, 3 from a group, or 5 from a division/exec team, there are good odds that some can be cut, amalgamated, or otherwise improved. KPIs are, after all, meant to drive decision making and accountability.

So what are the criteria of KPIs that stand to be improved, and examples of them?

  1.  Vanity metrics: these look impressive but don't actually measure the success of a product. Examples include the amount of traffic to a website, the number of sign-ups a product has, daily active users for marketplaces that monetize through purchases, or the number of likes across posts on a social network.
  2.  Poorly instrumented metrics: these are not reliably measured, which can lead to incorrect or misleading conclusions about the effectiveness of a product. For example, if the first step of a conversion funnel (e.g. checkout) has many ingress pathways, and the user can transition in and out of that step before proceeding down funnel, how well your instrumentation deduplicates that first step is critical to your conversion calculations.
  3.  Lack of attribution to effort: any metric whose fluctuations cannot be explained by the combination of efforts from the team/group using it as a KPI, plus seasonal and random variability, is going to be ineffective. For example, if a common funnel in the company has multiple teams trying to improve its conversion, each team needs to define a KPI that does not overlap the others, or they won't know whether an outcome resulted from their own efforts or another team's. Note that if all those teams are in the same group (e.g. a growth org), then that group could effectively use the conversion rate as their KPI. When in doubt, or if you're unable to isolate your efforts with lower level metrics, run an A/B test against every major change by each team to get a better (but imperfect) indication of relative contribution. This criterion covers many grey areas as well. Revenue is a prototypically difficult KPI for individual teams to use because of attribution. However, you can find relatively small teams or groups that build add-on products that are directly monetized, and expansion revenue can be an excellent KPI for them (e.g. a payroll add-on to an accounting system).
  4.  Unclear tie to next level's KPI: companies are concentric circles of strategy, with each division, group, and team needing to fit its plans and focus into those of the level above. This includes KPIs, where you'd expect a well-modeled connection between lower level KPIs driving higher level ones. For example, say a SaaS invoicing platform sets an X-in-Y goal as an activation hurdle to predict long-term retained users (i.e. 2 invoices sent in the first 30 days). It would be reasonable to assume that onboarding will heavily influence this. But what about onboarding, specifically, will matter? If a team concocts a metric around how many settings are altered in the first 7 days (e.g. chose a template, added a logo, set automatic payment reminders) and wants to use that as their KPI, they'd need to have analyzed and modeled whether that matters at all to new users sending their first 2 invoices.
  5.  Lagging metrics at low levels: the closer you get down to a team level, the more you want to see KPIs defined by metrics that are leading indicators of success and can be measured without long time delays. Bad KPIs are ones that just can't be measured fast enough for a team to learn and take action. For example, many teams will work to increase retention in a company. But larger customers in SaaS may be on annual contracts. If features are being built to influence retention, it's better to find leading activity and usage metrics at the team level to drive behaviour and measure them weekly or monthly. These can tie into a higher level retention KPI for a group or division, and keep teams from getting nasty delayed surprises if their efforts weren't destined to be fruitful. The only caveat for this criterion is how platform and infrastructure teams measure themselves. Their KPIs are typically more lagging, and this topic is deserving of its own write-up.
  6.  Compound or aggregate metrics: these are made up of multiple individual metrics that are combined using a formula in order to provide a more comprehensive view of the success of a product without needing to analyze many individual numbers. Examples include effectiveness scores, likelihood indicators, and satisfaction measures. Arguably, many high level KPIs behave this way, such as revenue and active users, which is why (3) above is important to keep in mind. However, it's the formulas that are particularly worrisome. They inject bias through how they're defined, which is hard for stakeholders to remember over time. You find yourself looking at a score that's gone down 5% QoQ and asking a series of questions to understand why. Then you realize it would have been simpler to look at the individual metrics to begin with. In my experience, these KPIs lead to more harm than good.
  7.  Lacking health metrics or tripwires: having a KPI is important, but having it in isolation is dangerous. It's rare for lower level metrics to be improved without the possibility of doing harm elsewhere. For example, in commerce, we can make UX changes that increase the likelihood of conversion but decrease average order value or propensity for repeat purchases. Therefore, a KPI that does not consider tripwires or does not get paired with health metrics is waving a caution flag.
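
The third point recommends A/B testing when attribution is murky. As a minimal sketch of what that check might look like (the experiment numbers are hypothetical, not from the answer), here is a standard two-proportion z-test on conversion counts, using only the standard library:

```python
from math import sqrt, erfc

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (a) and treatment (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                 # two-sided p-value
    return z, p_value

# Hypothetical experiment: 5.0% vs 5.6% conversion, 20k users per arm.
z, p = two_proportion_z(1_000, 20_000, 1_120, 20_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.68, p ≈ 0.0074
```

A significant result here attributes the lift to the team's change rather than to a neighbouring team's work or seasonal noise, which is the whole point of the attribution criterion.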
Veronica Hudson
ActiveCampaign Senior Director of Product Management · November 14

Maybe this is controversial, but I believe KPIs should be hypothesis-oriented, with an eye towards learning what is and is not working to move the needle on an overarching business objective, rather than just a random number we think might mean... something.

Let me give an example. Say we are launching an update that we believe will increase adoption of a feature most closely understood to drive retention (i.e. customers that adopt this tool tend to stay with us longer and spend more money).

I would not want to make my KPI "80% of customers use the new update." Why? Because I have no idea if that number is attainable. It also does not indicate whether we are moving the needle on adoption of the feature this update is meant to serve.

Rather, I would make my KPI hypothesis-driven and track the launch of the update against adoption of the existing feature. If we see adoption increasing, our hypothesis was correct. It doesn't necessarily matter early on by how much adoption is increasing, just that our hypothesis seems to be on the right track. From there we have lots of options:

  • We can look to iterate on the update based on customer feedback

  • We can call more attention to it via in-app messaging

  • We can better incorporate it into the onboarding flow

  • We can work with PMM to call more attention to it in our docs and marketing announcements

Being hypothesis-oriented in KPI measurement ensures you are constantly learning from your product launches vs setting a number, achieving it and then moving on to the next thing.
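
A minimal sketch of that hypothesis check, with hypothetical weekly adoption shares: the KPI is the direction of the trend after launch, not a preset absolute target.

```python
# Hypothetical share of active accounts using the underlying feature,
# four weeks before and four weeks after the update launched.
before = [0.18, 0.18, 0.19, 0.21]
after = [0.22, 0.23, 0.25, 0.26]

avg = lambda xs: sum(xs) / len(xs)
lift = avg(after) - avg(before)

# The hypothesis holds if adoption trends up post-launch; the exact
# magnitude matters less early on than the direction.
print(f"adoption before: {avg(before):.1%}, after: {avg(after):.1%}, lift: {lift:+.1%}")
# adoption before: 19.0%, after: 24.0%, lift: +5.0%
```
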

Sailaja Kalle
Gainsight Director, Product Management · January 10

There are two sets of KPIs that can be the "worst" metrics:

  1. The set that we cannot commit to.

  2. The set that we can commit to and track tasks completed for, but without measuring outcomes.

We need KPIs that cover the whole ground: the product and how customers are receiving it, and the financial goals of the organisation.
