What are some of the *worst* KPIs for Product Managers to commit to achieving?
Since the question suggests “achievements”, I assume we want to reason about Key Results and not KPIs. See question: “What are good OKRs for product management?” for a general introduction to the topic.
Good Key Results are hard to come by, and we easily fall into the trap of bad ones. Here are the common troublemaker patterns.
The binary metric: Deliver [feature] by [date].
This metric is useless for understanding progress: it often sits at 0% for most of the execution period, then jumps to 100% at the last minute before the target date. You can work around the problem by breaking the feature down into relatively predictable phases or sub-deliveries of ideally equal size. Each completed phase can then be reported as a percentage step towards completion.
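The phase-based workaround above can be sketched as a tiny calculation. The phase names and equal weighting here are illustrative assumptions, not something prescribed by the pattern itself:

```python
# Hypothetical sketch: turning a binary "deliver [feature] by [date]" KR into a
# progress percentage by breaking the feature into phases of roughly equal size.
# Phase names below are made up for illustration.

def completion_percentage(phases: dict) -> float:
    """Each completed phase contributes an equal share of progress."""
    if not phases:
        return 0.0
    done = sum(1 for completed in phases.values() if completed)
    return 100.0 * done / len(phases)

phases = {
    "design spec approved": True,
    "backend API ready": True,
    "UI implemented": False,
    "rolled out to all users": False,
}
print(completion_percentage(phases))  # 50.0
```

If the phases are not of comparable size, weighted shares per phase would be the natural extension.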
The opinion-based metric: Reach market fit by [date].
This usually looks like a percentage that depends on a leader’s best guess of progress towards a perceived state. This is a very bad situation because there is no way to realistically challenge that perception, which defeats the purpose of a Key Result: raising a flag when execution goes off the rails. It also disempowers teams and can lead to dramatic disengagement or misalignment. A workaround is to find a proxy metric that indirectly represents progress towards the target. In the example above, it could be a Net Promoter Score (NPS) or measurable adoption progress within a customer segment.
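As a concrete anchor for the NPS proxy suggested above, here is the standard NPS formula (promoters score 9-10, detractors score 0-6, on a 0-10 scale); the survey responses are invented for illustration:

```python
# Standard Net Promoter Score formula, used here as an example proxy metric:
# NPS = % promoters (9-10) minus % detractors (0-6).

def nps(scores: list) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey = [10, 9, 9, 8, 7, 6, 3, 10]  # made-up responses on a 0-10 scale
print(nps(survey))  # 25.0
```

Unlike a leader's gut-feel percentage, this number can be re-measured and challenged by anyone with access to the survey data.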
The vanity metric: Reach [X] monthly active users by [date].
A vanity metric is usually a number that visibly goes up or down but does not correlate with meaningful business outcomes. This is often what happens when KPIs (lagging metrics) are confused with Key Results (leading metrics). The example above might be fine if the turnaround time during an execution period is so short that we can immediately observe a market result at each reporting milestone (think of a marketing campaign with immediately observable results). But more often, the KPI we look at represents the results of actions taken several execution periods in the past (think of delivering a feature in Q1 that impacts numbers in Q3). In that situation, the KR does not help track progress, may not even be directly attributable to specific actions, and the execution team may already be committed to new goals by the time you observe a need for corrective action. There is no workaround: such a metric is not a useful Key Result. That said, it can be useful as an input to strategic planning (macro market trends and intelligence).
The (nearly) impossible-to-update number: Customers save [amount] by using [product feature].
Of course, this example might work in your case; it’s hard to find an absolute example that works for every product context. But I chose it to point at a common challenge: picking a data point we cannot observe directly, and whose collection requires a third-party contribution or a significant effort like deep market research. In that situation, we often get an update very early on, when we first commit to the OKR, and then, as time goes on, we find many good and bad excuses to skip updating it. At that point, leadership and stakeholders lose interest in the OKR updates and request new data points, forcing you to abandon the Key Result you initially wanted to track.
Too many OKRs: "We're so data-driven we have more Key Results than employees!"
One last important rule to remember: keep the number of OKRs under control! Too many data points muddy the reporting and waste data-collection resources without significantly improving execution. OKRs should be chosen by how meaningfully they represent progress towards the business outcomes described in your approved business case. A good rule of thumb is at most 3 Objectives per organization, with 2-3 Key Results per Objective. It is fine, and often useful, to create cascading OKRs depending on the size of the organization.
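The rule of thumb above (at most 3 Objectives, 2-3 Key Results each) is mechanical enough to sanity-check automatically. This is a minimal sketch; the data structure and objective names are illustrative assumptions:

```python
# Hypothetical sketch: flag OKR sets that break the rule of thumb of
# at most 3 Objectives, each with 2-3 Key Results.

def validate_okrs(okrs: dict) -> list:
    """Return a list of warnings; an empty list means the set looks healthy."""
    warnings = []
    if len(okrs) > 3:
        warnings.append(f"{len(okrs)} objectives; keep it to 3 or fewer")
    for objective, key_results in okrs.items():
        if not 2 <= len(key_results) <= 3:
            warnings.append(
                f"'{objective}' has {len(key_results)} key results; aim for 2-3"
            )
    return warnings

okrs = {  # made-up example set
    "Grow self-serve revenue": [
        "Increase trial-to-paid conversion to 8%",
        "Reduce monthly churn to 2%",
    ],
    "Improve onboarding": ["Cut time-to-first-value to 10 minutes"],
}
print(validate_okrs(okrs))  # one warning: the second objective has only 1 KR
```

In a cascading setup, the same check could be applied per team rather than once for the whole organization.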
See question: “How do you approach setting crisp KPIs and targets for Engine features and linking them to your topline metrics?” for my step by step process for realistic OKRs.