How do you think about shared KPIs with your engineering team? And which ones do product teams often miss?
In my opinion, product teams consist of product, engineering, and design (at a minimum). That said, product KPIs should always be shared with engineering, since what engineers build directly impacts the KPIs of the product in question. All the work a product team does should ladder back up to the overall company/business-unit objectives (even if those metrics are more technical).
Product, engineering (and even design!) should ensure that the majority of the user's experience is measured (engagement, conversion), that the platform is functional (speed, reliability), and that the company's key metrics are preserved.
A big miss that comes up between product and engineering is when there is confusion around a product experience.
Product Perspective: "This is not working as expected. This is a bug"
Engineering: "This is what I was asked to build. It's working as specified"
This will happen from time to time depending on how mocks, specifications, or flows are interpreted. The best KPI here is to ensure that all user stories are covered by the experience, that the experience is fully tracked (to catch bugs or unintuitive experiences), and that all members of the team test the experience. Shared ownership of the customer experience saves people from the blame game and instead focuses everyone on how things should work, without judgement.
- Shared KPIs between engineering and product management are a great way to build high-quality, scalable products that are delivered on time and delight customers. They also build camaraderie and encourage teamwork toward a common goal.
- There are different ways of achieving this: PMs and engineering can jointly own certain KPIs, while each team also owns some of its own. Not all KPIs have to be shared.
- Generally, PMs own company, business, acquisition, user-engagement, and user-satisfaction KPIs. Examples: MRR, churn rate, number of users, DAU/WAU/MAU, number of sessions, session duration, NPS or CSAT.
- Engineering may own product-development and product-quality KPIs. Examples: on-time delivery, cost incurred, team velocity, defect rate, and support ticket count.
- So although the two sets are distinct, some common KPIs can be devised to ensure that delivery and quality goals are met. It really depends on the specific needs of the business and the outcomes desired. Examples: on-time delivery, scalability and reliability of systems, support ticket count, and user engagement/satisfaction metrics like sessions and NPS/CSAT.
- One KPI product teams often miss is cohort retention rate. It is important to track not only overall retention but also retention for the specific cohorts of customers you care about. Not doing this can give you a skewed view of your business.
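Cohort retention is straightforward to compute once you have an activity log. Here is a minimal Python sketch; the event list, field names, and months are made up for illustration, not taken from the answer above:

```python
from collections import defaultdict

# Hypothetical activity log: (user_id, signup_month, active_month).
events = [
    ("u1", "2024-01", "2024-01"), ("u1", "2024-01", "2024-02"),
    ("u2", "2024-01", "2024-01"),
    ("u3", "2024-02", "2024-02"), ("u3", "2024-02", "2024-03"),
]

def cohort_retention(events):
    """Return {signup_month: {active_month: fraction of the cohort active}}."""
    cohort_users = defaultdict(set)   # signup_month -> everyone in that cohort
    active = defaultdict(set)         # (signup_month, active_month) -> active users
    for user, signup, month in events:
        cohort_users[signup].add(user)
        active[(signup, month)].add(user)
    return {
        signup: {
            month: len(users) / len(cohort_users[signup])
            for (s, month), users in active.items()
            if s == signup
        }
        for signup in cohort_users
    }

retention = cohort_retention(events)
# January cohort: 2 sign-ups, 1 still active in February -> 50% retained
```

Slicing retention by signup cohort like this is what surfaces the skew the answer warns about: a healthy overall number can hide a recent cohort that is churning fast.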
- For KPIs shared between product and engineering, I would say "effective resource utilization" is often missed, primarily because it can be hard to track and measure across projects/teams.
Engineering and product are two sides of a smooth-running R&D engine. I have seen a number of ways to split accountability across these groups:
- Say/do ratio - the percentage of items committed to an iteration that are actually delivered
- Merge request rate - a throughput measure that encourages shipping small and fast
- Cycle time - the time from ideation to production, or the time spent in any given workflow status
These are my three favorite metrics for ensuring you are delivering the right things. Some metrics that don't work as well:
- Number of tickets closed in a time period
- Number of items in a release notes/change log
Both of these create perverse incentives: people add issues just to close them, or pad release notes with irrelevant items to get more credit for delivery.
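The say/do ratio and cycle time mentioned above reduce to simple arithmetic. A hedged sketch; the function names, ticket IDs, and dates are my own illustrations, not part of the answer:

```python
from datetime import datetime

def say_do_ratio(committed, delivered):
    """Fraction of items committed for an iteration that actually shipped."""
    if not committed:
        return 0.0
    return len(set(committed) & set(delivered)) / len(committed)

def cycle_time_days(started_at, deployed_at):
    """Calendar days from the start of work (or ideation) to production."""
    return (deployed_at - started_at).total_seconds() / 86_400

# Committed to 4 tickets, delivered 3 of them -> 0.75 say/do ratio
ratio = say_do_ratio(["PM-1", "PM-2", "PM-3", "PM-4"], ["PM-1", "PM-2", "PM-4"])

# Work started March 1, deployed March 8 -> 7-day cycle time
days = cycle_time_days(datetime(2024, 3, 1), datetime(2024, 3, 8))
```

Note that the intersection in `say_do_ratio` deliberately ignores extra unplanned items that shipped, which avoids the ticket-padding incentive described above.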
I'm a huge fan of all success metrics and OKRs (objectives and key results) being shared across the core cross-functional working group.
Of course, there will always be some that don't match up; I'm thinking of the SLA, uptime, and latency-type KPIs that your engineering team tracks. But by treating them as shared and buying in on them, you'll much better understand how engineering resources are deployed and how best to support that team.
All cross functional teams are critical, but the engineering team is the most critical :)
Work with your engineering partners to change the definition of done from 'development complete' to 'customer and/or business goals are met'.
An important ingredient is KPIs that measure first month/first quarter product launch impact. Oftentimes, this will require product and engineering teams to think through leading and lagging KPIs.
A good example: enterprise sales cycles can run 5-6+ months, so lagging KPIs like customer logos, enterprise customer adoption and retention, and customer case studies will take multiple quarters to materialize after a product launch. Use leading indicators like the number of sales pitches/product demos and opportunities created to measure short-term launch impact, and create shared goals with your engineering teams.
Bringing engineering into KPI discussions is a must for any Product Org, especially when we are considering key B2B business metrics like churn and retention. I think product managers can often get caught in the trap of only considering which new features are going to drive the business forward and excite new customers, but no business will survive on new customers alone. How are you keeping your current customers happy and satisfied with the product? This is where some important engineering-driven KPIs come in:
Uptime: Understanding uptime and the reliability of your product is a key component of determining whether your customers will be satisfied. If you experience frequent outages or incidents, customers will run out of patience and start looking for other solutions.
Load times: How responsive are our pages and how quickly do they load? This is especially relevant in a B2B context where customers are potentially wasting money or losing deals with every extra second they have to wait to perform a task in your platform.
Reliability: While this may sound similar to uptime, reliability needs to be measured in the context of the feature you own with engineering. For example, with an ESP, what % of time do my email sends complete within our SLA?
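Both uptime and SLA-style reliability are simple ratios over a measurement window. A minimal sketch; the downtime figure and the five-second SLA threshold are illustrative assumptions, not numbers from the answer:

```python
def uptime_pct(total_minutes, downtime_minutes):
    """Uptime as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def sla_attainment(durations_sec, sla_sec):
    """Fraction of operations (e.g. email sends) completing within the SLA."""
    if not durations_sec:
        return 1.0
    return sum(d <= sla_sec for d in durations_sec) / len(durations_sec)

# A 30-day month has 43,200 minutes; ~43 minutes of downtime is ~99.9% uptime
month_uptime = uptime_pct(43_200, 43)

# Three of four sends finished within a (hypothetical) 5-second SLA -> 0.75
attainment = sla_attainment([1.2, 3.4, 4.9, 12.0], 5.0)
```

The second function is the "% of sends completing within our SLA" measure from the ESP example, computed per feature rather than platform-wide, which is exactly the distinction the answer draws between reliability and uptime.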
Defects or Support Escalations: We strive to keep our total number of defects below a certain threshold. Our hypothesis is that fewer defects indicate a more reliable product and, likely, more satisfied customers who feel heard when they take the time to let us know something is wrong with the product.