AMA: Triple Whale 🐳 Director of Product Management, Kellet Atkinson, on Product Management KPIs
November 19 @ 10:00AM PST
Triple Whale 🐳 Director of Product Management • November 19
I'm not sure there's a good, one-size-fits-all answer for this question. OKRs (objectives and key results) are meant to drive behavior change and create focus for your team. I think most teams that use OKRs intend to use them in this way, but fall into a few common traps:

* Using OKRs as a management tool to drive outputs instead of outcomes
  * If you're like me, at some point you might have been assigned an OKR without measurable "key results"
* Creating too many OKRs, which completely undercuts the purpose of OKRs in the first place
* Setting OKRs in a vacuum (detached from business strategy)
* Setting goals for results that you don't currently have a good way to measure and track
* Being under-ambitious or overly ambitious
  * In the classical framework, OKRs are meant to be stretch goals, but if they are clearly unachievable, it undercuts the value of setting OKRs right out of the gate

So what makes a good product OKR? Generally speaking, I think there are a few good rules of thumb:

1. Create a clear line of sight
   * Each OKR should be connected directly to the company strategy
   * It must be measurable (by definition) and time-bound
   * This is an important callout - it's easy to set aspirational goals, but if you can't actually measure the key result, it is a bad OKR
2. Focus on outcomes instead of outputs
   * Bad OKR: "Release feature X"
   * Good OKR: "Increase feature adoption by X%"
3. Balance multiple horizons / measure across multiple dimensions
   * A good OKR might include results across dimensions like customer impact (NPS, adoption), business value (revenue, market share), and/or product health (performance, quality)
4. Be ambitious, but attainable
   * An OKR that will be attained without doing anything doesn't change behavior
   * An OKR with unrealistic results will deflate your team

Like I said above, there is no one-size-fits-all solution to good product OKRs, but here are some examples of good OKRs for different product scenarios:

Objective: Successfully establish our new analytics product in the mid-market
Key Results:
* Achieve $500K ARR from the mid-market segment
* Maintain 85% user retention in the first 90 days after launch
* Reach an average setup time under 30 minutes for new customers

Objective: Become the central hub for our customers' workflows
Key Results:
* Increase the number of active integrations per customer from 2 to 4
* Grow adoption of our API from 5% to 15%
* Reduce the average time to integrate new tools from 2 days to 4 hours

Objective: Transform our product into a self-serve solution
Key Results:
* Reduce customer support tickets per user by 50%
* Increase users who complete onboarding without assistance from 40% to 80%
* Maintain a CSAT score of 8/10 throughout
Triple Whale 🐳 Director of Product Management • November 19
Generally, the most logical approach is to divide up KPIs according to the user journey, and reserve a few top-level KPIs as shared KPIs. For instance, a generic user journey looks something like: Awareness of the feature -> Understanding its value/use-case -> First Use of the feature -> Repeated Use -> Advocacy. In this case, Product Marketing would own Awareness and Understanding, Product would own First Use and Repeated Use, and you may both own Advocacy. Breaking that down further, it might look something like this:

* Top-level KPIs (shared):
  * Feature adoption rate
  * Feature-driven revenue
  * Feature impact on NPS
  * User feedback / sentiment
* Product Marketing KPIs:
  * Email campaign engagement rates
  * Landing page visits
  * Documentation/guide views
  * Feature announcement engagement
* Product Management KPIs:
  * First-time feature usage rate
  * Time to first use
  * Repeat usage / feature retention rate
  * Technical performance metrics

The key here is that there are some very clear boundaries for ownership, but also to remember that you are a team with the same goal. If you are launching a new feature or just trying to drive adoption of an existing feature, it's important to spend time establishing the shared and individual goals to make sure everyone is pushing in the same direction. (A rough sketch of how a couple of these funnel KPIs might be computed appears below.)
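To make "feature adoption rate" and "time to first use" concrete, here's a minimal sketch in Python. The event schema (`user_id`, `event`, `timestamp`) and the adoption definition (share of active users who used the feature at least once) are illustrative assumptions, not a prescribed setup:

```python
from datetime import datetime

# Hypothetical event log: each row is (user_id, event, timestamp).
events = [
    ("u1", "login",        datetime(2024, 11, 1)),
    ("u1", "feature_used", datetime(2024, 11, 3)),
    ("u2", "login",        datetime(2024, 11, 2)),
    ("u3", "login",        datetime(2024, 11, 2)),
    ("u3", "feature_used", datetime(2024, 11, 2)),
]

active_users = {user for user, event, _ in events}
adopters = {user for user, event, _ in events if event == "feature_used"}

# Shared KPI: % of active users who used the feature at least once.
adoption_rate = len(adopters) / len(active_users)

# Product KPI: days from first activity to first feature use, per adopter.
first_seen = {}
first_use = {}
for user, event, ts in events:
    first_seen[user] = min(first_seen.get(user, ts), ts)
    if event == "feature_used":
        first_use[user] = min(first_use.get(user, ts), ts)

time_to_first_use = {
    user: (first_use[user] - first_seen[user]).days for user in adopters
}

print(f"Adoption rate: {adoption_rate:.0%}")           # 67%
print(f"Time to first use (days): {time_to_first_use}")  # {'u1': 2, 'u3': 0}
```

The same event log feeds both teams' KPIs, which is part of why shared top-level metrics work well here.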
Triple Whale 🐳 Director of Product Management • November 19
If you want to create metrics to hold a product team accountable, there are a few things to keep in mind up front:

* If the goal is accountability, the person accountable should feel ownership - so the process should be collaborative with whoever the accountable party will be.
* You can't be accountable for things outside of your control. Any metrics you land on should be within the control (or influence) of the PM/team.
* There are some things to keep at the forefront of your mind when choosing your metrics:
  * Time scale - Is this measurable in a timeframe that allows for meaningful iteration?
  * Measurability - Do we have the tools and data to track this reliably?
  * Signal vs. noise - Can we isolate the impact of product changes from other variables?
  * Leading vs. lagging indicators - Do we have early signals that predict long-term success?

To figure out the right metrics, I follow this process:

1. Start with a "backwards driver" exercise: take company objectives and work backward to identify the key contributing user behaviors that drive those outcomes (a worked version of this arithmetic appears below)
   * Company goal is $100M ARR → Need 10K paying customers → Requires 30% conversion from free tier → Focus on activation metrics like "% of new users who complete core workflow within 7 days"
2. Map the product team's sphere of influence - what contributing user behaviors can they directly impact through product changes?
   * The team can impact user activation and retention through onboarding improvements, but not initial acquisition costs. So "Cost per Lead" wouldn't be appropriate, but "14-day retention rate" would be.
3. Try your best to define a balanced scorecard across three areas:
   * Usage metrics (e.g., weekly active users, feature adoption rate)
   * Business metrics (e.g., revenue per user, upgrade rate)
   * Quality metrics (e.g., error rate, bug resolution time)
4. Establish clear measurement windows and a review cadence that matches your development cycle
   * Daily metrics review for quality issues, weekly for usage patterns, monthly for business impact, quarterly for strategic metrics.
   * If you can, match release cycles - if you ship biweekly, review impact metrics 2-4 weeks after each release.

Ideally, the goal is to have 2-3 core metrics that tell you whether the product team is moving the needle on user and business value while maintaining quality. Keep in mind, you may find after one cycle that a metric is too hard to move meaningfully in a short cycle, or that it is too confounded by external factors outside of product's control. If so, it's time to review the metric and redo the exercise.
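As a rough illustration of the "backwards driver" arithmetic in step 1, here's a minimal sketch. The $100M ARR goal and the 30% conversion rate come from the example above; the $10K average contract value is the value those numbers imply, not a real figure:

```python
# Hypothetical "backwards driver" math: work from the company goal
# back to the volumes the product team's activation metric must support.
target_arr = 100_000_000        # company goal from the example: $100M ARR
avg_contract_value = 10_000     # assumption: implied revenue per paying customer

paying_customers_needed = target_arr / avg_contract_value   # 10,000

free_to_paid_conversion = 0.30  # from the example: 30% free-tier conversion
free_signups_needed = paying_customers_needed / free_to_paid_conversion

print(f"Paying customers needed: {paying_customers_needed:,.0f}")  # 10,000
print(f"Free signups needed:     {free_signups_needed:,.0f}")      # 33,333
```

Once you can see the funnel volumes, the activation metric ("% of new users who complete the core workflow within 7 days") becomes the lever the product team actually owns.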
How do you define and set SLAs with engineers?
I'm struggling to define an acceptable checkout error rate for our e-commerce platform. We're currently at 1.5%. Personally, I think that's too high, but I have nothing to substantiate my opinion.
Triple Whale 🐳 Director of Product Management • November 19
This is fundamentally a question about driving technical quality improvements through data-driven decision making. Let me break down the approach.

First, let's clarify terminology: what you're looking to establish is an SLO (Service Level Objective), not an SLA. SLOs are internal targets for service quality, which is exactly what we need here.

To make the case for a checkout success rate SLO:

1. Ground your argument in data
   * Start with industry benchmarks: a quick Google search tells me that eCommerce checkout success rates typically range from 98-99.5%
     * You can do deeper research here and probably find more relevant data for your specific vertical
     * I wouldn't rely too heavily on benchmarks - they are aggregates, after all, and every business is different - but they give you a good litmus test for your intuition
   * Do your best to calculate business impact with some back-of-napkin math (or get as precise as you can if it helps build your case; see the sketch below):
     * Direct Revenue Loss = Checkout Attempts × Average Order Value × Error Rate
     * Indirect Loss = Failed Checkouts × % Customer Loss × Lifetime Value
     * This gives you a clear dollar-value impact per 0.1% improvement
2. Build engineering partnership
   * Before you pick a target and chuck it over the fence, work with engineering to understand the technical problems and constraints
   * In this case, I might start with error logging and categorization
   * Break down the current 1.5% error rate by type:
     * What portion is under your control (e.g., validation errors)?
     * What's external (e.g., payment processor issues)?
   * Set targeted SLOs for what you can control
     * Example: "Reduce validation-related checkout errors from 0.5% to 0.2%"
3. Propose a phased implementation approach that lets the team tackle it incrementally
   * Phase 1: Add detailed error tracking
   * Phase 2: Set baseline SLOs for controllable errors
   * Phase 3: Implement monitoring and regular review cycles
   * Phase 4: Iterate targets based on learnings

The key is to focus on what you can measure, what you can control, and what delivers clear business value. This transforms the conversation from "I think 1.5% is too high" to "Here's the impact of each 0.1% improvement, and here's how we can get there together."
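Here's a minimal sketch of that back-of-napkin math in Python. Only the 1.5% error rate comes from the question; the checkout volume, average order value, churn-after-failure rate, and lifetime value are hypothetical placeholders you'd replace with your own numbers:

```python
# Back-of-napkin impact of the checkout error rate, per the formulas above.
checkout_attempts = 200_000   # assumed monthly checkout attempts
avg_order_value = 80.0        # assumed average order value ($)
error_rate = 0.015            # current rate from the question: 1.5%
pct_customer_loss = 0.20      # assumption: 20% of failed checkouts churn
lifetime_value = 400.0        # assumed customer lifetime value ($)

failed_checkouts = checkout_attempts * error_rate
direct_loss = checkout_attempts * avg_order_value * error_rate
indirect_loss = failed_checkouts * pct_customer_loss * lifetime_value

# Dollar value of each 0.1% (absolute) improvement in the error rate:
attempts_per_point = checkout_attempts * 0.001
value_per_improvement = (attempts_per_point * avg_order_value
                         + attempts_per_point * pct_customer_loss * lifetime_value)

print(f"Failed checkouts/month: {failed_checkouts:,.0f}")   # 3,000
print(f"Direct revenue loss:    ${direct_loss:,.0f}")       # $240,000
print(f"Indirect (LTV) loss:    ${indirect_loss:,.0f}")     # $240,000
print(f"Each 0.1% improvement:  ${value_per_improvement:,.0f}/month")  # $32,000
```

Even with rough placeholder inputs, framing the ask as "each 0.1% is worth $X/month" gives engineering a concrete reason to prioritize the work.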
Triple Whale 🐳 Director of Product Management • November 19
This is a challenge all organizations face as they scale. Most of my experience is in startups, and I know first-hand how hard it is to transition from instinct to data-driven decisions. It's not something you can just flip a switch on - it takes a very intentional mindset and at least one person to really champion it. It sounds like you already have the crucial first ingredient: the desire to be more data-driven. In my experience, there are a few critical steps to start making this transition. They will take time and commitment from you, and it's OK not to get each one perfect initially:

1. Get leadership alignment on core business objectives. Without clear business objectives, KPIs are just noise. Get alignment on objectives like growth, retention, or revenue. If you can't get direct alignment, make educated guesses based on company context and strategy. As you move into defining KPIs, one trap to avoid is focusing on KPIs that are out of your team's control - there's no faster way to get the team to give up on KPIs.
2. Invest in good data infrastructure. You'll need the right tools to measure what matters: a product analytics platform, an accessible database, and either a BI tool or well-structured spreadsheets to start. Without measurement capability, KPIs remain arbitrary numbers.
3. Develop team rituals around KPIs. Build regular touchpoints: weekly metrics reviews, monthly deep-dives, and quarterly goal reviews. These may feel awkward initially - that's normal. Keep iterating until they become valuable.
4. Change decision-making processes. Whether you're in leadership or not, incorporate data into feature proposals and planning. Set clear success metrics before new initiatives. Make data an expected part of the conversation.
5. Invest in your team's capabilities. Your team will have varying comfort levels with data. Lead by example, share resources, and celebrate data-driven wins. Conduct team trainings or provide access to learning resources. Build internal documentation to support the journey.

Remember, this is a gradual transformation. Focus on steady progress and celebrate the wins when data-driven decisions pay off (build positive reinforcement). Success comes from making metrics a natural part of your team's workflow, not a forced addition.
Triple Whale 🐳 Director of Product Management • November 19
Every team is a little different, but based on my experience, this is how things typically get divided up:

Product:
* Responsibilities:
  * Feature specs / use-case definition (hopefully most of this is already covered in your PRD!)
  * Technical readiness
  * Beta testing coordination (in an ideal world, your beta testing yields some real customer results that can be leveraged by marketing in the initial marketing push)
  * Success metrics definition
  * Internal enablement (Sales, Support) - depending on the product/feature, this usually looks like one or more internal training sessions
  * Post-launch analytics tracking
* KPIs: Lower-funnel (first use, repeat use, feature adoption, success rate, satisfaction)

Product Marketing:
* Responsibilities:
  * Positioning and messaging
  * Go-to-market strategy
  * Customer-facing assets (depending on the feature: case studies, marketing landing pages, blogs, videos, etc.)
  * Launch comms / campaigns
  * Sales collateral
* KPIs: Upper-funnel (campaign engagement, landing page/documentation views, customer interest like demos booked or trials initiated)

Depending on your org, the rest are likely shared or could fall on either team:
* Launch timing and roadmap
* Customer research/feedback
* Value proposition
* Documentation
* Launch retrospective

Obviously the size and scope of the release will dictate which of these elements are warranted. Smaller teams may have more bleed-over between responsibilities and KPIs, but the important thing is to be clear well in advance about who will be accountable for what, work together to define what success looks like, and work together after the launch to assess how well you executed as a team against your goals. Great teams also use each launch as an opportunity to learn what could have gone better (even when it's successful) and iterate in the next launch cycle.
Triple Whale 🐳 Director of Product Management • November 19
When thinking about KPIs, it starts with understanding your customers and the value you expect to create with your product:

1. Think, first, in terms of outcomes
   * Outcomes = what does success look like for your users and your business?
   * A good way to imagine user outcomes is to ask, "What would change in our users' lives if this is successful?"
   * This lays the foundation for working backwards towards useful metrics/KPIs
2. When you think you've found a good metric, ask yourself "So what?"
   * Our daily active users increased -> "So what?"
   * More people are logging in -> "So what?"
   * If you can't find a good answer to "So what?" that connects to actual value for your users or business, then your KPI is a proxy for something else, or it's just a bad KPI
3. Pair metrics
   * KPIs don't exist in a vacuum. Sometimes it takes a combination of metrics/KPIs to get the full picture
   * For instance, "time in app" alone could be a misleading metric. Someone could be spending a lot more time because they are confused or stuck. But if you pair it with another metric, you have a better triangulation on the real outcome.
   * For a task management app, you could combine "time in app" + "tasks completed on time" to understand whether the time being spent in the app is improving the desired customer outcome (a rough sketch of this pairing follows below)
4. When possible, take the time to validate your KPIs with qualitative feedback
   * A good PM talks to customers: validate your KPIs with user interviews and real user feedback
   * If your metrics show success but your qualitative feedback suggests a different story, it might be time to revisit your KPIs

Whenever possible, you should be tracking KPIs that indicate user/business value. That said, it is not always straightforward to track a direct outcome, so if you do find yourself needing a "proxy" metric, follow some of the steps above to at least confirm that it is a good proxy.
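As a rough illustration of the metric-pairing idea in point 3, here's a minimal sketch for the task management example. The per-user data and both thresholds are hypothetical; the point is that the pair separates "engaged and productive" from "stuck":

```python
# Hypothetical per-user weekly data for a task management app:
# (minutes in app, fraction of tasks completed on time).
users = {
    "u1": (240, 0.90),  # heavy use, tasks on time -> engaged
    "u2": (250, 0.30),  # heavy use, tasks late    -> likely stuck or confused
    "u3": (45,  0.85),  # light use, tasks on time -> efficient
}

HIGH_TIME = 120    # assumed threshold: minutes/week counted as "heavy use"
ON_TIME_OK = 0.70  # assumed threshold for healthy on-time completion

for user, (minutes, on_time) in users.items():
    if minutes >= HIGH_TIME and on_time < ON_TIME_OK:
        signal = "investigate: high time in app but poor outcomes"
    elif on_time >= ON_TIME_OK:
        signal = "healthy: time spent is producing outcomes"
    else:
        signal = "low engagement and poor outcomes"
    print(user, signal)
```

Looking at either metric alone, u2 appears to be your most engaged user; it's only the pairing that flags them as someone who may be struggling.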