Mani Fazeli

AMA: Shopify Director of Product, Checkout & B2B, Mani Fazeli on Product Management KPIs

December 14 @ 10:00AM PST
How do you define and set SLAs with engineers?
I'm struggling to define an acceptable checkout error rate for our e-commerce platform. We're currently at 1.5%, which I personally think is too high, but I have nothing to substantiate my opinion.
Mani Fazeli
Shopify Director of Product · December 14
Service Level Agreements (SLAs) are driven by three factors: (1) industry-standard expectations from customers, (2) differentiating your product when marketing, and (3) direct correlation with improving KPIs. For checkout, you'll have uptime as an industry standard, but it's insufficient because subsystems of a checkout can malfunction without the checkout process outright failing. You could consider latency or throughput as market differentiators, which would require instrumentation on APIs and client response. Payment failures or shipping calculation failures directly impact conversion rates and erode trust (hurting repeat buying), which are likely KPIs you care about. So your SLAs need to be a combination of measures that account for all of the above, and your engineering counterparts have to see the evidence that these matter in conjunction.

Of the three types, the one that's most difficult to compare objectively is the third. In your question, you mention a 1.5% error rate. You could go on a hunt for evidence that convinces your engineering counterparts that it's elevated vs. the competition, or that it's hurting the business. What's more likely to succeed is running A/B tests that attempt to improve error rates and demonstrating a direct correlation with improving a KPI you care about. That's a more timeboxed exercise, and with evidence, you can change hearts and minds. That's what can lead to more rigorous setting of SLAs and investment in rituals to uphold them.
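To make that concrete, here's a minimal sketch of the kind of analysis I mean: a two-proportion z-test on an A/B test where the treatment arm ships an error-rate fix. All counts are illustrative assumptions, not real checkout data.

```python
# Sketch: testing whether reducing checkout errors moves conversion.
# Assumes an A/B test where the treatment arm includes an error-rate
# fix; the counts below are illustrative placeholders.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Control: current 1.5% error rate; treatment: errors reduced by a fix.
p_a, p_b, z, p = two_proportion_z_test(conv_a=41_200, n_a=100_000,
                                        conv_b=41_900, n_b=100_000)
print(f"control {p_a:.3%} vs treatment {p_b:.3%}, z={z:.2f}, p={p:.4f}")
```

A significant conversion lift in the arm with fewer errors is the evidence that turns "1.5% feels high" into a case for SLA investment.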
Mani Fazeli
Shopify Director of Product · December 14
Products must have some connection back to profitability, helping to either increase income or reduce costs. You otherwise wouldn't want to make the investment unless you're choosing to make a donation to the greater good (e.g. open source). It's OK if that connection is indirect, and in some cases, even difficult to measure. The latter requires leaders to agree that the approach to measurement is in line with the values and product principles of the company.

It's easiest to use examples, and I'll go to the extreme to make my point. Every piece of software has a substrate and latticework of capabilities upon which all the direct value-driving features are built. Take your administrative dashboards, navigation UI, settings pages, notification systems, design systems, and authentication and security features. In the modern web and mobile landscape, it's dubious to think investment in any of these areas can be causal to growth and differentiation. But not meeting the Kano "threshold attribute" means that your product will feel janky and poor quality, which can lead to poor adoption or retention (and good luck with attribution there). Therefore, you need continuous investment just to meet the bar of expectation, and that means time away from other KPI-driving initiatives. There is no way to get there without product principles that make space for this type of investment and improvement. Principles have to be paired with health metrics and tripwires that help diagnose the lack of investment (e.g. task completion time, dead clicks, clicks to navigate to common actions, duplicate code, account takeovers due to lack of 2FA, etc.)

I learned the phrase "anything worth doing is worth doing well" from Tobias Lütke. At Shopify, we've created a culture where improvements in many of the examples I shared are celebrated and seen as table stakes. The same is true with things like API performance, UI latency, and UX consistency. All of this takes time and investment, and we uphold it as part of the "definition of done" for most projects. We were a much smaller company at Wave, but still made some investments in our substrate to maintain our perception as the easiest-to-use financial management software for small businesses.

Let's circle back to products that are not directly monetized, but also not part of the substrate of software. The technique for measuring impact is identifying the input metrics that ladder up to higher-level KPIs that do ladder up to revenue. For example, the ability to do per-customer pricing is a feature expected of business-to-business (B2B) commerce systems, but not direct-to-consumer (D2C) ones. But no merchant adopts a B2B system for that single feature alone, and to some, that feature may not even matter. So while we measure win/loss reasons from the sales team along with churn reasons, we also measure usage rates of the feature and the impact of per-customer pricing on average Gross Merchandise Volume (GMV) per merchant. Put another way, we're looking at the relationship between leading metrics and the KPIs they ladder up to, thus telling us how we should invest further in per-customer pricing.
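As a rough sketch of what that laddering analysis can look like in practice (my illustration, with invented data and column names, not an actual Shopify pipeline):

```python
# Sketch: relating an input metric (feature adoption) to the KPI it
# ladders up to (average GMV per merchant). Schema and data are invented.
import pandas as pd

# One row per merchant: did they use per-customer pricing, and what
# GMV did they generate over the period?
merchants = pd.DataFrame({
    "merchant_id": [1, 2, 3, 4, 5, 6],
    "uses_per_customer_pricing": [True, True, False, False, True, False],
    "gmv_usd": [125_000, 98_000, 41_000, 57_000, 210_000, 38_000],
})

# Usage rate of the feature across the merchant base.
usage_rate = merchants["uses_per_customer_pricing"].mean()

# Average GMV per merchant, split by adoption of the input feature.
gmv_by_adoption = merchants.groupby("uses_per_customer_pricing")["gmv_usd"].mean()

print(f"usage rate: {usage_rate:.0%}")
print(gmv_by_adoption)
# A gap here is correlation, not causation -- you'd still want to
# control for merchant size or run an experiment before investing more.
```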
Mani Fazeli
Shopify Director of Product · December 14
Becoming more KPI-driven is a matter of desire and taste. No person, team, or organization attempts to change without believing that behaving differently will result in an improved outcome they care about. It's only possible when leaders buy into how it would improve the success of their teams and business (e.g. profitability, valuation growth, employee engagement, talent retention, positive social impact, etc.) Some companies are steadfast that the use of KPIs should not equate to being data-driven everywhere in the company. They prefer to have data-informed teams that reserve room for intuition and qualitative insights. There is no right answer here.

If we find ourselves with a company that's bought into a shift towards being KPI-driven, but is trying to figure out how at the team, group, or division levels, then I'd recommend the following:

1. Have the leaders of the team/group/division define their strategy for a period of time through written outcomes, assumptions, and principles that are most critical to their success.

2. Gather all the data already available and audit it for quality and trustworthiness. Then see if you can model your product or business (i.e. in a spreadsheet) to check whether the assumptions you've made and the outcomes you've articulated can be explained by your data (see the sketch after this list). If not, note what's missing and how you could gather it (and be comprehensive at this stage).

3. Work with your engineering and/or data team to instrument the metrics you need, backfilling where possible. Remember that you'll need continuous energy to ensure your data remains audited and accurate, as data corruption can severely disrupt a KPI-driven organization.

4. Develop a process for regularly collecting, analyzing, and reporting on the chosen KPIs. Without this ritual, your efforts will be for naught. Being KPI-driven means knowing and using the data to make decisions. In my experience, to get the flywheel spinning, you need weekly rituals that can morph into monthly rituals. These can be augmented with quarterly business reviews.

5. Make sure that the chosen KPIs are easily accessible and understandable to all team members. This may involve creating dashboards or other visualizations to help team members quickly see how the product or organization is performing. Repeat your KPIs at kick-offs, all-hands, town halls, business reviews, and anywhere else you gather. It's only when you think you're over-communicating them that you've probably approached a baseline level of understanding of the KPIs, and how they inform decision making, across your company.

6. Provide regular training and support to team members to help them understand the importance of the chosen KPIs and how to use them effectively to improve the org. If you have a wiki, put your tutorials there. Make it mandatory to consume these during onboarding. Offer self-serve tooling. The more people can be involved with the data, the more you'll make this cultural shift.

7. Regularly review and adjust the chosen KPIs to ensure that they are still relevant and useful. Account for any changes in your outcomes, assumptions, and principles. Assess suitability annually. Set targets annually and adjust mid-year. Some companies do this more often, and your culture should dictate what's best.

8. Lastly, make sure that all KPIs have their lower-level metrics clearly mapped for the company to see. Teams influence these input metrics more quickly, and the mapping brings clarity to decision making.
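A minimal sketch of step 2's modeling exercise, assuming a simple subscription funnel with invented figures; the point is only to compare what your assumptions predict against what your data shows:

```python
# Sketch of step 2: a toy model of the business (the kind you'd build
# in a spreadsheet) to check whether stated assumptions reproduce the
# outcomes you actually observe. All figures are invented placeholders.

# Assumptions written down by leadership for the period.
assumptions = {
    "monthly_signups": 10_000,
    "activation_rate": 0.35,   # signups who reach the activated state
    "paid_conversion": 0.12,   # activated users who start paying
    "arpu_usd": 29.0,          # average revenue per paying user / month
}

# What the model predicts monthly new revenue should be.
modeled_revenue = (assumptions["monthly_signups"]
                   * assumptions["activation_rate"]
                   * assumptions["paid_conversion"]
                   * assumptions["arpu_usd"])

observed_revenue = 9_800.0  # from your (audited!) billing data

gap = (observed_revenue - modeled_revenue) / modeled_revenue
print(f"modeled ${modeled_revenue:,.0f} vs observed ${observed_revenue:,.0f} "
      f"({gap:+.0%})")
# A large gap means an assumption is wrong or a metric is missing or
# untrustworthy -- exactly what you want to surface before picking KPIs.
```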
Mani Fazeli
Shopify Director of Product · December 14
Setting KPIs should not feel arbitrary. That's a smell. It means that the people choosing those metrics or setting their targets don't clearly understand how they influence the business or the outcomes desired. Perhaps good modeling has never been done to demonstrate either correlation or causation.

When it comes to entering new markets, my opinion changes. My approach to leadership is to measure and model things that are known or knowable. Entirely new products or markets will, at best, be understandable through competition and alternatives. Collecting this type of data is imperfect: noisy, sparse, highly filtered, and coarse. It's dominated by qualitative information, not quantitative behavioural data. Where you do get quantitative data, you have to be skeptical of how it was gathered and analyzed by third-party vendors (e.g. Gartner, Forrester, McKinsey, etc.) In short, you're flying with minimal visibility and a malfunctioning instrument panel. You need to gain clarity through experimentation.

You can absolutely ask yourself some questions:

1. What metrics do I think would indicate we're achieving our outcomes?

2. What targets would suggest our outcomes are being achieved quickly enough to be worth our investment?

3. What is a reasonable amount of time to run experiments before we'd expect to see some results? (See the sketch below for one way to ground this.)

But the KPIs and targets must be, at best, motivational and directional. They need to be malleable as you ship, research, and learn. Modeling would be pointless, filled mostly with fake confidence and lies. Instead, spend your energy being rigorous with your experimentation methodologies (qualitative and quantitative), think big and start small, and move at pace. Some experiments may take 6+ months to launch, so your leadership has to have good intuition, based on the sparse data, about whether this is worthwhile, and trust the process. So set goals around how well you run this process until you have enough information to form an actual longer-term strategy where KPI targets go beyond hopes and prayers.
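For question 3 specifically, a standard two-proportion power calculation can turn "how long should we run this?" into arithmetic. This sketch uses invented traffic and conversion figures:

```python
# Sketch for question 3: estimating how long an experiment must run
# before an effect of the size you care about could even be detected.
# Standard two-proportion power calculation; traffic figure is invented.
from math import ceil
from statistics import NormalDist

def samples_per_arm(p_base, lift, alpha=0.05, power=0.8):
    """Sample size per arm to detect p_base -> p_base + lift."""
    p1, p2 = p_base, p_base + lift
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p2 - p1) ** 2)

n = samples_per_arm(p_base=0.04, lift=0.005)   # 4% -> 4.5% conversion
weekly_traffic_per_arm = 2_500                  # assumed for a new market
print(f"{n} users per arm ≈ {n / weekly_traffic_per_arm:.1f} weeks")
# If the answer is "40 weeks", that's a signal to pick a leading metric
# with more volume, or accept directional (not statistical) evidence.
```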
Mani Fazeli
Shopify Director of Product · December 14
Let's cover this in two ways: (1) how to think about KPIs, and (2) examples of poor ones and how they can be better. I'll also approach the question a little more broadly than product managers alone.

Remember that Key Performance Indicators (KPIs) are used at all levels of a company (e.g. project, team, group, division, exec team) with different levels of fidelity and lag (e.g. daily active users vs. quarterly revenue). The appropriateness of standard KPIs will also differ by industry (e.g. commerce software will not rely on daily active users the way social networks do). Finally, many people use the term KPI when they actually just mean metrics (whether input, output, health, or otherwise). As the name suggests, only the metrics that are key to success should be elevated to KPIs, and there should be as few of them as possible. When I see more than 1 from a team, 3 from a group, or 5 from a division/exec team, there are good odds that some can be cut, amalgamated, or otherwise improved. KPIs are, after all, meant to drive decision making and accountability.

So what are the criteria of KPIs that stand to be improved, and examples of them?

1. Vanity metrics: these look impressive but don't actually measure the success of a product. Examples include the amount of traffic to a website, the number of sign-ups a product has, daily active users for marketplaces that monetize through purchases, or the number of likes across posts on a social network.

2. Poorly instrumented metrics: these are not reliably measured, which can lead to incorrect or misleading conclusions about the effectiveness of a product. For example, if the first step of a conversion funnel (e.g. checkout) has many ingress pathways, and the user can transition in and out of that step before proceeding down funnel, how well your instrumentation deduplicates that first step is critical to your conversion calculations.

3. Lack of attribution to effort: any metric whose fluctuations cannot be explained by the combination of efforts from the team/group using it as a KPI, plus seasonal and random variability, is going to be ineffective. For example, if a common funnel in the company has multiple teams trying to improve its conversion, each team needs to define a KPI that does not overlap the others, or they won't know if their efforts resulted in an outcome versus another team's efforts. Note that if all those teams are in the same group (e.g. a growth org), then that group could effectively use the conversion rate as its KPI. When in doubt, or if you're unable to isolate your efforts with lower-level metrics, run an A/B test against every major change by each team to get a better (but imperfect) indication of relative contribution. This criterion covers many grey areas as well. Revenue is a prototypically difficult KPI for individual teams to use because of attribution. However, you can find relatively small teams or groups that build add-on products that are directly monetized, and expansion revenue can be an excellent KPI for them (e.g. a payroll add-on to an accounting system).

4. Unclear tie to the next level's KPI: companies are concentric circles of strategy, with each division, group, and team needing to fit its plans and focus into that of the prior. This includes KPIs, where you'd expect a well-modeled connection between lower-level KPIs driving higher-level ones. For example, say a SaaS invoicing platform sets an X-in-Y goal as an activation hurdle to predict long-term retained users (i.e. 2 invoices sent in the first 30 days). It would be reasonable to assume that onboarding will heavily influence this. But what about onboarding, specifically, will matter? If a team concocts a metric around how many settings are altered in the first 7 days (e.g. chose a template, added a logo, set automatic payment reminders) and wants to use that as their KPI, they'd need to have analyzed and modeled whether that matters at all to new users sending their first 2 invoices (see the sketch after this list).

5. Lagging metrics at low levels: the closer you get down to a team level, the more you want to see KPIs defined by metrics that are leading indicators of success and can be measured without long time delays. Bad KPIs are ones that just can't be measured fast enough for a team to learn and take action. For example, many teams will work to increase retention in a company. But larger customers in SaaS may be on annual contracts. If features are being built to influence retention, it's better to find leading activity and usage metrics at the team level to drive behaviour and measure them weekly or monthly. These can tie into a higher-level retention KPI for a group or division, and keep teams from getting nasty delayed surprises if their efforts weren't destined to be fruitful. The only caveat for this criterion is how platform and infrastructure teams measure themselves. Their KPIs are typically more lagging, and that topic is deserving of its own write-up.

6. Compound or aggregate metrics: these are made up of multiple individual metrics that are combined using a formula in order to provide a more comprehensive view of the success of a product without needing to analyze many individual numbers. Examples include effectiveness scores, likelihood indicators, and satisfaction measures. Arguably, many high-level KPIs behave this way, such as revenue and active users, which is why (3) above is important to keep in mind. However, it's the formulas that are particularly worrisome. They inject bias through how they're defined, which is hard for stakeholders to remember over time. You find yourself looking at a score that's gone down 5% QoQ and asking a series of questions to understand why. Then you realize it would have been simpler to look at the individual metrics to begin with. In my experience, these KPIs lead to more harm than good.

7. Lacking health metrics or tripwires: having a KPI is important, but having it in isolation is dangerous. It's rare for lower-level metrics to be improved without the possibility of doing harm elsewhere. For example, in commerce, we can make UX changes that increase the likelihood of conversion but decrease average order value or propensity for repeat purchases. Therefore, a KPI that does not consider tripwires or does not get paired with health metrics is waving a caution flag.
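Here's a minimal sketch of the analysis the team in (4) would owe, with invented data and column names, before adopting their onboarding metric as a KPI:

```python
# Sketch for (4): checking whether a candidate team KPI (settings
# changed in the first 7 days) actually predicts the activation hurdle
# (2 invoices sent in the first 30 days). Schema and data are invented.
import pandas as pd

users = pd.DataFrame({
    "settings_changed_7d": [0, 1, 3, 2, 0, 3, 1, 2, 0, 3],
    "invoices_sent_30d":   [0, 1, 4, 2, 1, 3, 0, 2, 0, 5],
})
users["activated"] = users["invoices_sent_30d"] >= 2

# Activation rate by level of the candidate input metric: a monotonic
# relationship (plus a correlation check) is the minimum bar before
# the team adopts the input metric as its KPI.
print(users.groupby("settings_changed_7d")["activated"].mean())
print(f"correlation: "
      f"{users['settings_changed_7d'].corr(users['invoices_sent_30d']):.2f}")
# Correlation still isn't causation; an onboarding A/B test that moves
# settings_changed_7d and then watches activation is the stronger proof.
```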