I’ve worked with developer-focused tooling for almost 7 years, so I know exactly
what you mean here. On almost every new feature or product our teams put into
motion, we have a long list of factors to consider, such as security, legal, and
billing. For developer tooling, there’s usually no change to pricing or billing,
no new SKU. Does that mean these types of products don't provide value? No way!
Libraries, SDKs, APIs, CLIs, and other developer-focused tools are a huge part
of the overall product. They can open your product up in ways that create value
you never imagined.
When it comes to making a business case for investing in a product feature or
measuring value created by these efforts, I look at other ways the business will
be impacted. Will this feature help reduce friction in the sales cycle, or is
this library a market differentiator amongst similar products? Will this change
improve customer sentiment if we roll it out? Will it help accelerate time to
value for our customers? Will it help our customers operationalize our product
within their stack and ultimately make it more sticky within an organization?
You can put KPIs, metrics, and even dollar values around so many of these items.
Product Management KPIs
3 answers
Staff Product Manager, SDKs and Libraries at Twilio • February 8
Executive Vice President Products at Snow Software | Formerly Rackspace, Dell • October 25
When working with products that are not monetized, it is actually more critical
to measure business impact to show the value of your work. Why? Products that
are not monetized are often the first ones to have resources removed when the
organization finds itself in a bind. This can be a very big mistake for the
organization, which is why it's critical to get this right.
In this scenario, I'd go back to "what value does your product bring to your
business?", and I would then tie KPIs to that value. Let's say your product is
an internal tool that helps payroll be processed faster. Then, likely, before
your product there were X people working on payroll or Y dollars spent on a
service. I would focus business impact on the dollars saved by having your
product, as a reminder of the monetary value your payroll product brought. Then
I'd focus on other items, like how you've improved processing speed by
delivering these features in the business.
Always go with value and then focus on KPIs that showcase that value to your
stakeholders.
Director of Product at Shopify • December 11
Products must have some connection back to profitability, helping to either
increase income or reduce costs. You otherwise wouldn't want to make an
investment unless you're choosing to make a donation to the greater good (e.g.
open source). It's OK if that connection is indirect, and in some cases, even
difficult to measure. The latter requires leaders to agree that the approach to
measurement is in line with the values and product principles of the company.
It's easiest to use examples, and I'll go to the extreme to make my point.
Every piece of software has a substrate and latticework of capabilities upon
which all the direct value driving features are built. Take your administrative
dashboards, navigation UI, settings pages, notification systems, design systems,
and authentication and security features. In the modern web and mobile
landscape, it's dubious to think investment in any of these areas can be causal
to growth and differentiation. But not meeting the Kano "threshold attribute"
means that your product will feel janky and poor quality, which can lead to poor
adoption or retention (and good luck with attribution there). Therefore, you
need continuous investment just to meet the bar of expectation and that means
time away from other KPI driving initiatives. There is no way to get there
without the product principles that make space for this type of investment and
improvement. Principles have to be paired with health metrics and trip wires
that help diagnose the lack of investment (e.g. task completion time, dead
clicks, clicks to navigate to common actions, duplicate code, account takeovers
due to lack of 2FA, etc.).
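As an illustration of pairing principles with tripwires, a threshold check over health metrics might look like this; the metric names and limits are hypothetical, not from the answer:

```python
# Hypothetical tripwires: health metrics paired with agreed limits.
TRIPWIRES = {
    "p95_task_completion_s": 30.0,   # seconds
    "dead_click_rate": 0.02,         # fraction of all clicks
    "clicks_to_common_action": 4,    # count
}

def tripped(observed: dict) -> list[str]:
    """Return the names of any health metrics that crossed their tripwire."""
    return [name for name, limit in TRIPWIRES.items()
            if observed.get(name, 0) > limit]

print(tripped({"p95_task_completion_s": 42.0, "dead_click_rate": 0.01}))
# ['p95_task_completion_s']
```

The point is that the tripwires are agreed in advance, so a breach is a signal to invest in the substrate rather than a debate to relitigate.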
I learned the phrase "anything worth doing is worth doing well" from Tobias
Lütke. At Shopify, we've created a culture where improvements in many of these
examples I shared are celebrated and seen as table stakes. The same is true with
things like API performance, UI latency, and UX consistency. All of this takes
time and investment, and we uphold it as part of the "definition of done" for
most projects. We were a much smaller company at Wave, but still made some
investments in our substrate to maintain our perception as the easiest to use
financial management software for small businesses.
Let's circle back to products that are not directly monetized, but also not part
of the substrate of software. The technique for measuring impact is to identify
the input metrics that ladder up to higher-level KPIs, which in turn ladder up
to revenue. For example, the ability to do per-customer pricing is a feature
expected of business-to-business (B2B) commerce systems, but not
direct-to-consumer (D2C) ones. But no merchant adopts a B2B system for that
single feature alone, and to some, that feature may not even matter. So while we
measure win/loss reasons from the sales team along with churn reasons, we also
measure usage rates of the feature and impact of per-customer pricing on average
Gross Merchandise Volume (GMV) per merchant. Put another way, we're looking at
the relationship between leading metrics and the KPIs they ladder up to, which
tells us how we should invest further in per-customer pricing.
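The usage-vs-GMV comparison described above could be sketched roughly like this; the merchant names and figures are invented, and a real analysis would also control for merchant size and segment:

```python
# Hypothetical sketch: compare average GMV of merchants who adopted
# per-customer pricing against those who did not.

def gmv_lift(gmv_by_merchant: dict[str, float], adopters: set[str]) -> float:
    """Ratio of average GMV for feature adopters vs. non-adopters."""
    adopter_gmv = [g for m, g in gmv_by_merchant.items() if m in adopters]
    other_gmv = [g for m, g in gmv_by_merchant.items() if m not in adopters]
    return (sum(adopter_gmv) / len(adopter_gmv)) / (sum(other_gmv) / len(other_gmv))

gmv = {"m1": 220_000, "m2": 90_000, "m3": 110_000, "m4": 180_000}
print(gmv_lift(gmv, adopters={"m1", "m4"}))  # 2.0
```

A lift like this is correlational, not causal, which is why it belongs alongside win/loss and churn reasons rather than replacing them.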
5 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
Such a great question! When you first set a KPI especially if you are in a new
market and/or in a new product/customer space, it can feel uneasy. The best way
I have learned is by setting something and tracking it over time, seeing if
there is any measurable change. If not, start by mapping out the customer's
journey (no matter who they are) and see if you can collect data on their
interactions along the way. This may reveal some hidden trends you weren't yet
measuring.
VP of Products at DOZR • February 20
I would start by understanding what the company is being graded on by its
investors and how this new product is going to contribute to that KPI.
Let's say your investors are keen on seeing revenue growth this year. You can
begin by benchmarking the lifetime revenue growth of the various product
offerings of your company and then estimate (based on your user research/data)
what will be the adoption rate for this new product area/market. You can then
begin to model out some first-year numbers. Of course, this will seem super
arbitrary, but truth be told, once you launch, your predictions for subsequent
quarters/years will only improve, and that's when it's critical that the KPIs
are accurate. Focus on product-market fit for your launch and then focus on
modeling according to comparative product growth.
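As a sketch of that first-year modeling exercise, with every input invented and meant to be replaced by your own benchmarks and research:

```python
# Hypothetical first-year model: benchmark an adoption rate from existing
# offerings, then ramp it over four quarters.

def first_year_revenue(addressable_accounts: int, adoption_rate: float,
                       quarterly_revenue_per_account: float,
                       ramp: list[float]) -> float:
    """Sum revenue per quarter as adopters ramp toward full adoption."""
    full_adopters = addressable_accounts * adoption_rate
    return sum(full_adopters * share * quarterly_revenue_per_account
               for share in ramp)

# 5,000 addressable accounts, 4% benchmark adoption, $1,200 per quarter,
# reaching 10% -> 30% -> 60% -> 100% of adopters over the year:
print(round(first_year_revenue(5_000, 0.04, 1_200.0, [0.1, 0.3, 0.6, 1.0]), 2))
# 480000.0
```

The model's value is less the number itself and more that each assumption (adoption rate, ramp, revenue per account) becomes something you can check against reality after launch.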
Senior Director of Product Management at Zendesk • April 18
+1 to arbitrary! I think setting goals for both new and existing markets may
feel like Excel magic: numbers based mainly on assumptions and the product
manager's gut. It's uncomfortable and may feel unscientific, but I think all
forward planning is like that.
I would address the uncertainty in the following ways:
1. Make sure I am tracking the right metrics. New markets could mean new
metrics: the business is starting from scratch, and its metrics may differ
from those of a mature market. For example, in a mature market I would focus
on expansion, whereas in a new market I'd focus on acquisition.
2. Small experiments to inform goals. I can't stress this enough and often see
product managers trying to solve for too much at once. It's important to
set up small experiments, learn from them, adjust goals, and then rinse and
repeat.
It's not possible to remove uncertainty completely, but it's definitely possible
to reduce it and get better at making assumptions.
Director of Product Management at GitLab • July 12
This is a great question. I am a fan of actually being ambitious and setting
unrealistic targets or stretch goals and seeing where we end up in the first
couple of reporting periods. To get a dose of realism, you can always reference
analyst reports and right-size your total addressable market vs. serviceable
market and current penetration into the new market. I don't advise setting
realistic targets when entering new markets because that can lead to complacency
instead of innovation. You can also leverage a range, and use a moderate case
and best case target achievement. For example, if you are offering an existing
product with 1M marketing analyst users to a new persona like project managers,
you would want to evaluate what is the total presence of project managers in
your customer base or industry. If you see that you only have 2K project
managers in the current user base, but there are over 3M project managers in the
industries you are serving, you can set an incremental goal somewhere between 2K
and 3M, but you wouldn't want to set a 3M project manager user target right out
of the gate since you have no idea how many of those users are actually
serviceable.
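The sizing logic in that example can be sketched as follows; the serviceable share and penetration goal are hypothetical knobs you would right-size from analyst reports, not values from the answer:

```python
# Hypothetical sketch: set a stretch target between the current user base
# and the serviceable slice of the industry.

def stretch_target(current_users: int, industry_population: int,
                   serviceable_share: float, penetration_goal: float) -> int:
    """Pick a target partway between the current base and the serviceable market."""
    serviceable = industry_population * serviceable_share
    return round(current_users + (serviceable - current_users) * penetration_goal)

# 2K project managers today, 3M in the industry; assume 10% are serviceable
# and aim to close 20% of that gap in the period:
print(stretch_target(2_000, 3_000_000, 0.10, 0.20))  # 61600
```

Running the same function with a moderate-case and best-case `penetration_goal` gives you the target range the answer describes.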
Director of Product at Shopify • December 11
Setting KPIs should not feel arbitrary. That's a smell. It means that the people
choosing those metrics or setting their targets don't clearly understand how
they influence the business or the outcomes desired. Perhaps good modeling has
never been done to demonstrate either correlation or causation.
When it comes to entering new markets, my opinions change. My approach to
leadership is to measure and model things that are known or knowable. Entirely
new products or markets will, at best, be understandable through competition and
alternatives. Collecting this type of data is imperfect: noisy, sparse, highly
filtered, and coarse. It's dominated by qualitative information and not
quantitative behavioural data. Where you do get quantitative data, you have to
be skeptical of how it was gathered and analyzed by third-party vendors (e.g.
Gartner, Forrester, McKinsey, etc.). In short, you're flying with minimal
visibility and a malfunctioning instrument panel.
You need to gain clarity through experimentation. You can absolutely ask
yourself some questions:
1. What metrics do I think would indicate we're achieving our outcomes?
2. What targets would suggest our outcomes are being achieved quickly enough to
be worth our investment?
3. What is a reasonable amount of time to run experiments before we'd expect to
see some results?
But the KPIs and targets must be motivational and directional at best. They need
to be malleable as you ship, research, and learn. Modeling would be pointless,
filled mostly with fake confidence and lies. Instead, spend your energies being
rigorous with your experimentation methodologies (qualitative and quantitative),
think big and start small, and move at pace. Some experiments may take 6+
months to launch, so your leadership has to have good intuition, based on the
sparse data, about whether this is worthwhile, and trust the process. So set
goals around how well you run this process until you have enough information to
form an actual longer-term strategy where KPI targets go beyond hopes and
prayers.
I'm currently struggling to define an acceptable checkout error rate for our e-commerce platform. We're currently at 1.5%. Personally, I think that's too high, but I have nothing to substantiate my opinion.
2 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
I would recommend first building a relationship with your technical
lead/engineering counterpart. Have them show you how your e-commerce platform
(or product area) works end to end, from the backend perspective. Make sure that
you first understand the end to end flow and specifically the systems design
(which is critical in any e-comm platform). Once you understand how the
customer's journey equates to the systems design, then start looking into each
customer interaction with the site and make sure your team is tracking those
metrics. You will end up at the checkout rates. If you have a good pulse on SQL
or pulling and analyzing data, you could probably do the error rate comparison
on your own. If you don't feel comfortable, work with your engineering lead (or
data analyst) to dig into those numbers. Build out a report or dashboard that
you can look at on a regular basis. This will give you the background to ask
questions or share opinions.
Director of Product at Shopify • December 11
Service Level Agreements (SLA) are driven by three factors: (1) industry
standard expectations by customers, (2) differentiating your product when
marketing, (3) direct correlation with improving KPIs.
For checkout, you'll have uptime as an industry standard, but it's insufficient
because subsystems of a checkout can malfunction without the checkout process
outright failing. You could consider latency or throughput as market
differentiators and would need instrumentation on APIs and client response. With
payment failures or shipping calculation failures, you would directly impact
conversion rates and trust erosion (hurting repeat buying), which are likely
KPIs you care about. So your SLAs need to be a combination of measures that
account for all of the above, and your engineering counterparts have to see the
evidence that these matter in conjunction.
Of the three types, the one that's most difficult to compare objectively is the
third. In your question, you mention 1.5% error rates. You could go on a hunt to
find evidence that convinces your engineering counterparts that these are
elevated vs. competition, or that they're hurting the business. What's more
likely to succeed is running A/B tests that attempt to improve error rates and
demonstrating a direct correlation with improving a KPI you care about. That's a
more timeboxed exercise, and with evidence, you can change hearts and minds.
That's what can lead to more rigorous setting of SLAs and investment in rituals
to uphold them.
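For the error-rate A/B test specifically, a two-proportion z-test is one standard way to check whether the variant actually moved the rate; the sample sizes below are invented for illustration:

```python
from math import sqrt

def two_proportion_z(errors_a: int, n_a: int, errors_b: int, n_b: int) -> float:
    """z-statistic for the difference between two error rates (pooled standard error)."""
    p_a, p_b = errors_a / n_a, errors_b / n_b
    pooled = (errors_a + errors_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Control at 1.5% errors vs. variant at 1.2%, with 100k checkouts each:
z = two_proportion_z(1_500, 100_000, 1_200, 100_000)
print(round(z, 2))  # about 5.8, well past the usual 1.96 significance bar
```

With a result like that in hand, the conversation shifts from "1.5% feels high" to "reducing errors by 0.3 points is measurable, so what did it do to conversion?"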
3 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
What a terrific question! This one takes time and, depending upon your
organization, will require patience.
The best way I have learned is to ask "why" a particular feature is being
prioritized, and the "impact" it is bringing to the customer. The best way to
create change is to start with your own product team (even if that consists of
you as the PM, engineering, and UX/UI). For every epic/story, make sure to note
down the KPI that it will impact, and why that is important to your target (end)
customer.
After a few cycles (e.g. sprints if you follow the agile approach), share the
results of how your product work (launched features) impacted the KPI you were
measuring, and why that is critical to your overall business.
Evangelize this within your org, and see if there are one or two other teams
who are interested in following suit.
The more you share and report broadly, the easier it is to create change
internally.
Executive Vice President Products at Snow Software | Formerly Rackspace, Dell • October 25
To make a product team/organization more KPI driven, it is always best to start
at the top. The Head of Product at your organization should provide the broader
PM organization with the strategic context of what business objectives are
essential for your team to hit.
From there the PM team can take that business objective and break it down into
their own objective and key result.
If you are a product leader seeking to make your organization more KPI driven,
what I would do is identify what metrics matter to your business and put those
into individual PMs' goals. For instance, let's say it's critical for the
organization to ship Product Y at the end of Q2. PMs Bob and Anna are
responsible for Product Y. Then I would make it part of Bob and Anna's
individual performance goals that they deliver Product Y by the end of Q2. Of
course, there are going to be many reasons why Bob and Anna may not be able to
deliver Product Y (the big one being engineering or design). And while those
reasons may be valid, clearly stating in their performance goals that they are
accountable for that delivery ties Bob and Anna to ensuring that the outcome
occurs and that they keep you posted on what needs to be done to reach that
outcome.
If you are an individual contributor seeking to make your organization more KPI
driven, I would start with your own product. Ensure that you are aware of the
key metrics your organization values (e.g. Annual Recurring Revenue, Daily
Active Users, etc.) and determine what key results your product needs to hit to
help your team reach that outcome. I'd then focus on sharing your KPIs with your
manager and peers in engineering, user experience, and product marketing for
alignment. With KPIs aligned, I'd regularly update others on your metrics and
show progress. Others will notice and likely start requesting that your peers
follow.
Director of Product at Shopify • December 11
Becoming more KPI driven is a matter of desire and taste. No person, team, or
organization attempts to change without believing that behaving differently will
result in an improved outcome they care about. It's only possible when leaders
buy into how it would improve the success of their teams and business (e.g.
profitability, valuation growth, employee engagement, talent retention, positive
social impact, etc.) Some companies are steadfast that the use of KPIs should
not equate to being data driven everywhere in the company. They prefer to have
data informed teams that reserve room for intuition and qualitative insights.
There is no right answer here.
If we find ourselves with a company that's bought into a shift towards being KPI
driven, but is trying to figure out how at the team, group, or division levels,
then I'd recommend the following:
1. Have the leaders of the team/group/division define their strategy for a
period of time through written outcomes, assumptions, and principles that
are most critical to their success.
2. Gather all the data already available and audit it for quality and
trustworthiness, then see if you can model your product or business (i.e. in
a spreadsheet) to see if the assumptions you've made and outcomes you've
articulated can be explained by your data. If not, note what's missing and
how you could gather it (and be comprehensive at this stage).
3. Work with your engineering and/or data team to instrument the metrics you
need, backfilling where possible. Remember that you'll need continuous
energy to ensure your data remains audited and accurate, as data corruption
can severely disrupt your KPI-driven organization.
4. Develop a process for regularly collecting, analyzing, and reporting on the
chosen KPIs. Without this ritual, your efforts will be for naught. Being
KPI-driven means knowing and using the data to make decisions. In my
experience, to get the flywheel spinning, you need to have weekly rituals
that can morph to monthly rituals. These can be augmented with quarterly
business reviews.
5. Make sure that the chosen KPIs are easily accessible and understandable to
all members of the teams. This may involve creating dashboards or other
visualizations to help team members quickly see how the product or
organization is performing. Repeat your KPIs at kick-offs, all-hands, town
halls, business reviews, and anywhere else you gather. It's only when you
think you're over-communicating them that you've probably approached a
baseline level of understanding of the KPIs, and how they inform decision
making, across your company.
6. Provide regular training and support to team members to help them understand
the importance of the chosen KPIs and how to use them effectively to improve
the org. If you have a wiki, put your tutorials there. Make it mandatory to
consume these during onboarding. Offer self-serve tooling. The more people
can be involved with the data, the more you'll make this cultural shift.
7. Regularly review and adjust the chosen KPIs to ensure that they are still
relevant and useful. Account for any changes in your outcomes, assumptions,
and principles. Assess suitability annually. Set targets annually and adjust
mid-year. Some companies do this more often, and your culture should dictate
what's best.
8. Lastly, make sure that all KPIs have their lower level metrics clearly
mapped for the company to see. Teams influence these input metrics more
quickly, and the mapping brings clarity to decision making.
7 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
This could really range based upon the company, your users, your target goals,
where you are in your business lifecycle, etc.
The most basic ones are acquisition, activation, retention, revenue, and
referral. You could also measure customer lifetime value.
As for the worst KPIs: honestly, those that cannot be discretely measured and
tracked over a specific time period. Vanity metrics (e.g. the number of
views from a marketing article or number of shares of a post) really add no
value.
Staff Product Manager, SDKs and Libraries at Twilio • February 10
The worst KPIs to commit to are the ones you can’t commit to at all. We can set
targets and metrics and make dashboards, but that’s exactly what they are -
targets. I recommend looking at past performance and trends within the data and
setting a realistic yet aspirational target to work towards. After that, begin
iterating on your target. Revisit the KPI, analyze, adjust, and communicate your
findings.
VP of Products at DOZR • February 20
KPIs around delight, unless delight is your key product differentiator (and
proven to be compelling to customers). Focus on building an intuitive and
effective product experience that users would want to recommend to their
friends/colleagues. The final pieces of polish, such as interactions, delight,
and animations, are fluff until you're really providing value to your
customers. This is why keeping your KPIs or success metrics concise and
essential will allow you to provide the most impact to customers.
Senior Director of Product Management at Zendesk • April 18
1. Rates: To me without absolute numbers, rates may paint a false picture. Let
me explain with an example. Lets say you have a trial experience for your
product and you are responsible for the cart experience and thereby
conversion rates which is measured by number of paid customers/number of
trialers. I would suggest that instead of rates the north star metrics
should be a combination of number of paid customers as well as Average Deal
Size (ADS) per paid customer. A conversion rate is a good number to track but
may lead to wrong hypotheses when you see abnormal trends in either your
numerator or denominator. In this example, the conversion rate will swing if
the number of trials significantly increases or decreases. If you are
responsible for rates, make sure that you own both the denominator and the
numerator of that equation in order to be truly able to influence a positive
change.
2. Fuzzy metrics: As the saying goes, "What can't be measured can't be
managed." Metrics that are not explicit, like Customer Happiness and NPS, are
not possible to measure well. I'd go a step further and challenge why one
should even measure those metrics, and whether there are core metrics that can
be measured successfully. For example, instead of measuring Customer
Happiness, let's measure Customer Expansion, because happy customers expand.
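A toy illustration of the rates point in (1), with invented numbers: the conversion rate can fall even while the absolute number of paid customers grows.

```python
# Invented figures: (trialers, paid customers) per quarter.
trials = {"q1": (1_000, 100),   # 10% conversion
          "q2": (2_500, 175)}   #  7% conversion, yet 75% more paid customers

def conversion_rate(quarter: str) -> float:
    trialers, paid = trials[quarter]
    return paid / trialers

print(conversion_rate("q1"), conversion_rate("q2"))  # 0.1 0.07
```

Judged by rate alone, Q2 looks like a regression; judged by paid customers and deal size, it may be the better quarter, which is exactly why owning both numerator and denominator matters.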
Director of Product Management (Cloud Platform) at Atlassian • May 8
Some of the worst KPIs, in my opinion, are:
* KPIs that cannot be measured correctly
* KPIs that do not give a sense of the goal you are tracking. You can use the
AARRR (Acquisition, Activation, Retention, Revenue, Referral) framework to
understand the best metrics to choose to align with your outcome/goal.
* KPIs that are not achievable in a desired timeframe. Yes, there could be
exceptions here, but generally these are not the best ones in my opinion.
* Any KPIs that do not really tell you the health of the business unless a
holistic picture is presented, e.g. number of app installs is not that
meaningful without retention and engagement metrics.
Director of Product Management at GitLab • July 12
The worst KPIs are vanity metrics that have no ties back to actual adoption or
business metrics. I once had a product manager commit to hitting a number of
emails a notification system was supposed to send in a 30-day period. Without
context, this seems like a great metric to track for volume, except the total
count of emails tells you nothing about how many people are getting value, if
they are getting value, if they are recurring users, or if the emails are
contributing to user satisfaction. This can certainly be a metric in the
toolkit, but not a KPI for a product line.
Director of Product at Shopify • December 10
Let's cover this in two ways: (1) how to think about KPIs, (2) examples of poor
ones and how they can be better. I'll also approach the question a little more
broadly than Product Managers alone.
Remember that Key Performance Indicators (KPIs) are used at all levels of a
company (e.g. project, team, group, division, exec team) with different levels
of fidelity and lag (e.g. daily active user vs. quarterly revenue). The
appropriateness of standard KPIs will also differ by industry (e.g. commerce
software will not rely on daily active users the way social networks do).
Finally, many people use the term KPI when they actually just mean metrics
(whether input, output, health, or otherwise). As the name suggests, only the
metrics that are key to success should be elevated to KPIs, and there should be
as few of them as possible. When I see more than 1 from a team, 3 from a group,
or 5 from a division/exec team, there are good odds that some can be cut,
amalgamated, or otherwise improved. KPIs are, after all, meant to drive decision
making and accountability.
So what are the criteria of KPIs that stand to be improved, and examples of
them?
1. Vanity metrics: these look impressive but don't actually measure the
success of a product. Examples include the amount of traffic to a website,
the number of sign-ups a product has, daily active users for marketplaces
that monetize through purchases, or the number of likes across posts on a
social network.
2. Poorly instrumented metrics: these are not reliably measured, which can
lead to incorrect or misleading conclusions about the effectiveness of a
product. For example, if the first step of a conversion funnel (e.g.
checkout) has many ingress pathways, and the user can transition in and out
of that step before proceeding down funnel, how well your instrumentation
deduplicates that first step is critical to your conversion calculations.
3. Lack of attribution to effort: any metric whose fluctuations cannot be
explained by the combination of efforts from the team/group using it as a
KPI, plus seasonal and random variability, is going to be ineffective. For
example, if a common funnel in the company has multiple teams trying to
improve its conversion, each team needs to define a KPI that does not
overlap the others or they won't know if their efforts resulted in an
outcome versus another team's efforts. Note that if all those teams are in
the same group (e.g. a growth org), then that group could effectively use
the conversion rate as their KPI. When in doubt, or if you're unable to
isolate your efforts with lower level metrics, run an A/B test against every
major change by each team to get a better (but imperfect) indication of
relative contribution. This criterion covers many grey areas as well. Revenue
is a prototypically difficult KPI for individual teams to use because of
attribution. However, you can find relatively small teams or groups that
build add-on products that are directly monetized and expansion revenue can
be an excellent KPI for them (e.g. a payroll add-on to an accounting
system).
4. Unclear tie to next level's KPI: companies are concentric circles of
strategy, with each division, group, and team needing to fit its plans and
focus into that of the prior. This includes KPIs, where you'd expect a well
modeled connection between lower level KPIs driving higher level ones. For
example, say a SaaS invoicing platform sets an X-in-Y goal as an activation
hurdle to predict long-term retained users (i.e. 2 invoices sent in the first
30 days). It would be reasonable to assume that onboarding will
heavily influence this. But what about onboarding, specifically, will
matter? If a team concocts a metric around how many settings are altered in
the first 7 days (e.g. chose a template, added a logo, set automatic payment
reminders) and wants to use that as their KPI, they'd need to have analyzed
and modeled whether that matters at all to new users sending their first 2
invoices.
5. Lagging metrics at low levels: the closer you get down to a team level, the
more you want to see KPIs defined by metrics that are leading indicators of
success and can be measured without long time delays. Bad KPIs are ones that
just can't be measured fast enough for a team to learn and take action. For
example, many teams will work to increase retention in a company. But larger
customers in SaaS may be on annual contracts. If features are being built to
influence retention, it's better to find leading activity and usage metrics
at the team level to drive behaviour and measure them weekly or monthly.
These can tie into a higher level retention KPI for a group or division, and
keep teams from getting nasty delayed surprises if their efforts weren't
destined to be fruitful. The only caveat for this criterion is how platform
and infrastructure teams measure themselves. Their KPIs are typically more
lagging, and this topic deserves its own write-up.
6. Compound or aggregate metrics: these are made up of multiple individual
metrics that are combined using a formula in order to provide a more
comprehensive view of the success of a product without needing to analyze
many individual numbers. Examples include effectiveness scores, likelihood
indicators, and satisfaction measures. Arguably, many high level KPIs behave
this way, such as revenue and active users, which is why (3) above is
important to keep in mind. However, it's the formulas that are particularly
worrisome. They inject bias through how they're defined, which is hard for
stakeholders to remember over time. You find yourself looking at a score
that's gone down 5% QoQ and asking a series of questions to understand why.
Then you realize it would have been simpler to look at individual metrics to
begin with. In my experience, these KPIs lead to more harm than good.
7. Lacking health metrics or tripwires: having a KPI is important, but having
it in isolation is dangerous. It's rare for lower level metrics to be
improved without the possibility of doing harm elsewhere. For example, in
commerce, we can make UX changes that increase the likelihood of conversion
but decrease average order value or propensity for repeat purchases.
Therefore, a KPI that does not consider tripwires or does not get paired
with health metrics is waving a caution flag.
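Circling back to the instrumentation pitfall in (2), deduplicating entries into the first funnel step can be sketched like this; the event shape is hypothetical:

```python
# Hypothetical event stream: users can bounce in and out of checkout_start
# before proceeding, so naive event counts inflate the top of the funnel.

def unique_checkout_entries(events: list[dict]) -> int:
    """Count each session's checkout entry once, however many times it re-enters."""
    return len({e["session_id"] for e in events if e["step"] == "checkout_start"})

events = [
    {"session_id": "s1", "step": "checkout_start"},
    {"session_id": "s1", "step": "checkout_start"},  # re-entered the step
    {"session_id": "s2", "step": "checkout_start"},
    {"session_id": "s1", "step": "purchase_complete"},
]
naive = sum(1 for e in events if e["step"] == "checkout_start")
print(naive, unique_checkout_entries(events))  # 3 2
```

With one completion, the naive count reports 33% conversion while the deduplicated count reports 50%: the same data, two very different conclusions.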
6 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
Interestingly enough, I see two trends in the types of KPIs product teams miss.
1) Aligning with the larger organization's or business's goals - Ensuring that
your product roadmap is actually impacting the success metrics (OKRs, KPIs) of
the business itself is critical to knowing if you are investing in and
prioritizing the right work.
2) Capturing "technical or engineering" metrics - Any work that your team spends
time on should be impacting some metric. Even metrics that are technical (the
most common one being latency) should be captured, reported on, and measured
over time.
Director of Product Management (Cloud Platform) at Atlassian • May 8
* In terms of KPIs shared between product and engineering, I would say
"Effective Resource Utilization" can be missed, primarily because it can be
hard to track and measure across projects/teams.
* "Internal team satisfaction" is another one that PMs may not include, but
this is an extremely important metric that provides a good idea of the health
of the team and organization. This should not be missed.
Director of Product Management at GitLab • July 12
Many product organizations focus on delivery, Net Promoter Score, and user
counts. One metric that I think is important to always consider is the
availability and consistency of your user experience, i.e., the performance
(latency) of the application. Using Error Budgets and thinking critically about
uptime
as a Product Manager helps put into tangible terms the cost to the user when
your offering does not meet a performance or uptime standard. If you are
offering mission-critical software, it is essential to be responsive and
reliable. Lack of responsiveness and reliability can erode your base over time.
Group Product Manager at Google • August 16
This is a good one. I think there are two that often get missed and largely it
is because they are hard to measure and expensive to move.
1. Product excellence. How do you measure customer delight in an impactful way?
CSAT and NPS have lots of opportunities to be gamed and are frankly easily
ignored. Some of the best products I've used focus on finding the right
critical user journeys and continuously measure the success rates of those
quantitatively and qualitatively.
2. Product health. Cold boot, warm boot, latency for critical actions, crashes,
uptime. All of these things contribute to Product excellence but are much
more directly measurable and can really sneak up on you.
Executive Vice President Products at Snow Software | Formerly Rackspace, Dell • October 25
Oftentimes, I find that Product Management teams are so focused on getting the
product to market that they forget their #1 job is building a business. As
a business leader, you can't be simply focused on the "speeds & feeds" or what
next feature needs to be on the roadmap. You really have to understand the
product's core value proposition: why would a customer choose your product over
the existing way the problem is being solved today? And from here, how do you
plan to monetize and scale the solution?
Several PMs, like me, have engineering backgrounds. This is great because
engineers are deep thinkers but the downside is our problem-solving nature can
force us to forget the commercial side of our job. We're here to build products
that generate revenue, retain customers, and drive profitability. If we can't
tie back to those things, we're missing a big part of the job.
Sr. Director, Product Management at Mezmo • December 13
This is a great question. In my opinion, a lot of product teams, especially ones
focused on customer-facing products, completely miss tracking product health
KPIs such as bugs in the product, product uptime, and product reliability, to
name a few. Most product teams miss them because product health metrics are not
tied directly to any business objective and are considered engineering vanity
metrics. The reality, however, is that these KPIs are critical and can inform
the customer experience, which translates to customer satisfaction and retention.
3 answers
Senior Director of Product, Central Technology at Zynga • August 1
Be adaptable to change - don't be afraid to try different things or change a
process that is no longer working for the team. For a team that is growing
quickly oftentimes process gets ignored because the team is so focused on
delivering/executing that they feel process may slow them down. In some cases
little to no process can be a good thing, but for a team that is scaling it can
be detrimental. What I try to do is understand how they are using product
management processes today and then try to understand where product can add the
most impact short term and demonstrate early wins to help build trust. Getting
buy-in from the team is essential, and typically I try to message that I am
adaptable and my goal is to improve the team's ability to execute and focus on
the right things rather than add more things to their plate.
I entered a team that was doing well from a revenue standpoint but was
constantly putting out fires and as a result didn't have much time to focus on
feature development to improve and grow the product - it was just maintaining
with a bit of decline. I focused first on root cause analysis of the
fires/incidents from the previous 60 days and found most of them could have been
avoided by introducing a lightweight process before a content release. We
piloted it with one team for a couple of sprints and found it reduced incidents by about
50%, so we rolled it out to other teams and got similar results. This in turn
freed up engineers from firefighting, and also helped me build trust with the
team. In general my motto was "what we have now isn't working, let's try X for a
few sprints and if it's not working we'll try something else." The general
pushback I usually got was from engineers who wanted to avoid meetings or
unnecessary overhead - however, if I could show that I was flexible and that
what I was doing would ultimately save them time and effort, I found most to be
amenable to change.
Another tool that helped me was retrospectives and post-mortems, and advocating
for better processes during these times. Oftentimes in these situations people
are open to discussing improvements, and if you can clearly tie in how a product
process can help avoid or solve a problem, you can get buy-in easily, particularly
if you are offering to drive it. Better yet, if you can circle back with results
and outcomes that show how this benefited the team (fewer CS tickets, better
velocity, etc) you can help demonstrate the value that product can provide.
Director of Product Management at Aurora Solar • October 16
I'm a strong believer in "just enough" process, so my answer to questions like
this is always some version of "it depends"!
The one piece of process that every team must have is a way to reflect on, and
incrementally improve, the way they work together. You can call this process
whatever you like - "reflection", "retrospective", "after-action review",
"post-mortem" - but it is imperative to building an effective team.
This practice will inform the way you introduce and evolve processes as your
team grows.
Here's the lightest-weight retro format I've ever used, which is a great way to
get started:
Format: individual stickies brainstorm (one idea per sticky)
Time: 30 minutes to one hour
Frequency: Every one to two weeks
Method:
* Each person responds to the "I like" and "I wish" categories on individual
stickies - timebox to 3-5 minutes
* Each person shares their stickies
* Affinity mapping: cluster similar ideas together
* If needed: dot vote on the things that are most important. Be careful not to
always prioritize the concerns of one group, though (eg, if there are more
engineers than any other role, how will you ensure you also address concerns
from other roles?)
* Discuss your highest priority topics, with a focus on understanding the cause
and generating ideas to make it better. Retro isn't a venting session!
* Agree on the things you'll try, and if appropriate, who's responsible for
making it happen.
Categories:
* I like - the things that worked well in the preceding week or two
* I wish - the things that you wish were different
* We will - the group's commitment to change over the next week or two
There is literally an entire book on this topic: Agile Retrospectives, by Esther
Derby and Diana Larsen (with a foreword by Ken Schwaber).
Sr. Director, Product Management at Mezmo • December 12
When joining a small company with no or little structure or joining a small but
growing product team, it is essential to understand the current state of the
product management process before establishing new processes. Rushing to make
changes and establish processes that have worked for you in the past is not
ideal. What worked in one company may not work here due to differences in the
culture and how teams have been set up.
In your first few days, you should set up a meet and greet meeting with
stakeholders from product, engineering, customer success, sales, and marketing.
Use this time to introduce yourself, understand their working style, and get a
clear understanding of what is working, what is not working, and what are their
expectations from your product management. Once you have collected all the
information, synthesize it to form an opinion on the operating procedures you
want to put in place that will help meet the objectives. Share your plans,
collect feedback, and iterate and formalize the process. By using this shared
way to drive change, you are bringing everyone along instead of dictating how
things should be done. Keep in mind that less is more, and there is no shame in
iterating if a certain process does not work the way you expected or you have
outgrown the process. Processes are there to help establish structure and make
things painless and repeatable for everyone.
Based on my experience working at a small company with little to no structure,
here are some areas where you will need to establish operating principles.
1. Define clear roles and responsibilities - Having clear ownership defined
within the team prevents duplicated effort and stepping on each other's
toes. Every product manager on the team knows what their charter is and has
the needed space to operate.
2. Articulate clear product vision and strategy - This is extremely important
to align and rally the team towards common goals and objectives.
3. Create product roadmap - A roadmap is important to drive the team towards
common outcomes and provide a reference point for decision-making. Without a
roadmap, no one knows where we are headed and everyone makes their own
assumptions.
4. Outline the product development process - This is critical to ensure the
team is working efficiently and effectively and has a common understanding
of what it takes to take a product from inception to launch.
5. Establish effective collaboration and communication - The product team works
with stakeholders across the company and setting up collaboration and
communication processes and tools in place will allow keeping engineering,
marketing, customer success, sales, and support on the same page.
2 answers
Executive Vice President Products at Snow Software | Formerly Rackspace, Dell • October 25
Great question.
To create a powerful partnership between Product Marketing (PMM) and Product
Management (PM) it's essential for you to have a common set of KPIs that help
you understand whether your work is generating the intended results.
Warning: For this to work best, it's great for these KPIs to be established as
early as you can in the product lifecycle. PMs who wait until the last minute
(think 1-2 sprints before a feature/product will be released) risk not getting
the most from this partnership.
Here's a process I would follow:
1. Provide the PMM sufficient strategic context of your product (e.g. target
customer, intended outcomes product will deliver, business case/written
narrative)
2. With the strategic context understood, determine which go-to-market (GTM)
metrics matter most in your organization (e.g. Monthly Recurring Revenue, Daily
Active Users, etc.) These are the metrics that your leaders would look at when
gauging whether your product is successful post-launch. Often, this data can be
found in #1.
3. With objectives (e.g. what you want to see happen if everything went as
planned) defined in #2, align with PMM on who owns what. For example, do you
co-own Daily Active Users (DAU)? Or is DAU only owned by PMM? If DAU is owned by
PMM, what key results (e.g. the actions that need to be taken to get to a
favorable objective) need to happen for the output to be true (e.g. # of signups
need to be X to get to a DAU of Y).
4. Now that you've documented the objectives, key results, and ownership,
discuss how progress will be communicated between PM and PMM and to the broader
organization. Here it's important to establish the cadence and who communicates
what.
Sr. Director, Product Management at Mezmo • December 12
Product teams that don't have strong collaboration and communication with the
product marketing team fail to deliver a delightful customer experience. When
product and marketing departments are siloed and do not work together, the
result is unmet customer expectations. Marketing teams without proper knowledge
of actual product capabilities will end up building messaging and running
campaigns that either overpromise or underpromise the product, resulting in
mismatched customer expectations. To provide the best customer experience and
grow a product, it is critical for product management and product marketing
teams to work closely together.
I believe product managers (PM) and product marketing managers (PMM) are
doppelgangers - product managers build products that solve customer problems and
product marketing managers help take the product to prospective buyers and help
them understand its value. They need to be in lockstep and the best way to align
and hold each other accountable is via shared KPIs.
Once the KPIs are defined and agreed upon by PM and PMM, it is important to
establish clear communication and collaboration channels. This can involve
regular meetings (synchronous) and/or asynchronous collaboration via Slack to
discuss the progress and goals of the launch or research project. Another way to
keep both teams accountable for their tasks is the creation of shared documents
and dashboards to track the KPIs and ensure that everyone is on the same page.
It can also be helpful to establish a directly responsible person (DRI) on each
team to facilitate communication and coordination. Additionally, making sure key
stakeholders from both teams are kept informed of the decisions and involved as
necessary can help ensure that the KPIs are aligned with the overall goals of
the organization and the product.
7 answers
Staff Product Manager, SDKs and Libraries at Twilio • February 8
The partnership with Product Marketing is one of the most important functions
when it comes to rolling out a successful product. Don’t read too much into the
last part of that statement though, a Product Marketing Manager (PMM) is a
crucial teammate to include from the start, not just at launch time. If I know
who I will be working with in advance, I tag them in Product Requirements
Documents or other important materials. Sharing context from the beginning is so
important!
As far as the KPIs go, we are really talking about how you measure the success
of a product, and Product Management and Product Marketing should be aligned on
this. If you're the Product Manager, you’re ideally setting the big picture
targets and KPIs for the initiative, and working with a cross functional team,
such as PMM, Engineering, and Product Ops, to make sure these are shared
targets. Part of the product rollout might include working with PMM on a webinar
to drive awareness, and the PMM will likely have their own targets for things
like webinar attendance, and all of these items build on each other. Ideally the
webinar drives awareness, which in turn helps drive adoption.
While there are different specific metrics that marketing and product teams
track for product launches, what's critical is the alignment between the two and
agreement on the metrics to track prior to the launch.
Some examples of metrics tracked by each team:
* Product team: Satisfaction, usage by users and individual accounts, full
funnel from a user trying the feature to actually using it
* Marketing team: % of reps enabled on the new product, leads generated,
competitive win rate changes
* Metrics that require deep partnership: Number of customer stories/references
for the capability
Director of Product Management (Cloud Platform) at Atlassian • May 8
* Just like shared KPIs between Engineering and Product Management, a similar
shared KPI framework between Product Management and Marketing is a great way
to build high quality products that bring customer delight and high ROI. This
also builds camaraderie and encourages teamwork towards a common goal between
these teams.
* Generally PMs would own Company, Business, Acquisition, User Engagement, and
User Satisfaction KPIs. Examples are: MRR, Churn rate, Number of users,
DAU/WAU/MAU, Number of sessions, Session duration, NPS or CSAT, etc.
* Marketing may own Leads generated, Lead conversion to customer, CAC, LTV,
Ad spend, and Conversion for the various channels being used.
* Although the KPIs would be distinct, certain KPIs like LTV, conversion to
customer, user onboarding, retention rate, and NPS/CSAT can be shared between
the two functions. It really depends on the case and business needs.
* As for launch metrics, they could be pre-launch sign-ups, K-factor, social
media engagement, user signups/conversion, revenue, and retention. In my
opinion, the last three (signups/conversion, revenue, and retention) can be
shared while the others can be owned by marketing. This is in addition to the
business and engagement metrics that are probably solely owned by PMs.
VP Product at CookUnity • June 17
I love this question because it's something I'm currently involved with at
CookUnity. At a B2C company the marketing team is a Product Manager's best
friend, especially in the early startup days. You can build the greatest product
in the world but there are only so many Field of Dreams "if you build it they
will come" success stories. You need someone out there telling your product's
story and building awareness, hence the PM's need for a great relationship with
the Marketing organization.
Launching a new product, a new major feature release, or even a rebrand requires
a collaborative dance with many different functions - especially Marketing and
Engineering. Host a workshop with your marketing partners, make sure to draw
lines in the sand with launch responsibilities and align on the timeline of
events. A product launch can be thought of as a project, so you should treat it
like one. Map out the technical and creative deliverables, document
dependencies, assign owners to each, align on the timelines and get a thumbs up
from everyone in the room before moving forward. Communication throughout is
key! A product launch is supposed to be fun, exciting and many times the first
touchpoint you have with new users - it's gotta be a smooth and memorable first
experience. Product Launch KPIs should obviously be success oriented, it's about
gaining reach: usage, new registrations, reactivations, engagement, etc. The
marketing team may have their own metrics to reach eyeballs or ears (from my
Spotify days), but if the eyeballs don't turn into new users and increased
engagement then the mark was likely missed.
Group Product Manager at Google • August 16
Oooohhhh, this is a good one and something I spend a lot of time balancing.
Product owns the product and at the end of the day both gets unearned credit and
unearned blame. It is your neck on the line for the end to end experience.
Thinking through the experience that users get both in product and out of
product (traditionally the domain of marketing) is well within Product's scope.
That being said, as you expand your career and scope you'll find more and more
that you don't scale. Not just in terms of time but in terms of expertise.
Figuring out a structure that lets you contribute to marketing efforts without
owning them is your best bet for the future.
The final thought here is to be generous with credit. There's no limit to the
number of disciplines that can get credit for accomplishments. Give Marketing
unearned credit for things mostly driven by Product, and what you'll get back is
much more willingness to seek your guidance and to collaborate.
Director of Product Management at Aurora Solar • October 16
I generally think in terms of OKRs rather than KPIs, so here let's agree that we
are talking about some shared measure of success!
On our teams, Product Marketing is responsible for all communication about the
product or feature to people outside the Product-Eng-Design org. They're our
liaison to the outside world! That includes:
* Creating campaigns and collateral for customers
* Developing positioning documents
* Developing and rolling out enablement materials for go-to-market teams
* Implementing campaigns and marketing events (eg, webinars, content marketing,
etc)
In terms of shared goals, product and product marketing have their own metrics
that they use to track success. Where we need to get aligned is around
higher-level organizational goals. For instance, I once worked with a team that
launched a highly requested feature designed to increase conversions by existing
users. After launch, the conversion metric didn't increase as expected. However,
the team also noticed that surprisingly few customers were trying the new
feature. The PMM on the team immediately kicked off a piece of work to rethink
enablement and customer messaging around the feature, while the Product team dug
into whether the feature wasn't solving the problem as intended. As the two
disciplines tackled the problem from different angles, they were able to create
more insights into what incremental improvements could be made, and drive more
value for customers.
Director of Product Management at GitLab • December 1
I think this is one of my favorite ways of categorizing different kinds of
product managers - based on the KPIs they are being optimized for. Product
managers who are focused on building products fast and shipping them to market
are going to be measured a little differently than product managers who are
focused on getting sales, reaching a particular enterprise market, or even
product-led growth. For example, when you compare the KPIs of someone focused
on marketing activities versus someone focused on product management
activities, I would split the KPIs down the middle of the business metrics:
revenue versus leads. Product managers who are focused on marketing activities
should be responsible for the opportunities generated from the product, whether
that's product-led leads coming from the website, conferences, or just customer
interviews. Product managers who are focused on shipping products fast and
getting things to market should be measured on exactly that, so I typically
look at the cycle time of features: how long it takes a product feature to go
from ideation all the way through product delivery, and then of course product
usage at the end of it all. Both kinds of product managers are going to be held
accountable for the product usage of their portfolio. These KPIs are typically
the secondary indicators of how well they're accomplishing their monthly active
user targets, whether by generating new usage or by enhancing the product suite.
4 answers
Principal PM Manager / Product Leader at Microsoft | Formerly Amazon • January 31
This is a hard one as I am sure there are a ton of layers to unpack here.
Whenever there is a question around metrics, I would first look to the customer
and understand what customer pain points your product area is solving for. Then
see how those needs and your business goals align, and how your specific area
can help solve for that. If it is a matter of stakeholder management that is a
different story, but engineering, product and design should really have shared
KPIs.
Senior Director of Product Management at Zendesk • April 18
Usually I would begin with understanding:
1. What are the key customer pain points that I am trying to solve for my
customers? Those are my metrics in 9 out of 10 cases.
2. Why is my product team funded? What problems am I solving for the business?
Once I have the initial list, just like all things product management, I
PRIORITIZE: what matters the most vs. what is not as important.
Now, for every item in the list it's also crucial to think through the counter
metrics. A crude example: I want more paying customers, but a counter metric
will be the revenue from these paying customers. Let's say I discount my
product enough that the number of paying customers skyrockets; a good check and
balance would be the total revenue we are getting from these customers.
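That check and balance is easy to make concrete with toy numbers (all figures
below are invented for illustration):

```python
# Hypothetical numbers showing why revenue is a useful counter metric
# when the primary KPI is "number of paying customers".

def paying_customers_and_revenue(customers, price):
    """Return the primary KPI (customers) and its counter metric (revenue)."""
    return customers, customers * price

before = paying_customers_and_revenue(customers=1_000, price=100)
after = paying_customers_and_revenue(customers=1_800, price=40)  # deep discount

print("before:", before)  # (1000, 100000)
print("after: ", after)   # (1800, 72000)
# The primary KPI grew 80%, but the counter metric (revenue) fell 28%,
# so the discount is not the win the headline number suggests.
```

The counter metric turns a misleading headline gain into an honest trade-off
discussion.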
Director of Product Management (Cloud Platform) at Atlassian • May 8
Here is a rough process I would follow but it really varies a lot depending upon
each business:
* Understand Company Objectives and Goals
* Have a clear Product Vision and Strategy that aligns with these
goals/objectives
* Create higher-level OKRs that can map to KPIs
* Determine the top KPIs the company is interested in driving/moving. Examples
are Business Performance KPIs: customer counts, customer/user acquisition,
retention rate, churn rate, revenue, etc.
* Make a prioritized list of the KPIs you can measure. For example, Revenue
would map to MRR, and so on
* Pick the top 1-2 KPIs that you will meaningfully impact
* Ensure they are measurable in the given timeframe
* The roadmaps that PMs own should be aligned to these OKRs and KPIs
* Report on progress regularly
Executive Vice President Products at Snow Software | Formerly Rackspace, Dell • October 25
A good framework I use follows the product adoption lifecycle curve:
* At Introduction (think MVP), the main objective is establishing product-market
fit.
* At Growth, you need to shift objectives to focus on maximizing growth and
share. If you're not profitable at this stage, focus on getting to
profitability.
* At Maturity, maximize profit and aim to extend the lifetime of the product
through differentiation or adjacent products/segments.
* At Decline, your focus is to remain profitable and transition customers to
what is next.