I will share the first two steps that I follow.

Step 1: Is this problem worth solving?

1.1 Problem definition and user segmentation
* B2B product: A business customer must have a genuine pain point that they are willing to pay to solve. Some problems are not big enough, and therefore not a high priority for business users; these are not worth pursuing. Refine the problem until it hits a real pain point.
* B2B2C: The business user/stakeholder may have rightly identified a problem but may not have the best ideas about which solutions can work. Validate that the problem is real with user research.
* B2C: Similar to B2B2C, some segment of users has a need. Identify who they are and what the real problems are.

1.2 Is the problem's TAM big enough?

Step 2: Why us? Why now?

2.1 Do competitive studies
2.2 Do a SWOT analysis
2.3 Are you best positioned to build this, and to build it now?
Katherine Man
HubSpot Group Product Manager, CRM Platform • May 3
I’ve held roles as both a platform and a non-platform product manager, and I’d say being a platform product manager is definitely the most challenging but also the most rewarding. The most challenging part is that your solutions are more abstract and less obvious. Instead of building solutions directly for customers, you’re building tools for customers to build the solutions themselves. Does your head hurt yet?

Let me give an example. Let’s say you’re trying to let customers customize the way their HubSpot UI looks. While you could try to build every customization request you get, no two customers want the same thing, and it would be impossible for our product teams to keep up with that demand. Instead, you build tools for external developers and admin users to configure the UI the way they need. But how do you figure out which tools?

Here is the usual process for regular product management:
1. Collect customer use cases
2. Identify a pattern
3. Build a solution that solves for the majority of use cases

Here is the process for platform product management, with an extra step:
1. Collect customer use cases
2. Identify a pattern
3. Identify a pattern across solutions
4. Build a solution that solves for the majority of use cases

Still confused? Let me make the customization example even more specific. Let’s say you notice that a lot of customers want to display their HubSpot data in a table format on the CRM record page. Taking a non-platform approach, you’d build out every single table request that customers make. But this isn’t scalable. Instead, you build a configurable table component that customers can populate with their own data and then display (sketched below).

Believe me, I struggled for a long time with this adjustment in thinking, but I promise that if you choose to pursue it, you’ll love the wider impact you’re able to have on customers!
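To make the config-driven idea concrete, here is a minimal Python sketch of a configurable table component. The `TableConfig`/`ColumnConfig` names, fields, and sample CRM properties are hypothetical illustrations, not HubSpot's actual API; the point is simply that one generic component, driven by customer-supplied configuration, replaces many bespoke tables.

```python
from dataclasses import dataclass

# Hypothetical schema for illustration only; not HubSpot's real API.
@dataclass
class ColumnConfig:
    property_name: str   # which CRM property the customer wants to show
    label: str           # header text the customer chooses

@dataclass
class TableConfig:
    object_type: str             # e.g. "contact", "deal"
    columns: list[ColumnConfig]  # the customer's chosen columns

def render_table(config: TableConfig, records: list[dict]) -> list[list[str]]:
    """Turn any customer's records into rows, driven only by their config."""
    header = [col.label for col in config.columns]
    rows = [[str(r.get(col.property_name, "")) for col in config.columns]
            for r in records]
    return [header] + rows

# One generic component serves every table request:
deals_table = TableConfig("deal", [ColumnConfig("dealname", "Deal"),
                                   ColumnConfig("amount", "Amount")])
print(render_table(deals_table, [{"dealname": "Acme renewal", "amount": 1200}]))
```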
The reality these days is that we mostly work in remote settings, and even when we do go to the office, some people will be dialing in. As a result, I believe 80% of these strategies have to do with the fact that we are all people, and 20% are tactics and adjustments for remote settings.

General alignment strategies:
* Build trust ahead of time. This is fundamental, and driving collaboration without it is hard.
* Focus on common goals. There's typically a higher goal that teams can easily align on (e.g. revenue, engagement, a better experience), and the differences show up as you start double-clicking into the "how". Starting the discussion with a longer-term view can also help you skip tactical disagreements.
* Frame, rather than take a position. With common goals in mind, center the discussion on what the characteristics of a good solution are, rather than starting by comparing options. This helps set more objective ground before jumping into the solutions.
* Call out your biases (easier to do when you have trust). In an environment where there is trust, I expect my teams to be able to call out other considerations that may cause them to pull in a certain direction: different stakeholders pushing other ways, past experience, and so on. Some of those reasons may be valid, some may not. Calling them out helps the entire team work through them.

A few remote-specific tactics:
* Set the right structure, if possible. This includes minimizing the number of time zones each team has to work across (in my organization we are trying to limit ourselves to two time zones per team, when possible). If you can, hire senior enough people in the right locations to be able to run autonomously.
* Invest in getting to a clear strategic direction. Having an upfront debate on the direction is time-consuming, but it then helps set the guardrails for autonomous decisions that can happen within the teams, locally.
* If you have the opportunity to meet in person, do so. Especially when working across time zones with little overlap, a good relationship allows you to accomplish more offline and to dedicate the overlapping time to working more effectively through the tougher topics. While I still mostly work from home, I prioritize going to the office when team members from other offices are coming to town (and I am writing this note from the airport, while waiting for a flight - going to visit my team in Austin!)
Mani Fazeli
Shopify Director of Product • December 14
Let's cover this in two ways: (1) how to think about KPIs, and (2) examples of poor ones and how they can be better. I'll also approach the question a little more broadly than product managers alone.

Remember that Key Performance Indicators (KPIs) are used at all levels of a company (e.g. project, team, group, division, exec team) with different levels of fidelity and lag (e.g. daily active users vs. quarterly revenue). The appropriateness of standard KPIs will also differ by industry (e.g. commerce software will not rely on daily active users the way social networks do). Finally, many people use the term KPI when they actually just mean metrics (whether input, output, health, or otherwise). As the name suggests, only the metrics that are key to success should be elevated to KPIs, and there should be as few of them as possible. When I see more than 1 from a team, 3 from a group, or 5 from a division/exec team, there are good odds that some can be cut, amalgamated, or otherwise improved. KPIs are, after all, meant to drive decision making and accountability.

So what are the criteria of KPIs that stand to be improved, and examples of them?

1. Vanity metrics: these look impressive but don't actually measure the success of a product. Examples include the amount of traffic to a website, the number of sign-ups a product has, daily active users for marketplaces that monetize through purchases, or the number of likes across posts on a social network.

2. Poorly instrumented metrics: these are not reliably measured, which can lead to incorrect or misleading conclusions about the effectiveness of a product. For example, if the first step of a conversion funnel (e.g. checkout) has many ingress pathways, and the user can transition in and out of that step before proceeding down the funnel, how well your instrumentation deduplicates that first step is critical to your conversion calculations.

3. Lack of attribution to effort: any metric whose fluctuations cannot be explained by the combination of efforts from the team/group using it as a KPI, plus seasonal and random variability, is going to be ineffective. For example, if a common funnel in the company has multiple teams trying to improve its conversion, each team needs to define a KPI that does not overlap the others, or they won't know whether an outcome came from their efforts or another team's. Note that if all those teams are in the same group (e.g. a growth org), then that group could effectively use the conversion rate as their KPI. When in doubt, or if you're unable to isolate your efforts with lower-level metrics, run an A/B test against every major change by each team to get a better (but imperfect) indication of relative contribution. This criterion covers many grey areas as well. Revenue is a prototypically difficult KPI for individual teams to use because of attribution. However, you can find relatively small teams or groups that build add-on products that are directly monetized, and expansion revenue can be an excellent KPI for them (e.g. a payroll add-on to an accounting system).

4. Unclear tie to the next level's KPI: companies are concentric circles of strategy, with each division, group, and team needing to fit its plans and focus into those of the level above. This includes KPIs, where you'd expect a well-modeled connection between lower-level KPIs and the higher-level ones they drive. For example, say a SaaS invoicing platform sets an X-in-Y goal as an activation hurdle to predict long-term retained users (i.e. 2 invoices sent in the first 30 days). It would be reasonable to assume that onboarding will heavily influence this. But what about onboarding, specifically, will matter? If a team concocts a metric around how many settings are altered in the first 7 days (e.g. chose a template, added a logo, set automatic payment reminders) and wants to use that as their KPI, they'd need to have analyzed and modeled whether that matters at all to new users sending their first 2 invoices (an activation-hurdle sketch follows this list).

5. Lagging metrics at low levels: the closer you get to the team level, the more you want to see KPIs defined by metrics that are leading indicators of success and can be measured without long time delays. Bad KPIs are ones that just can't be measured fast enough for a team to learn and take action. For example, many teams will work to increase retention in a company. But larger customers in SaaS may be on annual contracts. If features are being built to influence retention, it's better to find leading activity and usage metrics at the team level to drive behaviour and measure them weekly or monthly. These can tie into a higher-level retention KPI for a group or division, and keep teams from getting nasty delayed surprises if their efforts weren't destined to be fruitful. The only caveat for this criterion is how platform and infrastructure teams measure themselves. Their KPIs are typically more lagging, and that topic deserves its own write-up.

6. Compound or aggregate metrics: these are made up of multiple individual metrics combined using a formula in order to provide a more comprehensive view of the success of a product without needing to analyze many individual numbers. Examples include effectiveness scores, likelihood indicators, and satisfaction measures. Arguably, many high-level KPIs behave this way, such as revenue and active users, which is why (3) above is important to keep in mind. However, it's the formulas that are particularly worrisome. They inject bias through how they're defined, which is hard for stakeholders to remember over time. You find yourself looking at a score that's gone down 5% QoQ and asking a series of questions to understand why. Then you realize it would have been simpler to look at the individual metrics to begin with. In my experience, these KPIs lead to more harm than good.

7. Lacking health metrics or tripwires: having a KPI is important, but having it in isolation is dangerous. It's rare for lower-level metrics to be improved without the possibility of doing harm elsewhere. For example, in commerce, we can make UX changes that increase the likelihood of conversion but decrease average order value or the propensity for repeat purchases. Therefore, a KPI that does not consider tripwires or does not get paired with health metrics is waving a caution flag.
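As an illustration of the X-in-Y activation hurdle in (4), here is a minimal sketch of how such a metric could be computed from raw event timestamps. The field names and cohort data are invented for the example; the threshold and window are parameters you would model against long-term retention before adopting the metric as a KPI.

```python
from datetime import datetime, timedelta

def activated(signup: datetime, invoice_times: list[datetime],
              x: int = 2, window_days: int = 30) -> bool:
    """X-in-Y activation: did the user send >= x invoices within the window?"""
    cutoff = signup + timedelta(days=window_days)
    return sum(1 for t in invoice_times if signup <= t <= cutoff) >= x

def activation_rate(users: list[dict]) -> float:
    """Share of a signup cohort that cleared the activation hurdle."""
    hits = sum(activated(u["signup"], u["invoices"]) for u in users)
    return hits / len(users) if users else 0.0

# Toy cohort data for illustration.
cohort = [
    {"signup": datetime(2023, 1, 1),
     "invoices": [datetime(2023, 1, 3), datetime(2023, 1, 20)]},
    {"signup": datetime(2023, 1, 5),
     "invoices": [datetime(2023, 1, 6)]},
]
print(activation_rate(cohort))  # 0.5
```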
Ajay Waghray
Udemy Director of Product Management, Consumer Marketplace • August 25
I think the best way to break into the industry as a PM is to get after building tech products yourself. Personally, I left a well-paying job in the energy sector to work on a start-up with no reliable paycheck. Thinking back on that experience, it was crazy beneficial to learn how to work with designers & engineers to build a great product or feature. The act of building a product or feature is the best teacher. I’m not advocating that you should quit your job and not get paid to build stuff like I did! There was a lot that wasn’t so awesome about that. 😅 But I definitely WOULD encourage everyone here to think about how you could do that in your spare time. What problems are you passionate about solving? What kind of product or feature could help you solve that problem? How could you bring that solution to life? How can you talk to prospective customers about it? Even PM candidates that make wireframes or prototypes to show a product that solves a real problem have a leg up over most of the other candidates. I’ll take someone with drive, initiative and passion for the work 10 times out of 10.
Paresh Vakhariya
Atlassian Director of Product Management (Confluence) | Formerly PayPal, eBay, Intel, Verizon • June 22
Generally, the process I follow to prioritize features is:
* Aggregate feedback from customers, users, and stakeholders through various avenues.
* Review user metrics to help identify pain points, feature requests, etc.
* Align feature prioritization with the long-term vision/strategy (this needs to be defined ahead of the prioritization exercise).
* Assess the potential impact and value of each feature using factors such as customer metrics, market trends, competitive analysis, and alignment with company goals/OKRs/metrics.
* Evaluate the effort required to develop each feature, considering factors such as development time, complexity, dependencies, and resource availability.
* Prioritize using a framework such as RICE (Reach, Impact, Confidence, Effort) to rank features based on their importance, urgency, and potential impact (a small scoring sketch follows this list).
* Identify any dependencies between features and evaluate the implications of implementing them in a specific order.
* Get feedback from key stakeholders on your prioritization.
* Continuously review and reassess the feature priorities based on all of the above.
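For the RICE step above, a minimal scoring sketch could look like the following; the feature names and numbers are purely illustrative, with impact on a small relative scale (e.g. 0.25-3) and effort in whatever consistent unit your team uses (e.g. person-months).

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach * Impact * Confidence) / Effort; higher scores rank first."""
    return (reach * impact * confidence) / effort

# Illustrative backlog items, not real data.
features = [
    {"name": "Bulk edit", "reach": 4000, "impact": 1, "confidence": 0.8, "effort": 3},
    {"name": "SSO",       "reach": 900,  "impact": 3, "confidence": 1.0, "effort": 5},
]

ranked = sorted(
    features,
    key=lambda f: rice_score(f["reach"], f["impact"], f["confidence"], f["effort"]),
    reverse=True,
)
for f in ranked:
    score = rice_score(f["reach"], f["impact"], f["confidence"], f["effort"])
    print(f["name"], round(score, 1))
```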
Patrick Davis
Google Group Product Manager • August 18
Thank you for the question. I'm sure this is exactly the answer you're not looking for, which is: "it depends." You're balancing building trust and relationships, understanding your users and the business, and likely an evolving company strategy. So the question you need to ask yourself is: what are you optimizing for? The runway of your company is critical to consider, but I always lean towards asking how we might prioritize learning and trust-building to build out a strong product roadmap.
Farheen Noorie
Grammarly Monetization Lead, Product • October 2
* Resume: Usually, the biggest red flag for me in a resume is when the candidate either doesn't describe any impact metrics or focuses on a vanity metric, e.g., a checkout PM who describes an increase in payment page views instead of an increase in checkout conversion.
* Initial interview: PMs are often advised to use a variety of frameworks in their interviews, primarily to showcase the PM's structured thinking. This is good advice as long as the PM uses a framework for reference instead of answering questions with the framework alone. As a hiring manager, I am more curious about how you thought through a problem, what real-world challenges you came across, and what the outcome was. The idea in an interview isn't to gauge a PM's knowledge of the various frameworks available but to understand the depth and breadth of their thinking in a real-world use case.
Navin Ganeshan
Amazon Head of Driver Products, Amazon Relay • May 31
This is definitely a popular topic of discussion amongst PMs, and probably a heated one at times. This is a good post that covers the most common techniques, including RICE, Kano, and story maps: https://roadmunk.com/guides/product-prioritization-techniques-product-managers/

Personally, I'm less dogmatic about the specific methodology than about the discipline of using some framework, even if it's as basic as attaching value to effort. Most seasoned PMs will concede that they always have to make tweaks or compromises to a standard framework to suit their team or company. So I would not suggest looking for the best one, but for one that works best for your team. Attaching value to effort, or using story points, is always a great place to start (a small sketch follows below). The Kano and MoSCoW models allow for a more nuanced approach that lets you distinguish between must-haves and nice-to-haves, and they help you calibrate how much to invest in back-end scaling, which may not be as noticeable. But always take into account what stage of evolution your product is in and the extent of data you have to make prioritization decisions. Your approach is necessarily different when launching a new product vs. evolving one that is several generations old.
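As a sketch of the basic value-to-effort discipline mentioned above, combined with MoSCoW buckets, the snippet below ranks a small backlog. The items, scores, and bucket labels are invented for illustration; any relative value and effort units (e.g. story points) work as long as they are consistent.

```python
def value_to_effort(value: float, effort: float) -> float:
    """Simple ratio: relative value divided by estimated effort."""
    return value / effort

# Illustrative backlog with MoSCoW buckets.
backlog = [
    {"name": "Faster search",     "value": 8, "effort": 5,  "moscow": "Must"},
    {"name": "Back-end sharding", "value": 6, "effort": 13, "moscow": "Should"},
    {"name": "Dark mode",         "value": 3, "effort": 2,  "moscow": "Could"},
]

# Must-haves ship regardless; everything else is ordered by value-to-effort.
musts = [f for f in backlog if f["moscow"] == "Must"]
rest = sorted((f for f in backlog if f["moscow"] != "Must"),
              key=lambda f: value_to_effort(f["value"], f["effort"]),
              reverse=True)
for f in musts + rest:
    print(f["name"], round(value_to_effort(f["value"], f["effort"]), 2))
```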
Suhas Manangi
Snap Head of Product - Trust & Safety • June 7
Top 3 traits that make a good PM a good AI PM:
1. Understanding foundational ML concepts and having used them to make product decisions, e.g., statistical regression, causation vs. correlation, AUC, precision/recall (P/R), features vs. labels, feature distributions, model training, model drift and automatic retraining, etc. (a small precision/recall sketch follows below).
2. Awareness of the potential bias and fairness needs in ML solutions they have launched in the past, and having used model observability and interpretability to explain the model's output for their product's corner cases.
3. The ability to scale product decisions from a single global configuration, to customization per user segment, to a fully scaled, personalized product experience for each and every user.
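As one concrete example of the precision/recall concepts in point 1, here is a minimal sketch computed from scratch. The labels and predictions are toy data invented for illustration; in practice you would pull them from your model's evaluation pipeline.

```python
def precision_recall(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: a hypothetical Trust & Safety classifier flagging abusive content.
labels      = [1, 0, 1, 1, 0, 0, 1, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]
p, r = precision_recall(labels, predictions)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```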