AMA: Gong Director of Product, Rapha Danilo on AI Product Management
April 27 @ 11:00AM PST
Rapha Danilo
Gong Director of Product Management • April 28
The same first principles still apply to AI PMs, but with an added dimension of complexity: a generational paradigm/platform shift like AI requires a PM to rethink the benchmarks for what good looks like, consider the new types of outputs/inputs required, and the additional internal communication paths needed to achieve goals. It's most similar to being an early PM in mobile 10 years ago, or in the early web/internet even before that.

What doesn't change:

* (+) Focus on key outputs:
  * Focus on customers' jobs to be done
  * How well does our product solve them? What user behaviors should we see?
  * Measure the utilization and product KPIs/metrics that confirm these behaviors are really happening
  * Which business metrics (e.g. revenue, retention) will this impact?
* (-) Budget the required inputs:
  * How much time, resources, and investment is required to achieve this?

What's unique about AI product management:

1. Benchmarks move a lot faster: At the pace of change we're in, what was best-in-class a few months ago can be below-market today. These shifts in what "good" looks like usually happen over years, not weeks/months, making it harder to define success even if the success metric you are measuring against doesn't change.
2. Need to manage more complex inputs and outputs: You need to add more dimensions to your ROI analysis to account for the ML component compared to a traditional software product. See the examples below.
3. Bigger need to break through the noise: With the ongoing AI frenzy, PMs need to be very careful not to put too much weight on the PR value of a new AI feature. You will get pressure to rush an AI feature, maybe from your leadership or the GTM teams, mostly to make noise around it and capture part of the hype. That's not necessarily a bad idea, btw, but remember that ultimately you will need to build, maintain, and live with the outcomes of that AI feature.
Examples of new inputs/outputs to consider:

* Accuracy, reliability, and speed of your ML models
  * e.g. What investment would it take to improve accuracy by X%? How well do our models perform in different languages? Are there labeling costs or MLOps/infrastructure investments we need to make?
* Impact on users: How would this affect our customers' ability to complete their jobs to be done in our product?
* Impact on business: How will this show up in our product and business metrics?
* Bias, trust, and safety
  * How are we measuring for it?
  * What are the impact and risks for customers and their adoption of our product?
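To make the "how well do our models perform in different languages?" question concrete, here's a minimal sketch of breaking accuracy down by segment (language, customer tier, etc.) instead of reporting only an aggregate number. The function name and the toy data are illustrative, not from any specific product:

```python
# Illustrative sketch: per-segment accuracy, so a regression in one
# language doesn't hide inside a healthy-looking aggregate metric.
from collections import defaultdict

def accuracy_by_segment(examples):
    """examples: iterable of (segment, predicted_label, true_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for segment, predicted, actual in examples:
        total[segment] += 1
        if predicted == actual:
            correct[segment] += 1
    return {seg: correct[seg] / total[seg] for seg in total}

results = accuracy_by_segment([
    ("en", "positive", "positive"),
    ("en", "negative", "positive"),  # an English miss
    ("de", "positive", "positive"),
    ("de", "negative", "negative"),
])
# results == {"en": 0.5, "de": 1.0}
```

The same breakdown pattern applies to latency or cost per segment, which is exactly the kind of input a PM needs when budgeting "what would it take to improve accuracy by X% in language Y?"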
Rapha Danilo
Gong Director of Product Management • April 28
Here are a few common mistakes I see, in no particular order (many I've made myself):

* Not talking to enough customers
  * IMO you should be talking to at least a couple of customers a week, and listening to even more customer calls, e.g. with Gong, #ShamelessPlug. But seriously, listen and talk to more customers; it's a gold mine you're sitting on and almost certainly not utilizing enough.
* Focusing on user personas vs. jobs to be done
  * This almost always results in a biased, solution-centric view of reality vs. why customers actually come to your product to solve their pain points.
* Not focusing enough on workflows
  * Features don't get adopted, or fail to retain users, when the PM lacks a deep understanding of the users' existing workflows vs. the new workflow that your feature/product would create.
  * Most PMs focus too much on features/functionality at the high level, and not enough on the tactical workflows, i.e. how this will literally fit into the user's day-to-day habits. It's not sexy, but great PMs obsess about workflows, not just the theory of the features.
* Not thinking like a mini-CEO / having low business acumen
  * Great PMs think like mini-CEOs: they tie their objectives to clear, measurable product and business metrics that move the needle for the company. They're mindful of the timelines and inputs required to get there. And they know how to steer and align the teams toward those goals.
  * In such an uncertain market, this is arguably one of the few levers PMs can control to quantify the value of their contributions to their company and make themselves hard to replace.
* Poor stakeholder management
  * One of the hard parts of being a PM is being able to ruthlessly prioritize, say no (especially to someone you want to say yes to), and steer the team in the right direction to achieve clear business and product goals the team is aligned on.
  * PMs need to balance demands from customers, sales, CS, engineering, leadership, etc.
  * It's not easy, and sometimes it will feel like too much "politics", but it is an absolutely critical skill and will serve you in whatever you do next.
Rapha Danilo
Gong Director of Product Management • January 19
The #1 trait I see the best companies look for now when hiring ANY senior product manager in an area where AI matters is a strong intuition and taste for how AI can be applied in a product. That comes down to having (1) knowledge of fundamental AI concepts and (2) strong product sense, in order to properly gauge opportunities and risks.

* Develop a strong intuition around how AI should and should not be applied in your product, e.g.:
  * Once you understand your users' JTBDs, Gen AI can be great for helping users quickly produce content first drafts and summarize content stored in your app, e.g. summarize a meeting transcript and generate an email follow-up draft to the customer. It's generally not a great tool for tasks like classification or tasks that require 100% accuracy.
  * Depending on the input/output, traditional ML models may be best, or different types of models, e.g. NLP (text), or multimodal for a mix of text/images/audio/video.
  * Consider the fundamentally different ways to interact with AI features. Do your users prefer to interact with a dashboard for a certain task vs. a completely open-ended chat interface? Should your UI suggest, and even limit, what the user can ask the chat? Should your AI model output include a confidence score or be deterministic?
  * You may need different types of feedback loops in place for the model to learn and the feature to be useful (e.g. a 'was this accurate?' button, real humans in the loop checking some results or labeling data, etc.).
  * One exercise to try: observe your favorite AI features in products you use and try to reverse engineer how they were built and what the tradeoffs might have been.
* Understand and have a POV on your key levers when building AI capabilities: data quality, AI model training, accuracy, safety, security, latency, infra, etc.
  * Data quality is a critical input to how your model learns. Garbage (data) in, garbage (model) out.
  * The bar for accuracy, safety, and security differs widely depending on how sensitive the data being handled is, and how critical the task/workflow your AI feature is applied to.
    * e.g. Consider whether your product automatically accepts/rejects credit applications vs. helps designers draft alternative iterations of a UI based on their first draft. The stakes are different, and AI model hallucinations are a bug in the former use case but basically a feature in the latter.
  * Lastly, latency sounds like an engineering problem, but it's a critical topic to consider for an AI feature to be feasible.
    * e.g. Imagine you had to wait 10 more seconds for a new, AI-powered Google search to run, but the results were 20-50% more relevant. Most users would not switch to it for most searches. But some users, for some use cases, might (e.g. see Perplexity.ai's success).
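The confidence-score and human-in-the-loop points above can be sketched as a simple routing rule. Everything here is hypothetical (names, thresholds); the whole point is that the thresholds are a product decision that depends on the stakes of the workflow, per the credit-application example:

```python
# Illustrative sketch: routing a model output based on its confidence score,
# with a human-in-the-loop tier for the middle band. Thresholds are made up;
# a high-stakes workflow (e.g. credit decisions) would set them far higher.

SHOW_THRESHOLD = 0.90    # confident enough to surface directly to the user
REVIEW_THRESHOLD = 0.50  # middle band goes to a human reviewer / labeler

def route_suggestion(confidence: float) -> str:
    """Decide how (or whether) an AI output reaches the user."""
    if confidence >= SHOW_THRESHOLD:
        return "show"          # surface it, with a 'was this accurate?' button
    if confidence >= REVIEW_THRESHOLD:
        return "human_review"  # queue for human-in-the-loop checking
    return "suppress"          # hide it; keep the example for later labeling

print(route_suggestion(0.95))  # -> show
print(route_suggestion(0.70))  # -> human_review
print(route_suggestion(0.20))  # -> suppress
```

Both the "suppress" and "human_review" branches double as the feedback loop: the examples they catch are exactly the ones worth labeling to improve the model.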
What are the different types of AI Product Managers? Are we going to see a role similar to a Technical Product Manager, focused on the data science team?
My question is really whether a PM needs to understand AI concepts to be a successful AI PM.
Rapha Danilo
Gong Director of Product Management • April 28
I think at an even higher level we will actually see at least three different approaches to AI product management in the organization structure itself (not necessarily mutually exclusive):

* Dedicated AI PMs: Absolutely, more and more companies will have dedicated AI product managers.
  * Possibly with a further separation in focus areas between internal tools vs. customer-facing AI features
  * And/or different focus areas along the AI stack, e.g. infra, MLOps, and model-level performance vs. more customer-facing work at the application level
* Hybrid PMs: Other companies may try to integrate "AI product management" as a skill you can train all your PMs on and that they should all have, rather than a focus area for certain PMs.
* Empowering R&D: Other companies may give more ownership to R&D leaders and push them to be more business-centric in their thinking, essentially taking on part of the role an AI PM plays at other companies.

Which approaches gain popularity will have massive second-order effects on which skills PMs and R&D teams will need to hone to be competitive on the job market, which profiles come into higher demand, the relative size and influence of those personas inside the organization, etc.
Rapha Danilo
Gong Director of Product Management • April 28
Taking a step back, I think the first PM needs to act a lot like a head of product in the early days. The ones I see doing well for their company (and themselves) typically focus on doing three things well, where others only do one or two:

1. PM execution
   * The typical activities you expect from a PM, i.e. research, talking to customers, ideation, roadmap.
   * This is foundational, but I think you will likely fail, or at least get overlooked for leadership opportunities, if you only spend time on this.
2. Building the PM playbook
   * A key part of establishing the function is determining the stage-appropriate processes, tools, and people (hires) needed to achieve your goals.
   * By owning the execution piece, you have an unfair advantage (and incentive) in shaping what the playbook should look like.
   * If you can take most of this off your founders' plate, IMO you almost instantly distinguish yourself into the top performance quartile.
   * You will also have greater control of your own destiny, i.e. who joins the team and how prioritization decisions get made.
3. Stakeholder management and education
   * Expect to spend a third of your time (ideally not much more) aligning with key stakeholders, not just on what will make it into the roadmap and be shipped when, but also on how decisions should get made and why there is a business justification for certain resources to achieve your goals.
   * In a startup, your leadership and key stakeholders probably have a limited understanding of PM best practices. In fact, they may have a (somewhat biased) view that a lot of it is unnecessary red tape that will slow down execution. Know when to push back, and when to optimize for speed of execution and document learnings later.
   * I think the best founding PMs think like mini-CEOs: they scrutinize the business value of features (output) as well as the inputs (imagine this is your own money or resources). They push back and say no when needed, and they're flexible enough to update the playbook and processes as teams grow.
To answer your question more specifically, here's a simple sequence I'd use to prioritize my startup product roadmap from first principles:

* Align on north star goals:
  * Where do we need the business metrics to be in 6-12 months in order to raise our next round of funding (or reach whatever the company north star is, e.g. profitability)?
  * As a result, where do we need the product metrics/behaviors to be in 6-12 months to support that?
* Work backward from north star goals to your current reality:
  * What are the 3-5 key jobs to be done that our customers come to our product to achieve?
  * How well does our product help customers complete these jobs to be done today?
  * What customer behaviors do we need to see in the product to know this is true? What product KPIs/metrics can we use to measure and confirm that these behaviors are really happening?
  * Which business metrics (e.g. revenue, retention) will this impact?
* Other first principles I'd think about:
  * Leverage the fact that your team is small to talk to more customers. You should become the expert on your customers' jobs to be done and current workflows. This will give you all the ammo you need to confidently decide, slice, and justify your roadmap.
  * More than half the time, in the early days, doubling down on features or a product area that already works (i.e. has usage) will yield better results than shipping a shiny new feature. Communicate this accordingly to your stakeholders.
  * Be very clear about whether a feature is meant to play offense or defense (i.e. filling a gap with a competitive product). There's nothing wrong with either, but a large lack of either signals you are focusing either too much or too little on the products you are evaluated against.
Rapha Danilo
Gong Director of Product Management • April 28
The first challenge is actually self-imposed by product teams. Ask yourself: are we implementing an ML feature mostly because of FOMO / not wanting to "fall behind", or because of a first-principled, customer-first assessment of need and opportunity?

Assuming we've identified a real need/opportunity for ML in our product, we can ask what type of ML feature best solves the job to be done for our customers. IMO this is the PM's responsibility to own. It's important to involve and partner with R&D as early as possible to scope the technical challenges of different solutions, but it's unrealistic to expect them to own the step before, i.e. identifying and defining the problem-solution set for the PM.

Another aspect: I'd also ask whether the solution ought to be incremental or a fundamental/significant shift from our current set of features and workflows. You don't want to invest in building incremental ML features (and the engineering/architecture decisions that come with them) only to realize 6-12 months later that you needed a much deeper overhaul, or vice versa.

For example, let's say your app provides a set of dashboards and search functionality to help users find insights in their data. A well-executed conversational interface powered by AI may make it an order of magnitude faster for a user to get the insights or answers they need compared to the status quo, but it would require a significant overhaul of how users interact with your app today, among other potentially large decisions, e.g. around infra. Championing such a large overhaul as a PM will almost always require significant executive buy-in. But if there is a genuine business case to be made, and you throw your hat in the ring to help execute it, it could be a massive career accelerant for you.