
Deepti Pradeep
Director of Product Management, Growth, Adobe
Adobe Director of Product Management, Growth • February 19
Governance / air-traffic control of A/B tests is essential. Running parallel A/B tests on the same KPI and same audience will not guarantee clean results. If improvements made by teams go directly to production and land before an A/B test starts, it may not be an issue, as both control and variant receive the changes, which preserves the validity of the results. Most testing platforms come with some built-in conflict resolution, but I highly recommend bringing teams together in targeted forums not only to ensure there is no test pollution but also, and most importantly, to align on learnings.
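To make the air-traffic-control idea concrete, here is a minimal sketch (in Python, with hypothetical experiment names and salts; not any specific platform's mechanism) of one common way to keep two experiments that share an audience mutually exclusive: a deterministic, salted hash splits users into non-overlapping layers, and a second hash assigns control vs. variant within each layer.

```python
import hashlib

# Hypothetical sketch only: keep two experiments that would otherwise collide
# on the same KPI and audience mutually exclusive via layered hash bucketing.

def bucket(user_id: str, salt: str, buckets: int) -> int:
    """Deterministically map a user to one of `buckets` buckets."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign(user_id: str) -> dict:
    # Layer split: each user is eligible for exactly one of the two experiments.
    layer = ("exp_pricing_banner"            # placeholder experiment name
             if bucket(user_id, "layer-salt-2024", 2) == 0
             else "exp_onboarding_flow")     # placeholder experiment name
    # Arm split inside the layer uses a different salt, so it is independent
    # of the layer split (roughly 50/50 control vs. variant).
    arm = "variant" if bucket(user_id, layer, 100) < 50 else "control"
    return {"experiment": layer, "arm": arm}

if __name__ == "__main__":
    print(assign("user-12345"))
```

Mutual exclusion protects the read, but it does not replace the cross-team forums described above; only those align the learnings.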
Adobe Director of Product Management, Growth • February 19
1. Creative self-starter: Often there isn't a set path or ready direction to get started. A strong growth PM is able to roll up their sleeves and get into the growth tech stack and data, set direction despite ambiguity, build relationships through credibility, influence leaders, and restart with a new or different angle if required.
2. Patiently data-driven: A strong growth PM knows both their growth model and their user research like the back of their hand, can build strong hypotheses informed by both, and yet objectively questions everything they already know and lets the data speak. Every experiment, success or fail, provides a unique window into user behavior and informs future iterations. Knowing that one cannot "will" an experiment into winning, and being patient but calculated, is critical.
3. Relentless rigor: A strong growth PM keeps every cross-functional partner (product, design, data, engineering, operations, marketing, GTM, finance) proactively educated on roadmaps, learnings, impact estimates, and forecasts, tuning their negotiations and narratives to the partner and audience while always holding the bar on standards. A growth PM not known for their overall rigor will not stand the test of time across this matrixed set of stakeholders.
Adobe Director of Product Management, Growth • February 19
More often than not, an experiment fails. Failed experiments are the biggest sources of learning. Was your hypothesis proven wrong? Was the test design not pushing it far enough? Were there other data anomalies? Were there any positive signals in a subgroup? One needs to be careful when slicing the data by many cuts, as the validity of an A/B test read depends on having enough sample for the effect size you are trying to detect. Cut it too much and you might be cherry-picking the data, thereby losing its meaning for decision making; or worse, you may lose yourself in a thousand cuts. I have found it best to align failed test reads directionally with a broader learning agenda and lean back on user testing and user research to come at the hypothesis in a few different ways. In fact, even if the test is a winner, I would still approach the hypothesis in different creative ways until finding paths that are optimal for users and the business. There is never an end to experimentation.
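As a rough illustration of why over-slicing erodes a test read, here is a back-of-the-envelope sample-size sketch (standard normal approximation for two proportions, Python standard library only; the baseline and lift numbers are invented for illustration).

```python
from statistics import NormalDist

def required_n_per_arm(p_control: float, p_variant: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users per arm needed to detect p_control -> p_variant."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return round((z_alpha + z_power) ** 2 * variance / (p_control - p_variant) ** 2)

# Hypothetical numbers: detecting a 2.0% -> 2.2% conversion lift.
n = required_n_per_arm(0.020, 0.022)
print(f"~{n:,} users per arm for the full population")
# Slice that traffic into ten segments and each cut has only a tenth of the
# sample, so the same lift becomes undetectable in any single segment; chasing
# whichever cut happens to look positive is exactly the cherry-picking trap.
```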
Adobe Director of Product Management, Growth • February 19
PLG is truly effective only once a product has achieved product-market fit. Running experimentation while you are still figuring out the right types of audiences is not really recommended. However, you can always imbue your early product with PLG concepts through freemium / trial experiences, frictionless signup and onboarding, and in-app messaging. Even though you may not be able to run statistically significant experiments with large effect sizes, you can still bring that ethos through constant feedback loops (private betas, quick in-app surveys, user testing) and holdout groups (e.g., release an experience to 90% of your users and hold the rest back). Always keep your backlog of design ideas (the ones that might have been top considerations from a design standpoint) and maintain your learning agenda (what you want to learn about your users and your product direction). And most importantly, be open to changing your own mindset and let the data speak. Just because you have invested a lot of time and effort in a certain product direction doesn't mean it's the only way.
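For the holdout idea, here is a minimal sketch of a directional read when samples are too small for a formally powered test; the conversion counts below are hypothetical and the normal-approximation interval is deliberately crude.

```python
from statistics import NormalDist

def conversion_with_ci(conversions: int, users: int, confidence: float = 0.95):
    """Conversion rate with a rough normal-approximation confidence interval."""
    p = conversions / users
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    margin = z * (p * (1 - p) / users) ** 0.5
    return p, p - margin, p + margin

# Hypothetical counts: 90% of users get the new experience, 10% are held back.
released = conversion_with_ci(conversions=312, users=9_000)
holdout = conversion_with_ci(conversions=26, users=1_000)
print(f"new experience: {released[0]:.1%} ({released[1]:.1%} to {released[2]:.1%})")
print(f"holdout:        {holdout[0]:.1%} ({holdout[1]:.1%} to {holdout[2]:.1%})")
# Overlapping intervals mean the read is directional at best; pair it with the
# qualitative loops above (betas, in-app surveys, user testing) before acting.
```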
Adobe Director of Product Management, Growth • February 19
We first absorbed all the relevant user research available and spent a ton of time with the core PMs to understand what had been tried, what worked well, and what failed. In parallel, we also looked at data and business trends with the core PMs and assessed not only the biggest areas of impact but also areas with potential low-hanging fruit, all of which were aligned to our top user needs. Aligning the vectors of the growth team with those of the core product organization, and leaning on the wealth of knowledge that already exists, is critical to initial success. Our biggest challenges in navigating the shift:
* Data instrumentation and proof of direction: It takes a while to get the new KPIs off the ground, especially when it comes to engagement loops. What should the KPIs be (e.g., what is the setup/aha moment)? How are they correlated to business impact? How many KPIs do we prioritize this quarter, and this year? This takes a good few months to establish and prove out.
* Overall dynamics: Setting up the growth squads needed for success, with the necessary number of engineers, designers, data scientists, and PMs, is the hardest part, especially in years when resourcing is not easy to come by. This requires constant discussions (read: negotiations), operational efficiencies, and ruthless prioritization.
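To illustrate the "how are the KPIs correlated to business impact" question above, here is a tiny, hypothetical sketch: compare the downstream conversion rate of users who hit a candidate setup/aha event against those who did not (the event name and the eight sample records are made up).

```python
# Hypothetical per-user records: (hit_aha_event, converted_to_paid).
users = [
    (True, True), (True, False), (True, True), (False, False),
    (False, False), (True, True), (False, True), (False, False),
]

def conversion_rate(records, hit_aha: bool) -> float:
    """Conversion rate within the cohort that did / did not hit the aha event."""
    cohort = [converted for aha, converted in records if aha == hit_aha]
    return sum(cohort) / len(cohort) if cohort else 0.0

print(f"converted | hit aha:    {conversion_rate(users, True):.0%}")
print(f"converted | missed aha: {conversion_rate(users, False):.0%}")
# A large, stable gap on real volumes (not eight rows) is a first signal that
# the candidate KPI is worth instrumenting, forecasting, and prioritizing.
```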
Adobe Director of Product Management, Growth • February 19
Expecting glory right out of the gate. The thrill of launching a first test is huge. The disappointment when reality meets expectations can be even bigger. Any number of things may go wrong in the beginning. Something may be off with the test setup (dang, that missed attribute), the test might fail gloriously (think big red stat-sig negative), the test may be a disappointing win (whoa... wait, that's it? I was hoping for 10X impact...), or something totally out of one's control derails the test (sorry, data outage midway through the test, need to run it again). First-time (and not first-time) growth PMs must instead focus on the bigger picture. Really understand your customers, the product, and the market landscape. Be curious. Understand the opportunities (user pain points, drop-off points in the experience, greenfield areas) and the underlying growth loops, chart your unique learning agenda, and then put in place the roadmap needed to tap into all of these. Drive the roadmap diligently and bring back learnings effectively. With this, you will build your credibility: a currency much needed to then drive collective creativity and audacious ideas within your org.
Adobe Director of Product Management, Growth • February 19

1. Executive sponsors: If your org leaders don't think PLG is a priority, it's not happening. Period.
2. Effective structure: Every growth PM on the team needs the right balance of focus (depth) and scope (breadth).
3. Functioning ecosystem: The more intuitive the A/B testing and analytics platforms are, the more seamless your growth team's operations are. The more you invest in the squad (engineers, data science, design, PMs, operations), the more effective the outcome and impact.
Credentials & Highlights
Director of Product Management, Growth at Adobe
Product Management AMA Contributor
Top 10 Product Management Contributor