At what point is a solution ready to be shipped?
It's never going to be "as ready" as you want it to be. Because we don't work in a vacuum, the product is never built to 100% (I've never worked on anything where we had P2 requirements built into the product on launch day - it doesn't happen). There are three components that come together to identify when a product is ready to be shipped:
1) Business needs: If you run an ecommerce product, then your product needs to be out for Black Friday and the holiday rush. Similarly, if you work in fitness, January 1 is going to be a big day. Aligning on these key dates and deadlines with stakeholders up front is important. At Quizlet, we focus on the Back to School season, which we define as August 1 through mid September. As a result, we'll sometimes know that something needs to be live on a given date — and then we'll work around that business need to scope the project and/or add more resources as needed.
2) User Jobs to be Done: I wrote in one of my other responses about being obsessed with problems. Once you're sure what the problem is, you can reframe it as a job to be done to focus your work. For Quizlet and the teacher products I worked on, this was "When I'm teaching my students, I want to keep them engaged and check for understanding, so that I can differentiate my teaching as needed." When you can take your MVP back to users and test it out, and verify that it's serving the job you've defined, you're ready to ship.
3) Eng readiness: Especially when launching a new product, ensuring that your engineering partners are ready is key. This includes asking questions like "What happens if we're wildly successful and this product is used by XX people overnight?" and "Can we test our logging to make sure these key actions are being logged?" etc. Moving from MVP/beta to a GA launch means knowing that your product can scale up as it gains traction and that you'll be able to measure and evaluate its success!
It depends on the product's lifecycle stage and goals.
1. 0>1 product: Goal - find product-market fit. Here it's very important to think through your hypotheses, the possible outcomes (prove, disprove, insufficient signal), and what you would do next - the next set of features, a V2 of the MVP, and so on. Charting this out helps you pin down what you absolutely must answer to get to the next step. Once your product lets you test your hypothesis, ship it! This assumes the product/solution is usable and stable (not overtly buggy, etc.). Also, in general, start with a smaller userbase and tweak and polish before rolling out broadly.
- Consumer 0>1: IMO, consumers have high expectations and they WILL compare your product with far more mature products. So it's more important to pay attention to usability and product excellence (latency, bugginess, low-connectivity coverage, etc.) for consumer products. It's easier, though, to experiment with consumer products than with enterprise ones.
- Enterprise 0>1: Here, offline/hacky support solutions can work initially to provide operational support without building all the bells and whistles. So focus even more heavily on which problems are worth solving and whether the solution ideas resonate. Remember, though, that for B2B2C, the enterprise might think the solution works, but their users may not think the same. So user research is still no substitute for actual launch data.
2. 1>100 product: Goal - growth in engagement, userbase, retention, monetization, etc. I think it's good enough to ship when the solution fixes a known issue or gap, even incrementally. Iterate depending on the effort vs. impact equation. This fix could help retain some existing users, so it still matters. If it's a novel feature, then follow the 0>1 principles in #1.
For early-stage products, a feature or solution is usually ready to ship when it covers the functionality for the main user journey(s), i.e. the "golden paths" defined in your PRD or user journey maps, and passes the predefined usability bar from a QA perspective: a user can complete the common tasks within the expected reliability and performance metrics, even if all exception cases have not been hardened yet.
In basic terms, a user should be able to use the solution to complete the predefined task in a reasonable manner, and be able to provide feedback that can be used to enhance the product.
For example, if the product keeps crashing in the middle of the user task journey, then that leads to a bad user experience such that the user can’t really provide effective functionality feedback for you to evolve the product, and they may never adopt the product at all.
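To make that "usability bar" concrete, here's a minimal sketch of an automated golden-path check in Python. Everything in it is an assumption for illustration: the base URL, the /signup and /tasks endpoints, and the 1.5-second latency budget all stand in for whatever your own journey steps and thresholds are.

```python
# Minimal sketch of a golden-path release check. The environment URL, the
# /signup and /tasks endpoints, and the latency budget are all hypothetical;
# substitute the steps of your own main user journey.
import time
import requests

BASE_URL = "https://staging.example.com/api"  # hypothetical test environment
LATENCY_BUDGET_S = 1.5                        # illustrative performance bar per step

def step(name, method, path, **kwargs):
    """Run one step of the golden path and enforce reliability + latency."""
    start = time.monotonic()
    resp = requests.request(method, f"{BASE_URL}{path}", timeout=10, **kwargs)
    elapsed = time.monotonic() - start
    assert resp.ok, f"{name} failed with HTTP {resp.status_code}"
    assert elapsed <= LATENCY_BUDGET_S, f"{name} took {elapsed:.2f}s (budget {LATENCY_BUDGET_S}s)"
    return resp

def test_golden_path():
    # The common task a user must be able to complete end to end before shipping.
    step("sign up", "POST", "/signup", json={"email": "new.user@example.com"})
    step("create item", "POST", "/tasks", json={"title": "My first task"})
    step("view items", "GET", "/tasks")
```

A check like this (run with pytest against a staging build) deliberately covers only the happy path; exception cases can stay unhardened for now as long as the golden path stays green.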
Several factors come into play in deciding this:
- Product target market: B2B / B2C / Enterprise. Striking the right balance between customization, integrations, data security, and legal compliance is key for B2B and Enterprise markets. For consumer markets, on the other hand, mass-market appeal driven by modern UX is often what matters most.
- Product maturity phase: 0>1 / maintain-and-grow / market leader. For a 0>1 product, the focus is on validating the base hypothesis about product-market fit, so a functional proof of concept is usually good enough. As the product matures, the focus often shifts to scale, data security, and continued market differentiation.
- Immediate goals, focus areas, and key stakeholder expectations: acquire customers and grow the userbase / get profitable / customer satisfaction & delight / competitive differentiation (first to market?) / riding the current market trend.
- Effort vs. impact it can deliver (a rough way to put numbers on this is sketched below).
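One common way to quantify the effort-vs-impact factor is a RICE-style score (reach × impact × confidence ÷ effort). The framework itself is standard, but the feature names and numbers below are entirely made up for illustration.

```python
# RICE-style prioritization sketch: score = reach * impact * confidence / effort.
# Feature names and all numbers are invented purely for illustration.
candidates = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-weeks)
    ("bulk import",     4000, 2.0, 0.8, 6),
    ("dark mode",       9000, 0.5, 0.9, 3),
    ("SSO integration", 1200, 3.0, 0.7, 8),
]

def rice(reach, impact, confidence, effort):
    return reach * impact * confidence / effort

for name, *params in sorted(candidates, key=lambda c: rice(*c[1:]), reverse=True):
    print(f"{name:16s} RICE = {rice(*params):8.1f}")
```

The exact numbers matter less than forcing the comparison: a small fix with decent reach can outrank a shiny feature that takes months.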
Ultimately, the decision to ship a product or solution is often made by considering a combination of these factors, in addition to the specific requirements and constraints of the project and stakeholders involved. It's crucial to strike a balance between achieving a high level of readiness and meeting the desired timeline for delivery.
You should think of it this way: it's ready to be shipped when it's the first, shittiest version, and at the same time the best version of itself. The question is not so much about the readiness of the solution, but rather: ready for whom?
I've witnessed teams get too excited when creating a new product and open the floodgates to let anyone try it too early. That usually doesn't end well... You don't want many people jumping on the solution when it's not ready for them: first impressions won't be good, they won't stick around, and you'll need to work hard to get them to try again.
Here's the approach we take in my teams: basically, for anything we ship, whether that's big new features to Jira Product Discovery or new products (including Jira Product Discovery itself), we work with progressively more customers (10-100-1000) before making it generally available. It helps us test the solution very early with a handful of customers to get feedback and make sure the solution delivers on its promise for them, and ensures that everyone only gets the solution when it's ready for them - thereby increasing retention & minimizing churn.
0->10 customers (preview)
In the first phase of early stage bets we only work with a maximum of 10 customers. We unpack the problem, test solutions and iterate fast, which is a process that's best done with a small number of users/customers who feel the problem the most. And we get the solution right for them. It's easier and much more focused to do it this way than to "throw it out there and see what sticks". Users who feel the pain the most will be happy to work with us, and we can chat with them on Slack/Zoom/email easily. They're happy to work with incomplete solutions (we can take a LOT of shortcuts) and we can avoid piling on untested assumptions. If the solution doesn't work we can throw it away - it's cheap, no harm done.
To demonstrate progress to leadership we use metrics that best represent what we're trying to prove at each phase. We started with the following for the private preview of Jira Product Discovery:
10 active teams have been using the product for more than 3 months and plan to continue using it when we enter beta
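A bar like this is simple to compute straight from usage data. The sketch below assumes a hypothetical list of (team_id, activity_date) events; the "plan to continue using it" half of the metric still comes from actually talking to those teams, not from the data.

```python
# Sketch of the "teams active for 3+ months" half of the preview metric,
# assuming a hypothetical list of (team_id, activity_date) usage events.
from datetime import date

events = [
    ("team-a", date(2023, 1, 5)), ("team-a", date(2023, 2, 10)), ("team-a", date(2023, 4, 20)),
    ("team-b", date(2023, 3, 1)), ("team-b", date(2023, 3, 20)),
]

by_team = {}
for team, day in events:
    by_team.setdefault(team, []).append(day)

# Count teams whose first and most recent activity are at least ~3 months apart.
long_term_active = [
    team for team, days in by_team.items() if (max(days) - min(days)).days >= 90
]
print(f"{len(long_term_active)} team(s) active for 3+ months")  # -> 1 team(s)
```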
10->100 customers (alpha)
Then we progressively invite more customers to use the solution. At this stage we usually need to polish rough edges and address more scenarios based on what we learn from working with customers who have varying needs, and who may be less willing to work with a rough prototype. It's also harder to collaborate with every customer 1-on-1, so we need to create better onboarding material, demo videos, etc. We move support to a community forum.
Our success metric at that stage when creating Jira Product Discovery was still focused on problem-solution fit, but for more customers:
Product market fit score of 40% or greater with 100 active teams
(I highly recommend the Product market fit score survey, and you can read all about it here)
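For context, the product-market fit score (the Sean Ellis test) is the share of surveyed users who say they would be "very disappointed" if they could no longer use the product, with 40% as the usual bar. A tiny sketch of the calculation, using made-up survey responses:

```python
# Product-market fit (Sean Ellis) score: the share of surveyed users who would be
# "very disappointed" if the product went away. Responses here are made up.
responses = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 35
    + ["not disappointed"] * 20
)

pmf_score = responses.count("very disappointed") / len(responses)
print(f"PMF score: {pmf_score:.0%}")  # 45% -> clears the 40% bar mentioned above
```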
100->1000 customers (beta)
At some point we become ready to share with more users - in our case, for Jira Product Discovery, it was when we reached a 50% PMF score across more than 100 active teams. At that stage we needed to focus on making the solution fully self-service: we couldn't afford to have to talk to every single customer at some point in their journey. So we focused on in-app onboarding, we improved usability, we polished the design, we scaled the technical implementation, we trained support and sales teams, etc. It's really about making it ready for prime time.
It was also time to change how we measure success by introducing more metrics that represent the health of the funnel as more customers discover, try and adopt the solution on their own. That’s when we started adopting pirate metrics (AARRR), and here's a great read about them.
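Pirate metrics break the funnel into Acquisition, Activation, Retention, Referral and Revenue. Below is a minimal sketch of that kind of funnel report; the stage definitions and counts are invented, since what counts as "activated" or "retained" differs for every product.

```python
# Sketch of a pirate-metrics (AARRR) funnel report with made-up stage counts.
funnel = [
    ("Acquisition", 10_000),  # e.g. signed up
    ("Activation",   4_200),  # e.g. completed onboarding / first key action
    ("Retention",    1_800),  # e.g. still active after four weeks
    ("Referral",       450),  # e.g. invited a teammate
    ("Revenue",        300),  # e.g. converted to a paid plan
]

previous = None
for stage, count in funnel:
    note = f"{count / previous:.0%} of previous stage" if previous else "top of funnel"
    print(f"{stage:12s} {count:6d}  ({note})")
    previous = count
```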
In the early days of that stage we also focused on validating a pricing model, but that would be a topic for another post.
Once we were done with all that, had high PMF score for 1000 customers, healthy conversion rates and retention, low churn, validated pricing - that's when we decided to make Jira Product Discovery generally available.
These phases work for us - it doesn't mean they will work for you, but I believe this general approach applies to a lot of contexts. Basically: don't focus on "when is the solution ready to be shipped?" but on "who should we be working with right now, based on the state of readiness and validation of the solution?"
I don't think it's possible to ship a 100% ready product. At some point, you'll have to deal with a V2 of the product whose V1 you thought was just ideal. That's fine, it's how it goes.
Now, the product is ready to go out assuming you have validated your UVP and some features have been used by potential users during your MVP stage.
Now it's time for stability and scalability testing.
- Bug Threshold: Your product must be stable enough not to ruin the user experience. Major bugs that affect the product’s core functionality should be fixed. That said, it's okay if non-critical bugs are present.
- How to Check: Do QA and performance testing, aiming for zero critical bugs and under 10-15% minor bug occurrences.
And to be extra safe, have a plan ready for post-launch bug fixes: keep your QA team on alert and keep resolution times short.
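Here's one way those thresholds could be turned into an automated release gate. It's a sketch under assumptions: in particular, the "minor bug rate" is interpreted here as open minor bugs per executed test case, which is only one possible reading of the 10-15% figure.

```python
# Sketch of a pre-launch release gate based on the bug thresholds above.
# The minor-bug "rate" (open minor bugs per executed test case) is an assumed
# interpretation; adapt it to how your QA process actually counts occurrences.
from dataclasses import dataclass

@dataclass
class QAReport:
    critical_bugs_open: int
    minor_bugs_open: int
    test_cases_run: int

def ready_to_ship(report: QAReport, minor_rate_threshold: float = 0.10) -> bool:
    if report.critical_bugs_open > 0:  # any critical bug blocks the launch
        return False
    minor_rate = report.minor_bugs_open / max(report.test_cases_run, 1)
    return minor_rate <= minor_rate_threshold

# Example: 0 critical bugs, 8 minor bugs across 120 test cases -> ship.
print(ready_to_ship(QAReport(critical_bugs_open=0, minor_bugs_open=8, test_cases_run=120)))
```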
All of this is tied to your user journey map. You want your users to go from a > b > c as you envisioned on your map without any hiccups, downtime, operational inefficiencies, etc.
During this onboarding and product usage period, you should be able to receive feedback from those users as easily as possible, because they will give you a lot of free usability data you can use to iterate on the product (or a V2 of the MVP).
If you are dealing with consumers, then UX and messaging play a key role here because they will compare you with others for sure.
If you're dealing with enterprise, then things like data security, integrations, APIs, smooth operations, etc. are also key things to pay attention to.