Proactive Product Quality - How to Prevent Issues Before Users Even Notice
It's one of those product topics people don't write about enough - how to proactively problem-solve. Let's cover the key techniques you should be using to ensure product quality.
I'll never forget the day our team launched a major feature on Fortnite.
Everything looked perfect in testing. Then reality hit. And the influencer videos started to roll in: people were ready to quit.
Within hours, my Slack was blowing up from all directions: competitive, community, and, of course, execs.
What should have been a celebration became a crisis – one that fundamentally changed how I approach product quality.
It taught me that catching issues before users notice isn't just about good QA – it's about weaving quality into every fiber of your product development process.
Today’s post is about how.
Brought To You By LogRocket
LogRocket is the AI-first session replay and analytics platform that proactively identifies opportunities to improve your websites and mobile apps. Its Galileo AI is your 24/7 product analyst, watching every user session and developing a human-like understanding of the user experience and struggle.
LogRocket helps leading PLG companies build better products by surfacing the biggest opportunities to improve digital experience. And thanks to its Galileo AI, you don’t need to manually watch hundreds of sessions or wait for users to complain to find the areas ripe for optimization.
The Hidden Tax of Reactive Problem-Solving
Most product teams are firefighters.
They're great at responding to emergencies. But prevention often takes a backseat to shipping features.
I used to lead this way too:
Our team prided itself on quick fixes and rapid response times
We celebrated our ability to solve problems fast
But here's what we didn't see: Every emergency fix was costing us more than just time
IBM's research confirms something I learned over time – fixing an issue in production costs 100 times more than addressing it during design:
The real tragedy? The reactive approach slowly erodes user trust.
I've watched a single bad experience undo months of careful relationship building. One bug can make users question everything about your product.
Last year, I advised a fintech startup that learned this lesson the hard way.
A minor payment processing glitch affected only 0.1% of transactions. But those affected users? They became vocal critics, leading to a 15% drop in new user signups.
This experience prompted me to write today’s post.
Today’s Post
This is the guide I wish I had to proactively problem-solve as a PM or product leader:
How to Build Quality In By Stage of the Product Lifecycle
How and When to Run Pre-Mortems
Measuring Success in Prevention
1. Build Quality Into Your Product Development Lifecycle
I want you to think of proactive problem-solving at every step of your product process.
Really, when we say, “solve problems before users notice them,” we mean, “do all the fundamentals of product right with an eye towards quality.”
Here’s what I mean:
Phase 1 - Problem Space Investigation and Definition
Even before planning begins, you need to examine your day-in, day-out process – the “planning before the planning.”
This starts from how you identify problems:
Do you look at session replays of users?
Do you instrument your product features with the analytics you need?
If you have a rigorous process for surfacing quality problems all the time, through things like:
Updates from customer support and customer success
Regular chats with customers
Deep user research
Then you’re much more equipped to bring quality problems to planning.
I’ve noticed a night-and-day difference between the product cultures I’ve been part of:
→ When we were getting all that information at Google, quality was high
→ When I wasn’t getting all that information at Rap to beats, quality deteriorated
Phase 2 - Planning
When it finally comes time for planning, you have to make time for quality.
So what do you practically need to do?
INPUTS: You need to collect and estimate inputs like bug reports, customer service interactions, and potential revenue impact. This gives you the information in a form that can be prioritized.
PRIORITIZATION: You already reserve some percentage of your allocation for tech debt, right? Do the same for quality iterations and bug fixes. (I often used 20% as a rule of thumb when I was a VP.)
OUTPUTS: You need to consider having entire quarters, or portions of your roadmap, just focused on quality. (More on that in a second.)
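To make the prioritization step concrete, here's a rough sketch of what a fixed quality allocation looks like against sprint capacity. The numbers and category split are illustrative, not a prescription:

```python
# Hypothetical sprint of 40 engineer-days, reserving fixed slices for
# tech debt and for quality/bug-fix work (the ~20% rule of thumb above).
capacity = 40
allocation = {"tech_debt": 0.10, "quality_and_bugs": 0.20}

# Engineer-days reserved per category, with the remainder left for features.
reserved = {k: round(capacity * v, 1) for k, v in allocation.items()}
feature_days = capacity - sum(reserved.values())

print(reserved, feature_days)  # {'tech_debt': 4.0, 'quality_and_bugs': 8.0} 28.0
```

The point isn't the arithmetic; it's that the quality slice is decided before planning starts, so it can't be squeezed out by feature requests.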
The biggest mistake most teams make: they don’t plan for problem-solving work after launch.
They assume that everything they ship will be right the first time.
And this leads to product teams moving on to newer, bigger features while creating feature bloat.
You need to plan for a few days after launch to look at the session replays, get the feedback, and quickly iterate.
But it's not just about those process steps.
What I learned over time is that it's also about bringing the right perspectives into the room early. Involve the entire product team in early planning discussions.
And ask each role to speak up!
Engineers should talk technical debt and scaling issues
Designers should identify user experience breaks
Analysts should surface flows that aren’t performing as well as they used to
User researchers should talk about potentially overlooked user problems
When you get all those voices early in the planning process, and ask them the right questions, you prioritize the right things.
When Quality is Everything
I was just chatting with Christian Marek at Productboard (for an upcoming deep dive). What had the team just completed? A full planning process with one focus: quality.
I’ve heard many similar tales of quarters at Apple from employees there. Craig Federighi finds apps buggy, and they spend the whole quarter on quality.
Point being: the best companies do this.
You can sometimes focus a whole quarter just on quality. Whether it’s warranted depends on the inputs you’ve collected.
Phase 3 - The Design Phase
The design phase is where the majority of the heavy lifting for preventing problems happens.
There are three critical moments here:
Before finalizing requirements: have everyone review the PRD and look for unclear requirements or edge cases you need to cover
During early design reviews: even at the wireframe stage, have leaders and the broader team look for areas where users could run into issues
Before beginning full development: consider conducting a pre-mortem (more on that below)
And when features are big enough: invest in prototype testing!
Too many teams don’t do enough of this. Yes, it takes time. Yes, it might delay your launch. But catching a critical issue at the prototype stage saves weeks of emergency fixes later.
Phase 4 - Development Phase
Once you get to the development phase, the corny phrase is true: “quality isn't a phase – it's a mindset.”
At Epic Games, we transformed our quality practices through a simple change: making quality everyone's responsibility, not just QA's.
Pre-launch, you need to get in the habit of user testing.
Everyone wants to rush to production. But adding this step pays off in the long run.
At Epic Games, we became religious about getting in-person playtests of big new features. This practice revolutionized our proactive problem-solving.
Many software companies could learn from that too. Test before launching.
Phase 5 - Launch
Then, when you finally launch… The 24 hours immediately after launch are make-or-break.
This is where you can prevent minor issues from becoming major crises.
I learned this lesson the hard way at my first startup. We launched a feature on a Friday afternoon (rookie mistake) and went home. By Monday, we had a full-blown crisis.
Now, I treat the first 24 hours after launch like a space mission. This mission has three critical elements:
We create a dedicated war room
The entire product trio watches session replays in real-time
Engineers stand ready to fix issues immediately
This approach has saved us countless times.
Just last year at Apollo, we caught a subtle UI bug affecting 2% of users within the first hour of launch. Fixed it before most users even noticed.
Guillaume Moubeche told a great story on the podcast about how important this process is:
It led to a huge activation win for Lempire, and it could for you too.
Phase 6 - Post-Launch Monitoring
The work isn't over when your feature goes live and you’ve made all the quick fixes. (Boy, wouldn’t that be nice?)
Instead, you need to get your monitoring in order to find breakage.
In particular, for big features, set up analytics alerts for broken user pathways. These often break silently over time as other features change. At Affirm, I learned to monitor not just errors, but also unusual patterns in user behavior.
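A minimal sketch of what such an alert can look like, assuming you already track a daily conversion rate for the pathway. The data, threshold, and function name here are made up for illustration:

```python
from statistics import mean, stdev

# Hypothetical daily conversion rates for a core user pathway
# (e.g. completed checkouts / checkout starts), most recent day last.
daily_conversion = [0.42, 0.44, 0.41, 0.43, 0.42, 0.40, 0.27]

def pathway_alert(rates, z_threshold=3.0):
    """Flag the latest day if it drops sharply below the trailing baseline."""
    baseline, latest = rates[:-1], rates[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return latest < mu
    z = (latest - mu) / sigma
    return z < -z_threshold  # only drops are alarming for this metric

print(pathway_alert(daily_conversion))  # the 0.42 -> 0.27 drop fires the alert
```

In practice you'd wire something like this into your analytics tool's alerting rather than hand-roll it, but the idea is the same: monitor the pathway's rate, not just error counts, so silent breakage surfaces.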
And make sure you have your bug assignment and solving down pat!
Here's what worked for us at Epic:
Direct line from customer support to engineering
Daily bug triage meetings
Clear severity classification system
Response time standards based on impact
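To make the last two items concrete, here's one hypothetical way to encode a severity ladder with response-time standards. The labels, thresholds, and SLAs are illustrative, not Epic's actual system:

```python
from datetime import timedelta

# Illustrative response-time standards per severity level.
SLAS = {
    "S1": timedelta(hours=1),   # core flow broken for many users
    "S2": timedelta(hours=24),  # major feature degraded
    "S3": timedelta(days=7),    # minor issue with a workaround
}

def classify(pct_users_affected, blocks_core_flow):
    """Map a bug's impact to a severity level (thresholds are made up)."""
    if blocks_core_flow and pct_users_affected >= 1.0:
        return "S1"
    if blocks_core_flow or pct_users_affected >= 5.0:
        return "S2"
    return "S3"

sev = classify(pct_users_affected=2.0, blocks_core_flow=True)
print(sev, SLAS[sev])  # S1 1:00:00
```

The specific numbers matter less than having the mapping written down, so triage meetings argue about impact, not about what "urgent" means.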
Adjust those principles to your industry - but don’t neglect your monitoring and bug fixing.
It’s the heartbeat of proactive problem-solving.
Normally there would be a paywall here. But thanks to LogRocket, this entire deep dive is free for all subscribers.
Reach out to their team to start a free 14-day trial of LogRocket and their Galileo AI.
2. How and When to Run Pre-Mortems
Let me take you back to my days at Epic Games.
We were about to launch a massive new feature set for competitive play. Everything looked perfect on paper. The team was confident. Leadership was excited.
Then someone asked: "What could go wrong?"
That simple question sparked our first real pre-mortem session. And thank goodness it did – we uncovered three critical failure points that would have derailed the entire launch.
That day taught me the power of structured pessimism. Sometimes you need to imagine failure to prevent it.
But here's the thing: most teams either skip pre-mortems entirely or run them so poorly they might as well not bother. Let's fix that.
What Makes a Great Pre-Mortem?
A pre-mortem isn't just a brainstorming session with a fancy name. It's a strategic exercise where teams imagine a future failure and work backwards to prevent it.
The magic comes from four key principles:
You have to assume failure has occurred. This isn't about "what might go wrong" – it's about "what did go wrong."
Think beyond the obvious issues. At Google, our best pre-mortems uncovered problems nobody had considered during regular planning.
Encourage radical candor. I've watched junior engineers spot fatal flaws that senior leaders missed – but only when they felt safe speaking up.
Focus on prevention. A pre-mortem without action items is just anxiety theater.
Running Your First Pre-Mortem
Here's the exact process I've refined over dozens of launches:
Step 1: Setup (15 Minutes)
Start by gathering your cross-functional team. This isn't just for engineers – you need design, product, analytics, everyone. Then establish psychological safety. Make it clear that pessimism is not just allowed but encouraged.
Step 2: Individual Reflection (10 Minutes)
Give everyone quiet time to write down potential failure modes. The key here? Be specific and detailed. "Users won't like it" isn't helpful. "Power users will abandon the product because the new workflow adds three extra clicks" – that's what we're looking for.
Step 3: Team Share (20 Minutes)
This is where the magic happens. Use round-robin sharing, and enforce a strict "no interruptions" rule. I've seen too many pre-mortems derailed by defensive reactions.
Step 4: Analysis (30 Minutes)
Now group similar issues and prioritize by impact. Look for patterns and identify root causes.
Step 5: Prevention Planning (30 Minutes)
Finally, develop specific mitigation strategies and assign owners. Create early warning systems – the smoke detectors that tell you something's wrong before there's a fire.
Remember: The goal isn't to predict every possible problem. It's to catch the big ones before they catch you.
So next time you're getting ready for a big launch, take a step back and imagine its failure. It might just be the thing that ensures its success.
3. Measuring Success in Prevention
I'll never forget that quarterly review at Google when an exec asked: "How do you know your prevention efforts are working?"
The room went quiet. We had anecdotes. We had gut feel. But we didn't have a robust way to prove our prevention efforts were worth the investment.
That moment sparked a journey that would transform how I approach quality measurement. What I discovered might surprise you.
The Prevention Paradox
Prevention creates a fascinating measurement challenge: the better you are at it, the less there is to measure. It's like trying to quantify the value of a vaccine – you're measuring the absence of something.
At Epic Games, we solved this by focusing on what I call the "prevention ecosystem" – the whole environment that makes prevention possible. Here are the key metrics we found actually matter:
Prevention Success Metrics
Leading Indicators
Code review thoroughness scores
Pre-mortem completion rate
Design review participation
Time spent in user testing
User Impact
Issues caught in testing vs. production
User-reported issues per 1000 DAU
Feature adoption rates
Support ticket volume
Financial Impact
Engineering hours saved
Customer support cost reduction
Churn prevention value
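Two of the metrics above reduce to simple ratios. This is just a sketch with made-up numbers to show how you might track them quarter over quarter:

```python
def prevention_rate(caught_in_testing, escaped_to_production):
    """Share of known issues caught before users could see them."""
    total = caught_in_testing + escaped_to_production
    return caught_in_testing / total if total else 1.0

def issues_per_1000_dau(user_reported_issues, dau):
    """Normalize user-reported issues by audience size."""
    return 1000 * user_reported_issues / dau

# Hypothetical quarter: 45 issues caught in testing, 5 escaped to
# production; 12 user reports against 80,000 daily active users.
print(round(prevention_rate(45, 5), 2))  # 0.9
print(issues_per_1000_dau(12, 80_000))   # 0.15
```

Trend these over several quarters: a rising prevention rate alongside falling issues per 1,000 DAU is the signal that the prevention ecosystem is working.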
Beyond the Numbers
But here's what really matters: measuring prevention isn't about perfect metrics. It's about meaningful improvement in your team's ability to catch issues before they impact users.
Perhaps the best measure of prevention success isn't in any metric.
It's in the crises that never happened, and the team that got to focus on building great features instead of fighting fires. That ultimately drives retention, which is what your product needs.
Sponsored by LogRocket:
LogRocket helps companies like Appfire, Jasper, and Ramp proactively identify opportunities to improve their digital experience. Over 3,000 customers trust LogRocket to boost critical growth metrics like engagement, conversion, and adoption.
Final Words
Shifting from reactive to proactive isn't easy. I know – I've led this transformation at three different companies.
Start small:
Add pre-mortems to your next feature planning session
Watch launch-day session replays as a team
Set up basic analytics alerts for core user paths
Create a clear channel for bug reporting and triage
Reserve time in your next sprint for quality work
Remember: The best product problems are the ones your users never experience.
Email productgrowthppp at gmail dot com if you want to buy-out a newsletter.
Up Next
I hope you are enjoying the latest schedule of one job-searching piece and one career-skills piece a week! Last week, that was:
How to Write a Killer AI Product Manager Resume (With Examples)
How to Measure Onboarding: Advanced Topics in Activation Metrics
Up next, we have:
The Google PM Interview Guide
The Product Leader’s Ultimate Guide to Process Changes
Look forward to sharing with you!
Aakash