How I Reduced Production Bugs by Testing Before a Single Line of Code Was Written

March 8, 2026

For the first year of my QA career, I tested features after they were built. A developer would finish a feature, pass it to me, and I would find bugs. The developer would fix them, pass it back, and I would find more bugs. This cycle sometimes repeated three or four times before a feature was ready to ship. Then I started asking a simple question at sprint planning: "What could go wrong with this feature?"

That one question changed everything. I started catching issues in the requirements before anyone wrote a line of code. Not after. Not during code review. At the very beginning. This is what shift-left testing actually looks like in practice.

What Shift-Left Actually Means

The "shift" refers to moving testing activities earlier (to the left) in the development timeline.

Traditional flow:

Requirements ➜ Design ➜ Development ➜ QA Testing ➜ Release

QA only touches the feature at the end. If there is a requirements issue, you discover it after the code is written. Expensive to fix.

Shift-left flow:

Requirements + QA Review ➜ Design + QA Review ➜ Development + Unit Tests ➜ QA Testing ➜ Release

QA is involved at every stage. Most issues are caught before code exists.

What I Do at Sprint Planning

When a user story is presented, I ask these questions before anyone estimates the work:

Requirements completeness:

  • What happens when the user enters invalid data?
  • What is the expected behavior for edge cases (empty state, maximum values, concurrent users)?
  • Are there accessibility requirements?
  • Which browsers and devices does this need to support?

Ambiguity check:

  • "The system should respond quickly." How quickly? Under 200ms? Under 2 seconds?
  • "Users can upload files." What types? What size limit? What happens when the limit is exceeded?
  • "The list should be sortable." By which fields? In which direction? What is the default?
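Once the team answers these questions, the answers stop being prose and become constants a test can assert against. A minimal sketch, assuming hypothetical agreed values (10 MB limit, three allowed file types, not from any real project):

```python
import os

# Hypothetical answers to the ambiguity questions above, pinned down
# as constants so both the code and the tests share one source of truth.
ALLOWED_UPLOAD_TYPES = {".pdf", ".png", ".jpg"}   # "What types?"
MAX_UPLOAD_BYTES = 10 * 1024 * 1024               # "What size limit?" (10 MB)

def validate_upload(filename: str, size_bytes: int) -> list:
    """Return a list of human-readable validation errors (empty = valid)."""
    errors = []
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_UPLOAD_TYPES:
        errors.append(f"File type {ext or '(none)'} is not allowed")
    if size_bytes > MAX_UPLOAD_BYTES:
        errors.append(f"File exceeds the {MAX_UPLOAD_BYTES // (1024 * 1024)} MB limit")
    return errors
```

The point is not the particular numbers; it is that "what happens when the limit is exceeded?" now has one documented answer instead of a guess per developer.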

Integration concerns:

  • Does this feature touch the payment system? The notification system? Any third-party API?
  • What happens if that external service is down?
  • Are there data migration requirements?

I write these questions down and bring them to sprint planning. Half the time, the product manager realizes the requirements are incomplete and updates them before development starts. That is a bug prevented. No code written. No debugging session. No hotfix. Just a question asked at the right time.
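The external-service question above also has a testable answer once the team agrees on a fallback behavior. A minimal sketch, assuming a hypothetical exchange-rate API and an agreed "retry once, then serve a safe default" policy (both are illustrations, not from the original story):

```python
import time

# Agreed safe default for when the third-party API is down (assumption).
FALLBACK_RATES = {"USD": 1.0}

def get_exchange_rates(fetch_rates, retries: int = 1) -> dict:
    """Call the external service; retry briefly, then fall back to
    cached defaults instead of crashing the feature."""
    for attempt in range(retries + 1):
        try:
            return fetch_rates()
        except ConnectionError:
            if attempt < retries:
                time.sleep(0.01)  # brief back-off before the retry
    return FALLBACK_RATES
```

Because the fallback was decided at planning, "what happens if that service is down?" is a test case, not a production incident.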

Reviewing PRs as a QA Engineer

I do not just test the deployed feature. I review the pull request. I am not checking code style or architecture. I am looking for testing concerns:

What I look for in a PR

  • Are there unit tests for the new logic?
  • Do the tests cover edge cases, not just happy paths?
  • Is user input validated before processing?
  • Are error responses meaningful (not generic 500s)?
  • Is there logging for important operations?
  • Are environment-specific values in config, not hardcoded?

I leave comments like:

"This endpoint accepts a quantity parameter but there is no validation. What happens if someone sends quantity: -5 or quantity: 999999999?"

"The error response returns a generic message. Can we include the validation error so the frontend can display it to the user?"

These comments prevent bugs from reaching my testing phase. The developer fixes them during development, where the cost of change is lowest.
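The quantity comment above usually results in a small, cheap fix. A sketch of what that fix might look like, assuming a hypothetical business limit of 1000 (the limit and function name are illustrations, not from the actual PR):

```python
MAX_QUANTITY = 1000  # assumed business limit, for illustration only

def validate_quantity(raw):
    """Return (quantity, None) if valid, else (None, error_message).
    The message is specific enough for the frontend to display."""
    try:
        quantity = int(raw)
    except (TypeError, ValueError):
        return None, "quantity must be a whole number"
    if quantity < 1:
        return None, "quantity must be at least 1"
    if quantity > MAX_QUANTITY:
        return None, f"quantity cannot exceed {MAX_QUANTITY}"
    return quantity, None
```

This answers both review comments at once: `quantity: -5` and `quantity: 999999999` are rejected, and the rejection message is something a user can act on rather than a generic 500.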

Writing Test Cases From Requirements (Not From the Feature)

Most QA engineers write test cases after seeing the built feature. I write them before. When a user story is finalized, I create test cases immediately:

User Story: As a user, I want to reset my password
via email so I can regain access to my account.
 
Test Cases (written before development):
 
TC1: Valid email receives reset link within 60 seconds
TC2: Invalid email format shows validation error
TC3: Email not in system shows generic message
     (do not reveal whether email exists)
TC4: Reset link expires after 24 hours
TC5: Used reset link cannot be reused
TC6: Password must meet minimum requirements
     (8 chars, 1 uppercase, 1 number)
TC7: New password cannot be the same as current
TC8: Multiple reset requests invalidate previous links
TC9: Account is not locked during reset process
TC10: Reset works for accounts with 2FA enabled

When the developer picks up the story, they see my test cases attached. They know exactly what I will test. This changes how they write the code. They handle TC3 (not revealing email existence) because they saw it upfront, not because I found it as a bug later.
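Two of the cases above can even be sketched as automated checks before the endpoint exists. A minimal fake of the reset request, written only to show the shape of TC2 and TC3 (the endpoint, responses, and email are assumptions for illustration):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def request_password_reset(email: str) -> dict:
    """Fake reset endpoint standing in for the real one."""
    if not EMAIL_RE.match(email):
        return {"status": 400, "message": "Please enter a valid email address"}
    # TC3: identical response whether or not the email is registered,
    # so the endpoint never reveals which accounts exist.
    return {"status": 200, "message": "If that account exists, a reset link was sent"}

def test_tc2_invalid_email_format():
    assert request_password_reset("not-an-email")["status"] == 400

def test_tc3_does_not_reveal_account_existence():
    known = request_password_reset("alice@example.com")
    unknown = request_password_reset("bob@example.com")
    assert known == unknown  # identical responses, nothing leaked
```

Handing the developer executable expectations like these makes TC3 a design constraint rather than a bug report.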

Building Quality Gates in CI/CD

Shift-left is not just about human processes. It is about automated gates that prevent issues from progressing.

What I advocate for in every project:

# Simplified CI pipeline with quality gates
 
stages:
  - lint        # Code style and static analysis
  - unit-test   # Developer-written unit tests
  - build       # Application compiles
  - api-test    # Postman/Newman API tests
  - deploy-staging
  - e2e-test    # Playwright end-to-end tests
  - deploy-production

Each stage is a gate. If unit tests fail, the code does not get built. If API tests fail, it does not deploy to staging. If end-to-end tests fail, it does not reach production. By the time I do manual exploratory testing on staging, the obvious bugs are already caught by automation. I focus my manual effort on the things machines miss: user experience, business logic edge cases, and the "this feels wrong" intuition that only comes from knowing the product.
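The gate logic itself is simple: run stages in order and stop at the first failure, so later (more expensive) stages never run on a broken build. A toy sketch of that fail-fast ordering (real CI systems implement this for you; the stage callables here are simulated):

```python
def run_pipeline(stages):
    """stages: list of (name, callable returning True/False).
    Runs stages in order, stopping at the first failed gate.
    Returns the names of the stages that actually ran."""
    ran = []
    for name, check in stages:
        ran.append(name)
        if not check():
            print(f"Gate failed at: {name} - stopping pipeline")
            break
    return ran

# Simulated run: unit tests fail, so build and later stages never execute.
stages = [
    ("lint", lambda: True),
    ("unit-test", lambda: False),  # simulated failure
    ("build", lambda: True),
]
```

The ordering matters as much as the gates: cheap, fast checks go first so a lint error costs seconds, not a full staging deploy.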

Measuring the Impact

After six months of shift-left practices on one project, here is what changed:

  • Bugs found in testing dropped by roughly 40%. Not because I was testing less, but because fewer bugs made it to the testing phase.
  • Sprint velocity increased. Developers spent less time on bug fixes and rework.
  • Production incidents decreased. The bugs we caught early were often the ones that would have been the most expensive in production.
  • Team communication improved. QA and development were having conversations at sprint planning instead of arguing over bug severity at the end of the sprint.

The exact numbers vary by project and team. But the pattern is consistent: catching issues earlier is always cheaper than catching them later.

Getting Started

If you are a QA engineer who currently only tests after features are built, here is how to start shifting left:

Week 1: Attend sprint planning. Do not say anything yet. Just listen and write down questions you would ask if you could.

Week 2: Bring your questions to the next planning session. Ask them. Watch how the team responds.

Week 3: Start writing test cases from requirements before development begins. Share them with the developer picking up the story.

Week 4: Ask to be added as a reviewer on pull requests. Focus your reviews on testing concerns, not code style.

You do not need permission to do most of this. You just need to show up earlier in the process than people expect a QA engineer to show up. The value becomes obvious fast.
