

Eric DeGrass
April 3rd, 2026
Your operational resilience is only as strong as the third-party software you depend on - and that software is changing fast. As vendors adopt AI-assisted development, they ship more code, more often, with less human oversight per line. The average enterprise loses $8.7 million per major outage. When the root cause turns out to be a known vendor defect that could have been caught before deployment, that is not bad luck - it is a process failure. AI does not inherently produce worse code; it widens the range of outcomes, and the stability of what your vendors deliver now depends on whether their engineering controls have kept pace with their accelerated output. Different AI models, model versions, context window sizes, and available tooling all drastically affect the quality of the code produced. Many vendors have not refined their AI coding practices - simply standing up a model subscription and handing it to developers will not produce a successful outcome.
Most enterprises have mature processes for managing security vulnerabilities (CVEs). Non-security defects are a blind spot. These include reliability failures, correctness bugs, performance regressions, and maintainability problems - the kinds of issues that cause expensive outages but never show up in a vulnerability feed. They are the silent drivers behind most service degradations, and they are precisely the “reasonably identifiable” ICT risks that the EU’s Digital Operational Resilience Act now requires financial entities to manage.
AI-assisted development is making this blind spot worse through three concrete mechanisms:
Code volume outpaces verification. AI lets developers produce more code, faster - but testing and review discipline become the bottleneck. Not every developer treats AI as a pair programmer; many simply accept generated output with minimal review, especially under deadline pressure. When QA does not scale with output, defects ship.
Code becomes more repetitive and brittle. Large-scale repository analyses show AI-assisted workflows produce significantly more duplicated code. Duplicated code means that when a defect is found, it must be fixed in multiple places - and when fixes are applied inconsistently (as they often are), you inherit compounding risk.
Review rigor varies wildly - and trust is declining. The Stack Overflow 2025 Developer Survey found that 46% of developers actively distrust the accuracy of AI-generated code, up from 31% in 2024. The most-cited frustration? AI solutions that are “almost right, but not quite” - reported by 66% of respondents. That “almost right” code is the most dangerous kind: it passes superficial review, compiles cleanly, and fails in production under edge conditions no one tested.
The data is no longer ambiguous. Multiple independent studies now quantify the operational risk that AI-assisted development introduces. If you are making upgrade and deployment decisions without accounting for these findings, you are flying blind. Three findings demand attention:
Throughput up, stability down. Google’s 2025 State of AI-Assisted Software Development report confirms that AI adoption boosts individual productivity while simultaneously degrading delivery stability. Their central finding: AI accelerates development, but that acceleration exposes weaknesses downstream. Without robust controls, faster output means faster defects reaching your environment.
AI-generated code produces 1.7x more defects. CodeRabbit’s State of AI vs. Human Code Generation report analyzed 470 real-world pull requests and found AI-generated code introduces 1.7x more issues overall, with 1.75x more logic and correctness errors, 1.4x more critical-severity bugs, and excessive I/O operations at 8x the human rate. These are not theoretical concerns - they are the exact defect categories that cause production outages.
Maintainability drift is compounding risk silently. GitClear’s 2025 analysis found rising code duplication and copy-paste patterns in AI-assisted codebases. Duplicated code is harder to diagnose when something breaks, more expensive to maintain, and creates a compounding tax on operational resilience that grows with every release cycle in which it goes unaddressed.
You cannot dictate how your vendors build software. But you can sharpen how you evaluate, adopt, and operate what they deliver - and under regulations like the EU’s DORA, you may be obligated to do so. A pragmatic response combines vendor due diligence with internal change safety and defect risk intelligence.
Four actions that reduce near-term risk:
Interrogate vendor AI practices. Ask specifically where AI is used in their SDLC and what controls govern AI-generated changes. Look for mandatory test coverage requirements, peer review standards, and staged rollout policies - not vague assurances. Under DORA Article 28, financial entities must ensure ICT third-party agreements address risk management; AI-assisted development is now squarely part of that risk.
Treat every upgrade as an operational risk event. Even when a security patch forces your hand, separate the security imperative from the operational regression risk. A vendor shipping AI-generated code at higher velocity means each release carries wider outcome variability. Evaluate accordingly.
Strengthen rollout and rollback readiness. For high-blast-radius components, invest in staged deployments with automated canaries. When vendors ship faster - and with AI-generated code that may contain subtle logic errors passing superficial QA - your ability to detect regressions and roll back safely must keep pace.
Build version-aware inventory and defect tracking. If you cannot map what you have deployed to the known defects in those versions, every prioritization and deferral decision is guesswork. A current, accurate CMDB is the foundation - and under DORA, maintaining a register of ICT third-party arrangements is not optional.
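The staged-rollout guidance above can be sketched as a simple automated canary gate: compare the canary's error rate to the stable baseline and only promote when it holds up under real traffic. This is an illustrative sketch, not a BugZero feature - the `canary_verdict` helper, thresholds, and metrics are all hypothetical, and production systems would use richer signals (latency, saturation, business metrics).

```python
# Minimal sketch of an automated canary gate for a staged rollout.
# Thresholds and metric names are illustrative, not prescriptive.

def canary_verdict(baseline_errors: int, baseline_requests: int,
                   canary_errors: int, canary_requests: int,
                   max_relative_increase: float = 1.5,
                   min_requests: int = 1000) -> str:
    """Return 'promote', 'rollback', or 'wait' for a canary release."""
    if canary_requests < min_requests:
        return "wait"  # not enough traffic yet to judge safely
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / max(canary_requests, 1)
    # Roll back if the canary's error rate exceeds the baseline by more
    # than the allowed multiple (with a small floor so a zero-error
    # baseline does not trigger rollback on a single transient failure).
    threshold = max(baseline_rate, 0.0001) * max_relative_increase
    return "rollback" if canary_rate > threshold else "promote"

print(canary_verdict(50, 100_000, 8, 5_000))  # canary regressing -> "rollback"
print(canary_verdict(50, 100_000, 2, 5_000))  # comparable rates  -> "promote"
```

The design point matters more than the arithmetic: the gate makes rollback the automatic default when evidence is bad, which is exactly the posture needed when vendor releases arrive faster and with wider outcome variability.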
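Version-aware defect matching of the kind described above can be illustrated with a short sketch. The data shapes, field names, and `relevant_defects` helper here are hypothetical (not BugZero's actual schema); the point is that prioritization becomes mechanical once deployed versions (from a CMDB) and affected versions (from a defect feed) live in one place.

```python
# Illustrative sketch: intersect a CMDB export with a vendor defect
# feed so only defects affecting deployed versions surface for triage.
# Product names, versions, and defect IDs below are invented examples.

deployed = [
    {"ci": "core-switch-01", "product": "AcmeOS",    "version": "4.2.1"},
    {"ci": "db-cluster-02",  "product": "DataVault", "version": "9.0.3"},
]

known_defects = [
    {"id": "DEF-1001", "product": "AcmeOS", "affected": {"4.2.0", "4.2.1"},
     "severity": "high", "summary": "memory leak under sustained load"},
    {"id": "DEF-1002", "product": "DataVault", "affected": {"9.1.0"},
     "severity": "medium", "summary": "slow failover after node loss"},
]

def relevant_defects(deployed, defects):
    """Return only the defects that hit a version actually in service."""
    hits = []
    for item in deployed:
        for d in defects:
            if d["product"] == item["product"] and item["version"] in d["affected"]:
                hits.append({"ci": item["ci"], "defect": d["id"],
                             "severity": d["severity"]})
    return hits

for hit in relevant_defects(deployed, known_defects):
    print(hit)  # only DEF-1001 on core-switch-01 is relevant here
```

Without the version join, both defects would land in the triage queue; with it, DEF-1002 is filtered out because the deployed DataVault version is not affected - which is why an accurate CMDB is the foundation rather than an afterthought.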
BugZero exists to close the gap between vulnerability management and operational defect management. CVE feeds cover security. Nothing covers the non-security operational defects that cause most outages - until now. BugZero tracks these defects across dozens of enterprise vendors and makes that intelligence actionable inside the tools your teams already use.
Concretely, BugZero provides:
An Operational Defect Database (ODD) that consolidates non-security vendor defects across dozens of enterprise vendors into a single, normalized reference - so your risk discussions start with data, not anecdote.
Environment and version awareness that matches defects to what you have deployed, using your CMDB as the source of truth. Your team focuses only on defects relevant to your stack - not every bug ever announced.
Native ServiceNow integration that feeds defect intelligence directly into change planning, remediation scheduling, and audit workflows - eliminating the shadow spreadsheets and tribal knowledge that most teams rely on today. This also creates the documented audit trail that DORA and similar regulations require.
As vendors expand AI usage, the most durable resilience advantage will come from shortening the time between defect disclosure, relevance determination, and operational action. That is a workflow problem, not a research problem - and a current, accurate CMDB is what bridges the gap between knowing a defect exists and knowing whether it affects you.
This operational defect challenge now carries regulatory weight. The EU’s Digital Operational Resilience Act (DORA) - Regulation (EU) 2022/2554, enforceable since January 17, 2025 - requires financial entities and their ICT third-party service providers to manage technology risk comprehensively. That mandate explicitly covers “any reasonably identifiable” ICT risk, which includes the non-security operational defects that AI-assisted development is producing at higher volume and wider variability.
For any Fortune 500 with EU financial services exposure, DORA means that proactive identification and remediation of third-party operational defects is no longer a best practice - it is a compliance obligation, with daily fines of up to one percent of average daily worldwide turnover for up to six months. Similar frameworks are emerging globally: the UK’s FCA/PRA operational resilience requirements, Australia’s CPS 230, and Canada’s OSFI guidelines. The regulatory direction is clear - operational resilience is being held to the same standard as capital adequacy and data protection. BugZero’s structured defect intelligence and ServiceNow integration directly support the audit trail and evidence requirements these regulations demand.
Q: Does AI-assisted development automatically increase outages?
A: Not automatically - but the odds are moving in the wrong direction. AI can improve both speed and quality when best practices - training, usage guidelines, testing, review, methodical rollouts, and quality and security controls - scale with it. The risk emerges when output grows faster than verification discipline, and the data shows that is happening broadly. CodeRabbit found 1.7x more defects in AI-generated code; Google’s research found AI adoption degrades delivery stability even as productivity rises. BugZero helps teams track the resulting non-security operational defects and match them to deployed versions, so upgrade decisions are evidence-based. BugZero’s technology will become increasingly relevant as AI-generated code works its way into software everywhere.
Q: What kinds of non-security defects should we expect to see more often?
A: Logic errors, correctness bugs, performance regressions, edge-case reliability failures, and maintainability drift (especially duplicated code paths). CodeRabbit specifically quantified these: 1.75x more logic errors, 8x more excessive I/O operations, and nearly 2x more concurrency and dependency issues in AI-generated code. These are exactly the defect categories BugZero tracks - and surfaces inside your existing ServiceNow workflows rather than leaving them buried in vendor knowledge bases.
Q: How should we adjust upgrade policies such as delayed adoption or N minus 1?
A: Make your policies component-specific and defect-informed. Delayed adoption still makes sense for high-blast-radius components, but faster vendor release cadences - accelerated further by AI-assisted development - put more pressure on the decision. BugZero links known operational defects to specific product versions and feeds that intelligence into ServiceNow remediation workflows, so you make upgrade decisions on evidence rather than gut instinct.
Q: What evidence should we request from vendors about AI use and quality controls?
A: Ask for specifics: where AI is used in their SDLC, what mandatory controls govern AI-generated changes (not all models produce equal quality - context window size, model capability, and prompting discipline all matter), how they prevent regressions (canary deployments, staged rollouts), and what their stability metrics look like over time. You are not debating tool choices - you are confirming their quality assurance scales with their throughput.
Q: How does BugZero integrate with existing IT operations processes?
A: BugZero integrates natively with ServiceNow. Known defects are matched to your configuration items, routed into remediation workflows, and tracked through change management - creating a durable audit trail from defect awareness through resolution. No parallel systems, no shadow spreadsheets. For organizations subject to DORA or similar regulations, this audit trail directly supports compliance evidence requirements.

