It's Time to Fix Stuff
Fully Agentic Remediation
The most ambitious approach to security backlog reduction involves using agentic workflows to remediate low- and medium-complexity vulnerabilities with minimal human intervention. This represents a fundamental shift from AI-assisted development to AI-autonomous development, where coding agents take complete responsibility for analyzing security findings, planning fixes, implementing solutions, and even running validation checks before submitting code for human review.
As an example, you can send security findings directly to systems like GitHub Copilot Workspace or similar agentic platforms. The agent picks up the request, analyzes the vulnerability, plans an appropriate fix, implements the remediation, and can even run additional security scans to validate that the fix doesn’t introduce new vulnerabilities before automatically creating a pull request for human review.
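As a rough sketch of how that handoff might look, the snippet below files a finding as a GitHub issue that a hypothetical agent integration watches via an "agent-remediation" label. The repository name, label convention, and issue format are assumptions to adapt to whatever platform you actually use; only the GitHub issues API call itself is standard.

```python
# A minimal sketch of handing a finding to an agent queue, assuming a
# (hypothetical) setup where your agentic platform watches issues labeled
# "agent-remediation". Repo name and label are placeholders.
import os
import requests

def file_finding_for_agent(finding: dict) -> str:
    """Create a GitHub issue describing a security finding so an agent can pick it up."""
    url = "https://api.github.com/repos/example-org/example-app/issues"  # placeholder repo
    body = (
        f"**Rule:** {finding['rule_id']}\n"
        f"**Severity:** {finding['severity']}\n"
        f"**Location:** {finding['file']}:{finding['line']}\n\n"
        f"{finding['description']}\n\n"
        "Please remediate and open a pull request; CI will re-run the security scan."
    )
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"[agent] Fix {finding['rule_id']} in {finding['file']}",
            "body": body,
            "labels": ["security", "agent-remediation"],  # label the agent integration watches (assumption)
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]
```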
With this agentic approach, organizations can begin making substantial progress against security backlogs that have accumulated over months or years. Before AI agents, security teams and developers were often fighting a losing battle where new vulnerabilities were discovered faster than existing ones could be fixed.
The Autonomous Approach: Key Differences
Fully agentic remediation differs from AI-assisted development in two fundamental ways that represent both its power and its risks.
Complete Autonomy in Implementation
The remediation task is handed entirely to the coding agent, which has complete autonomy in deciding how to fix the issue. The only context provided is the security finding information—the vulnerability type, location, potential impact, and any relevant code snippets. The agent must analyze the surrounding code, understand the application architecture, determine the appropriate fix, and implement it without ongoing human guidance.
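A minimal sketch of what that finding context might look like as a data structure is shown below; the field names are illustrative rather than any standard schema.

```python
# A sketch of the finding context an agent might receive; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class SecurityFinding:
    vulnerability_type: str      # e.g. "CWE-89: SQL Injection"
    severity: str                # e.g. "medium"
    file_path: str               # location of the flagged code
    start_line: int
    end_line: int
    description: str             # scanner's explanation of the potential impact
    code_snippet: str            # the relevant lines, as reported by the scanner
    references: list[str] = field(default_factory=list)  # advisory or CWE links, if any

# Everything else -- surrounding code, architecture, the choice of fix --
# is left for the agent to work out on its own.
example = SecurityFinding(
    vulnerability_type="CWE-89: SQL Injection",
    severity="medium",
    file_path="app/orders/queries.py",
    start_line=42,
    end_line=48,
    description="User-controlled input is concatenated into a SQL statement.",
    code_snippet='cursor.execute("SELECT * FROM orders WHERE id = " + order_id)',
)
```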
Humans only get involved at the end of the process to review the pull request before the fix is merged into the codebase. This represents a significant shift in responsibility, where the agent is trusted to make architectural and implementation decisions that previously required developer expertise and judgment.
This autonomy enables much faster remediation cycles, but it also means that agents might make decisions that differ from how human developers would approach the same problems. The agent might choose different implementation patterns, use different libraries, or make architectural decisions that are technically correct but inconsistent with existing codebase patterns.
Comprehensive Test Coverage Requirements
For agentic remediation to work safely, you need robust test coverage that can validate both the security fix and the continued functionality of the application. This isn’t just basic unit testing—you need comprehensive user acceptance testing, integration testing, and automated security validation that can catch breaking changes or regressions that the agent might introduce.
Without sufficient test coverage, agents have no reliable way to determine whether their code changes maintain application functionality while fixing the security issue. This creates significant risk of introducing bugs, breaking existing features, or creating new security vulnerabilities in the process of fixing old ones.
The test coverage requirements for agentic remediation are typically much higher than what most organizations currently maintain. You need tests that cover not just happy-path scenarios but also edge cases, error conditions, and security boundary conditions that might be affected by vulnerability fixes.
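The sketch below shows one way to gate an agent's branch before a human ever reviews it, assuming the project happens to use pytest for its test suite and Semgrep for scanning; substitute whatever tools your pipeline actually runs.

```python
# A minimal validation-gate sketch, assuming pytest and Semgrep are in use;
# swap in your own test runner and scanner as needed.
import subprocess
import sys

def validate_agent_branch() -> bool:
    """Run the checks an agent's fix must pass before the PR goes to human review."""
    checks = [
        ["pytest", "--maxfail=1", "-q"],                      # catch functional regressions
        ["semgrep", "scan", "--config", "auto", "--error"],   # did the fix introduce new findings?
    ]
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Validation failed: {' '.join(cmd)}", file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if validate_agent_branch() else 1)
```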
The Promise and the Peril
Agentic remediation offers the potential to make dramatic progress against security backlogs that would otherwise take years to clear through manual remediation. The speed and consistency of AI agents can enable organizations to address entire classes of vulnerabilities systematically rather than fixing issues one by one.
However, this approach requires accepting significant changes in how software development and security remediation work. You’re essentially delegating architectural and implementation decisions to AI systems that may not fully understand your business logic, compliance requirements, or long-term technical strategy.
Organizational Readiness Challenges
Not every organization is ready to embrace fully agentic remediation, and the barriers go beyond just technical capabilities.
Balancing Control and Automation
Implementing agentic workflows requires finding the right balance between giving AI agents sufficient autonomy to be effective and maintaining appropriate human oversight and control. This balance is different for every organization and depends on factors like risk tolerance, regulatory requirements, and existing development practices.
Some organizations might be comfortable with agents handling routine dependency updates and simple code fixes, but want human oversight for changes that affect authentication, authorization, or business logic. Others might require human review for any changes to production code, regardless of complexity.
Audit Trail and Governance Challenges
Agentic remediation creates more complex audit trails because the decision-making process happens within AI systems rather than through documented human reasoning. When an agent decides to fix a SQL injection vulnerability by changing database query patterns, the audit trail needs to capture not just what was changed, but why the agent chose that particular approach over alternatives.
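One way to make that reasoning durable is to persist a structured audit record alongside the pull request. The sketch below is illustrative only, and it assumes the agent can emit a short summary of its chosen approach and the alternatives it rejected.

```python
# A sketch of an audit record for an agent-authored fix; field names are
# illustrative, and the rationale text would come from the agent's own summary.
import json
from datetime import datetime, timezone

def record_agent_change(finding_id: str, pr_url: str, files_changed: list[str],
                        approach: str, rationale: str, alternatives: list[str]) -> str:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding_id": finding_id,
        "pull_request": pr_url,
        "files_changed": files_changed,
        "approach": approach,               # what was changed
        "rationale": rationale,             # why the agent chose this approach
        "alternatives_considered": alternatives,
        "reviewed_by": None,                # filled in when a human approves the PR
    }
    return json.dumps(entry, indent=2)
```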
Many organizational processes for code review, change management, and compliance weren’t designed with AI decision-making in mind. Traditional approaches to accountability, code ownership, and technical debt management need to be adapted to work with autonomous agents that make implementation decisions.
Technology Maturity Considerations
Much of the technology enabling fully agentic remediation is still bleeding-edge, with capabilities and limitations that are evolving rapidly. Organizations need to balance the potential benefits of early adoption with the risks of depending on technologies that may change significantly or prove unreliable in production environments.
The integration between agentic coding systems, security scanning tools, and existing development workflows often requires custom development and ongoing maintenance that may not be sustainable for all organizations.
Making Agentic Remediation Work
For organizations that decide to pursue agentic remediation, success depends on careful implementation that addresses both technical and organizational challenges.
Start with Low-Risk, High-Volume Issues
Begin with vulnerability types that are well-understood, have clear remediation patterns, and occur frequently in your codebase. Dependency updates, simple input validation fixes, and configuration corrections are often good candidates for agentic remediation because they follow predictable patterns and have clear success criteria.
Avoid starting with complex architectural changes, business logic modifications, or vulnerabilities that require deep understanding of application context until you’ve built confidence in your agentic workflows and test coverage.
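A simple eligibility filter along these lines might look like the sketch below, assuming your findings carry a category and severity; the allow-list is an example policy, not a recommendation.

```python
# A sketch of an eligibility filter for agentic remediation; the category
# names and thresholds are example policy choices, not standards.
AGENT_ELIGIBLE_CATEGORIES = {
    "vulnerable-dependency",      # routine version bumps
    "missing-input-validation",
    "insecure-configuration",
}
AGENT_MAX_SEVERITIES = {"low", "medium"}

def eligible_for_agent(finding: dict) -> bool:
    """Return True if a finding fits the low-risk, high-volume profile."""
    return (
        finding["category"] in AGENT_ELIGIBLE_CATEGORIES
        and finding["severity"] in AGENT_MAX_SEVERITIES
        and not finding.get("touches_auth", False)   # keep auth-related changes with humans
    )
```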
Invest in Comprehensive Testing Infrastructure
Before implementing agentic remediation, ensure that your testing infrastructure can reliably catch regressions and validate both security fixes and functional correctness. This investment in testing pays dividends beyond just enabling agentic workflows—it improves the overall reliability and maintainability of your applications.
Establish Clear Boundaries and Escalation Paths
Define clear criteria for which types of vulnerabilities and code changes are appropriate for agentic remediation versus human implementation. Create escalation paths for cases where agents can’t successfully remediate issues or where their proposed fixes don’t pass automated validation.
These boundaries should be based on both technical complexity and business risk, ensuring that the most critical or complex changes still receive appropriate human oversight.
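The sketch below outlines one possible escalation flow; run_agent_remediation(), validate_agent_branch(), and assign_to_human_queue() are hypothetical stand-ins for your agent platform, the validation gate shown earlier, and your ticketing system, and the retry limit is an example policy.

```python
# A sketch of an escalation path. The three helpers are placeholders for
# systems you would already have: the agent platform, the CI validation gate,
# and whatever queue routes work to developers.

def run_agent_remediation(finding: dict) -> str | None:
    """Placeholder: kick off the agent and return its PR URL, or None on failure."""
    ...

def validate_agent_branch() -> bool:
    """Placeholder: the tests-plus-rescan gate from the earlier sketch."""
    ...

def assign_to_human_queue(finding: dict, reason: str) -> None:
    """Placeholder: open a ticket for a developer with the full finding context."""
    ...

MAX_ATTEMPTS = 2  # how many agent tries before a human takes over (example policy)

def remediate_or_escalate(finding: dict) -> str:
    for _ in range(MAX_ATTEMPTS):
        pr_url = run_agent_remediation(finding)
        if pr_url and validate_agent_branch():
            return f"agent PR ready for human review: {pr_url}"
    assign_to_human_queue(finding, reason="agent could not produce a passing fix")
    return "escalated to human remediation"
```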
The Future of Security Remediation
Fully agentic remediation represents a significant evolution in how organizations can approach security backlog management. While not every organization is ready for this level of automation, the capabilities are advancing rapidly and the potential benefits are substantial for organizations that can implement them safely.
The key is approaching agentic remediation as a complement to, rather than a replacement for, human expertise in security and software development. The most effective implementations leverage AI agents for speed and consistency while maintaining human oversight for complex decisions and strategic direction.
As this technology matures and organizations develop better practices for AI-assisted development, agentic remediation may become a standard capability that enables security teams to keep pace with the evolving threat landscape while maintaining the development velocity that modern businesses require.