Sorting Through the Backlog
Handling Issues Effectively at Scale
Early Release: The following content is an early preview and should be considered a work in progress. Information may be missing or incomplete until the final release.
If you’ve reached this point, you should have a solid understanding of how your application is built (code and build assets), functional security scans, and a prioritized list of security findings. Not bad. However, you might mistakenly believe that the hard work is done. Quite the contrary: we are just getting started. Here’s where many security programs hit a wall that’s both predictable and avoidable.
Before you start handing out security issues to engineering teams for fixes, it’s important to review them for accuracy and actionability. Just because a security tool found an issue, and you prioritized it based on business context, doesn’t automatically mean it represents a real problem that needs immediate remediation.
This is where theory truly meets practice in application security. We shift from merely identifying issues to systematically resolving real problems with the agility and speed our organization demands.
The Reality Check of Scan Results
We need to talk about an uncomfortable truth for a moment: prioritization and accuracy are two different things. When we prioritized our security issues, the focus was on surfacing findings based on the application scope and how the business views risk. Prioritization also helps with deduplication and consolidation of the findings data itself. This, however, is not the same as reviewing a given finding for technical accuracy and applicability.
Imagine the following scenario: during a routine scan of one of your most critical applications, a high-severity finding relating to cross-site scripting (XSS) is surfaced. Prioritization pushes this finding to the top of the list due to its high severity and the fact that this is one of your most critical revenue-generating applications. Plus, the application contains customer data. You immediately raise the finding with the development team for remediation.
But when the development team investigates, they discover the application is using a custom sanitizer that properly handles input and output sanitization, making the finding a false positive. The prioritization system did its job correctly by identifying a potentially high-impact issue. But without accuracy validation, you’re asking development teams to spend time investigating and “fixing” problems that may not actually exist. This destroys trust, wastes resources, and ultimately makes your security program less effective.
Breaking Down Issue Management
To handle security issues effectively at scale, we need to group them into three distinct categories: Accepted Issues, Deferred Issues, and Ignored Issues. This structured approach allows for more efficient routing of issues, better allocation of resources, and clearer communication among teams about what actions are being taken and why.
This categorization system moves beyond simple binary decisions of “fix it” or “don’t fix it” to create nuanced workflows that reflect the reality of how security issues actually need to be handled in enterprise environments.
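To make these categories concrete, the following minimal Python sketch shows one way they might be modeled. Every name here (Category, TriagedFinding, and the individual fields) is illustrative rather than a prescribed schema; the point is that each finding carries its category, the rationale behind the decision, an owner, and, for deferrals, a review date.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Category(Enum):
    ACCEPTED = "accepted"    # genuine issue, actively remediated
    DEFERRED = "deferred"    # valid issue, suppressed until a review date
    IGNORED = "ignored"      # false positive, accepted risk, or infeasible fix

@dataclass
class TriagedFinding:
    finding_id: str
    title: str
    severity: str
    category: Category
    rationale: str                   # why this category was chosen
    decided_by: str                  # who owns the decision
    review_date: date | None = None  # required for deferred issues
    tags: list[str] = field(default_factory=list)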
Category 1: Accepted Issues
Accepted issues represent genuine security vulnerabilities that require active remediation efforts. This category includes both straightforward true positives and findings that have been reclassified but remain valid security concerns.
True positives are security findings that represent real vulnerabilities in your environment. These are issues where your scanning tools correctly identified security issues that could potentially be used to compromise your applications or data.
Reclassified findings occur when the initial severity or risk assessment of a vulnerability changes based on additional analysis, but the underlying security issue remains valid. This reclassification typically happens when a finding is moved from one severity level to another based on a better understanding of the actual business impact, exploitability in your specific environment, or additional mitigations already in place.
The key characteristic of accepted issues is that they represent real security issues that your organization has committed to addressing through code changes (human or AI), configuration updates, or other technical remediation efforts.
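A reclassification can be captured in the same illustrative model: the severity changes, the rationale is recorded, and the issue stays in the accepted queue. The reclassify helper below is an assumption for the sake of the sketch, not a standard API.

def reclassify(finding: TriagedFinding, new_severity: str, rationale: str) -> TriagedFinding:
    """Adjust severity based on better context; the issue remains valid and accepted."""
    finding.rationale = f"reclassified {finding.severity} -> {new_severity}: {rationale}"
    finding.severity = new_severity
    finding.category = Category.ACCEPTED
    return finding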
Category 2: Deferred Issues
Deferred issues are valid security concerns that need to be addressed, but not immediately. These findings are suppressed for a defined period of time based on business priorities, resource constraints, or technical dependencies that prevent immediate remediation.
This category enables strategic management of security backlogs by acknowledging that not every valid security issue can or should be addressed immediately, while ensuring that deferred issues don’t get lost or forgotten entirely.
Sprint-level deferrals might include medium-severity vulnerabilities (as an example) that the development team acknowledges as valid but chooses to address in the next sprint cycle rather than disrupting current development priorities. These issues typically have clear remediation paths but are being delayed for workflow management reasons.
Quarter-level deferrals often involve more complex vulnerabilities that require architectural changes, significant testing efforts, or coordination across multiple teams. These might include security issues that require major dependency upgrades, framework migrations, or changes to core application architecture.
Dependency-driven deferrals occur when fixing a security issue depends on other work being completed first. For example, a vulnerability in legacy code might be deferred until a planned refactoring project provides an opportunity to address the underlying architectural problems that created the security issue.
The critical aspect of deferred issues is that they include explicit timelines for revisiting the decision and clear criteria for when they should be promoted back to active remediation status. Deferred issues are also quickly becoming a target of AI software engineering efforts. Because AI agents can now handle low-to-medium-complexity tasks, deferred issues are prime candidates for fully agentic workflows that offload these often less pressing fixes directly to AI (where possible).
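Continuing the hypothetical TriagedFinding sketch from earlier, the “don’t lose it” machinery for a deferred queue can be as simple as comparing the recorded review date against the calendar. The function name and fields are assumptions; the pattern, an explicit date plus a promotion rule, is what matters.

from datetime import date

def due_for_review(finding: TriagedFinding, today: date | None = None) -> bool:
    """True when a deferred issue should be promoted back to active triage."""
    today = today or date.today()
    return (
        finding.category is Category.DEFERRED
        and finding.review_date is not None
        and finding.review_date <= today
    )

A scheduled job that runs this check across the backlog is often all it takes to keep deferrals from quietly becoming forgotten technical debt.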
Category 3: Ignored Issues
Ignored issues represent the most complex category because they encompass several different types of findings that don’t require active remediation, each for different reasons and with different implications for your security program.
True false positives are findings where a security scan has flagged something as a vulnerability, but analysis reveals it’s not actually a valid finding. This might be due to framework-specific protections that the scanner doesn’t understand, compensating controls that mitigate the apparent vulnerability, or simply incorrect pattern matching by the scanning tool.
False positives in the ignored category should include clear documentation about why they’re not genuine security issues, both to prevent repeated manual analysis and to help improve scanning tool accuracy over time.
Acceptable risks are genuine vulnerabilities that your organization has explicitly decided to accept rather than remediate. These decisions are typically based on business context, cost-benefit analysis, or the presence of compensating controls that make the risk manageable.
For example, a low severity finding in an internal administrative interface might be classified as acceptable risk if access is restricted to trusted administrators on isolated networks, and the cost of fixing the vulnerability outweighs the residual risk given existing controls.
Technically infeasible fixes represent genuine vulnerabilities that can’t be practically remediated due to technical constraints, dependency limitations, or architectural factors. These might include security issues in third-party libraries where patches aren’t available, or vulnerabilities in legacy systems where changes would introduce unacceptable stability risks.
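Because each of these three reasons carries different implications, it helps to record which one applied and to reject suppressions that arrive without a written justification. The sketch below extends the earlier illustrative model; IgnoreReason and ignore_finding are assumed names, not a standard API.

from enum import Enum

class IgnoreReason(Enum):
    FALSE_POSITIVE = "false_positive"                  # the scanner was simply wrong
    ACCEPTABLE_RISK = "acceptable_risk"                # explicit business decision
    TECHNICALLY_INFEASIBLE = "technically_infeasible"  # no practical fix exists

def ignore_finding(finding: TriagedFinding, reason: IgnoreReason, justification: str) -> TriagedFinding:
    """Mark a finding as ignored, rejecting undocumented suppressions."""
    if not justification.strip():
        raise ValueError("Ignored issues require a written justification.")
    finding.category = Category.IGNORED
    finding.rationale = f"{reason.value}: {justification}"
    return finding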
Workflow & Issue Alignment
Each category triggers different workflows and requires different types of documentation and follow-up actions. Understanding these distinctions is crucial for building automation that knows when to route issues to humans versus handling them programmatically.
Accepted issues flow into active development workflows, get assigned to appropriate teams, and are tracked through to resolution. They require regular status updates, progress tracking, and validation that fixes actually resolve the underlying security problems. These represent your primary remediation queue where issues move from discovery through fix implementation to verification.
Deferred issues need scheduled review processes to ensure they’re revisited at appropriate times and don’t become forgotten technical debt. They require clear documentation of why they’re deferred, when they should be reconsidered, and what conditions might trigger earlier remediation. Think of these as your strategic backlog that gets periodic reassessment based on changing priorities or circumstances.
Ignored issues require the most comprehensive documentation because the reasoning behind ignoring them needs to be preserved for future reference, compliance purposes, and continuous improvement of your security program. They also need periodic review to ensure that the reasons for ignoring them remain valid as your environment and threat landscape evolve. These suppressions and exceptions become institutional knowledge that informs both current operations and future tool tuning.
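In code, this alignment reduces to a small routing function: each category maps to the workflow that owns its next step. The queue names below are placeholders for whatever your ticketing or orchestration system actually exposes.

def route(finding: TriagedFinding) -> str:
    """Map a triaged finding onto the workflow that owns its next step."""
    if finding.category is Category.ACCEPTED:
        return "remediation-queue"      # assign to a team, track through to verification
    if finding.category is Category.DEFERRED:
        return "scheduled-review"       # revisit on the recorded review date
    return "documented-exceptions"      # periodically re-validate the rationale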
Maintaining Accountability Through Automation
Each categorization decision needs clear ownership, documentation, and audit trails that explain not just what decision was made, but who made it and why. This accountability is crucial for both operational effectiveness and compliance requirements.
When designing these automated workflows, it is important to capture decision points as programmatic rules while preserving human oversight for genuinely complex cases. Your automation should document the same information a human reviewer would: the finding details, classification rationale, responsible parties, and approval chains where required. This creates audit trails that support both compliance needs and continuous improvement. Additionally, it creates a distinct dataset that preserves the historical knowledge of why certain issues were categorized in a particular way.
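An append-only log is one straightforward way to capture those decisions. The sketch below writes one JSON line per decision to a local file; in practice this would more likely land in a ticketing system or data warehouse, and record_decision and its fields are illustrative.

import json
from datetime import datetime, timezone

def record_decision(finding: TriagedFinding, actor: str, rationale: str,
                    approvers: list[str] | None = None,
                    log_path: str = "triage_audit.jsonl") -> None:
    """Append an audit entry: what was decided, by whom, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding_id": finding.finding_id,
        "category": finding.category.value,
        "actor": actor,
        "rationale": rationale,
        "approvers": approvers or [],
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")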
The goal isn’t to eliminate human involvement entirely. In fact, we need humans involved so we can interpret the data for audit and compliance reporting. However, when mapped out correctly, the time commitment required of individuals can be significantly reduced. Additionally, cases will arise that demand human involvement: exceptions that span multiple applications, vulnerabilities with unusual characteristics that don’t match standard patterns, or issues that require architectural changes rather than simple code fixes.
Remember, automation is the end goal because it drives efficiency. At the beginning, however, you’ll need to map out the workflows and gates by hand to determine whether the flows actually work as intended. Once you’ve seen the results of these flows, you can begin to build in automation (and ultimately pattern detection).
Building Intelligence Through Patterns
Regular review processes should examine whether issues in each category are being handled appropriately. Are deferred issues actually being addressed according to their planned timelines? Are ignored issues still appropriately classified given changes in your environment or threat landscape? Are accepted issues making progress toward resolution?
The patterns in how issues get categorized provide valuable intelligence about your security program’s effectiveness. High rates of false positives might indicate opportunities for better tool tuning or consideration of new tooling altogether. Large numbers of deferred issues might suggest resource allocation problems or unrealistic remediation timelines. Frequent reclassifications might indicate problems with initial risk assessment processes.
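A first pass at this kind of intelligence can be as simple as measuring what share of findings lands in each category over time. The sketch below reuses the earlier hypothetical model; the thresholds you would alert on are your own.

from collections import Counter

def categorization_stats(findings: list[TriagedFinding]) -> dict[str, float]:
    """Share of findings per category; a rising 'ignored' share may signal tool tuning needs."""
    counts = Counter(f.category for f in findings)
    total = sum(counts.values()) or 1  # avoid division by zero on an empty backlog
    return {cat.value: counts[cat] / total for cat in Category}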
By treating issue categorization as a source of program intelligence rather than just an operational necessity, you can use this data to continuously improve both your security detection capabilities and your remediation processes. Each decision point in your workflow becomes an opportunity to learn and refine, whether through automated rule improvements or process adjustments. AI certainly has its place in the process itself, but it is also very good at helping analyze the patterns that stem from categorization.
From Categorization to Action
The ultimate goal of effective issue categorization isn’t just processing more findings; it’s ensuring that the right security issues reach the right people at the right time with the right context. When automation amplifies human expertise rather than replacing it, your most experienced security professionals can focus on problems that genuinely require their skills while routine decisions happen automatically.
This transformation is almost certainly a requirement as organizations embed AI-driven development into almost every facet of the developer experience. With security professionals vastly outnumbered by developers, this level of automation is the only way to prevent burnout and keep pace with an ever-accelerating threat landscape.
As categorization becomes more automated, development teams get actionable feedback on genuine security issues without being overwhelmed by noise, and security teams can focus on strategic improvements and complex analysis rather than endless triage. Most importantly, your organization becomes more secure because real problems get addressed quickly while false positives and acceptable risks are handled automatically.