Security · DevSecOps · CI/CD · DevOps

Shift Smart, Not Just Shift Left: Rethinking Security Tooling in 2026

The Datadog DevSecOps 2026 Report shows 87% of organisations have exploitable vulnerabilities and dependency lag is getting worse. Shifting left alone is not working. Here is what shift smart looks like in practice.


The Datadog DevSecOps 2026 Report contains a finding that should give pause to everyone who has invested heavily in shift-left security tooling: 87% of organisations have at least one exploitable vulnerability in production, and the average dependency lag behind the latest major version is 278 days - up from 215 days the previous year. Java dependencies average 492 days behind.

These numbers are getting worse despite increased SAST adoption, more Dependabot alerts, and wider deployment of security scanning in CI pipelines. The problem is not a shortage of security tooling. The problem is that the tooling is generating more signal than engineering teams can act on.

The Alert Fatigue Data

The Datadog report’s most actionable finding: 80% of “critical” dependency vulnerability alerts are downgraded after context adjustment. Only 18% of flags that arrive labelled critical are actually critical once the analysis accounts for whether the vulnerable package is reachable from an internet-exposed code path, whether the vulnerable function is actually called, and whether the deployment environment provides compensating controls.

.NET is the most extreme case: 98% of critical findings are downgraded when runtime context is applied.

This has a direct consequence: engineering teams that receive a hundred “critical” alerts and find that 80+ of them are not actually critical learn to discount the critical label. When an alert that is genuinely critical arrives, it lands in the same queue as the 80 that were not. Shift-left tooling without context filtering is training developers to ignore security alerts.

What Context Actually Means

The shift-left framing puts security scanning as early in the pipeline as possible. That is correct. The missing piece is filtering findings by whether they matter in the deployment context.

Reachability. A vulnerable function in a library is only exploitable if there is a code path from an attacker-controlled input to that function. Static analysis tools that report vulnerability presence without reachability analysis produce high false-positive rates. CodeQL’s taint analysis and Semgrep’s taint mode perform reachability analysis; standard dependency scanners do not.
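To make reachability concrete, here is a toy sketch of what a reachability check computes. A real tool derives the call graph from the AST and dataflow analysis rather than a hand-written dict; the function names are hypothetical:

```python
from collections import deque

# Toy call graph: which functions call which. A real tool (CodeQL,
# Semgrep taint mode) derives this from the code itself.
CALL_GRAPH = {
    "handle_request": ["parse_input", "render"],   # internet-facing entry point
    "parse_input": ["sanitize"],
    "render": [],
    "sanitize": [],
    "admin_cli": ["vulnerable_deserialize"],       # vulnerable fn only on a CLI path
    "vulnerable_deserialize": [],
}

def reachable(entry: str, target: str, graph: dict) -> bool:
    """BFS from an entry point; True if target is on some call path."""
    seen, queue = set(), deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(graph.get(fn, []))
    return False

# The vulnerable function exists in the codebase but is not reachable
# from the internet-facing entry point:
print(reachable("handle_request", "vulnerable_deserialize", CALL_GRAPH))  # False
print(reachable("admin_cli", "vulnerable_deserialize", CALL_GRAPH))       # True
```

A dependency scanner that only checks package presence would flag both cases identically; the call-graph question is what separates them.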

Deployment context. A vulnerability in a package that is used only in a developer tool that runs locally is different from the same vulnerability in a package deployed to an internet-facing service. GitHub’s production-context vulnerability filters (has:deployment, artifact-registry, runtime-risk) allow filtering alerts to findings where the vulnerable artifact is actually deployed to production and exposed to the internet.

```
# GitHub's production-context filter query
is:open severity:critical has:deployment is:internet-facing
```

Compensating controls. A SQL injection vulnerability in an application that only accepts connections from trusted internal IPs is different from the same vulnerability in a public API. Security tools that operate at the code level cannot see network-level controls, so a human risk-assessment step for high-severity findings is what supplies this context.
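Taken together, these three context dimensions form a triage filter. A minimal sketch in Python - the field names and downgrade rules are illustrative, not any vendor's schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    scanner_severity: str       # severity as reported by the scanner
    reachable: bool             # vulnerable function on an attacker-reachable path?
    internet_facing: bool       # deployed artifact exposed to the internet?
    compensating_control: bool  # e.g. network ACL limits callers to trusted IPs

def effective_severity(f: Finding) -> str:
    """Downgrade scanner-reported severity using deployment context."""
    if f.scanner_severity != "critical":
        return f.scanner_severity
    if not f.reachable:
        return "low"        # present in the dependency tree but never called
    if not f.internet_facing:
        return "medium"     # exploitable only from inside the network
    if f.compensating_control:
        return "high"       # still serious, but route to human risk review
    return "critical"

findings = [
    Finding("libfoo", "critical", reachable=False, internet_facing=True, compensating_control=False),
    Finding("libbar", "critical", reachable=True, internet_facing=True, compensating_control=False),
]
print([effective_severity(f) for f in findings])  # ['low', 'critical']
```

The point is not the specific thresholds but that the downgrade logic is explicit and reviewable, rather than every "critical" label landing in the same queue.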

Shift Smart: The Practical Framework

Shift smart means applying security intelligence at the stage where the signal-to-noise ratio is highest, not applying every tool at every stage.

Pre-commit (developer workstation): Fast, low-false-positive checks only. git-secrets for credential patterns. semgrep --config=auto with a short timeout. Anything that takes more than 10 seconds or has a high false-positive rate does not belong here - it trains developers to bypass pre-commit hooks.
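As an illustration of the kind of check that does belong pre-commit - fast, local, low false-positive - here is a minimal staged-diff secret scan. The patterns are illustrative; git-secrets ships a maintained set:

```python
import re

# Illustrative credential patterns; git-secrets maintains a proper list.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
]

def find_secrets(diff_text: str) -> list:
    """Return added lines of a staged diff that match a credential pattern."""
    return [
        line for line in diff_text.splitlines()
        if line.startswith("+") and any(p.search(line) for p in PATTERNS)
    ]

# In a real hook the input would come from `git diff --cached`,
# and a non-empty result would exit 1 to block the commit.
sample = "+aws_key = 'AKIAIOSFODNN7EXAMPLE'\n+retries = 3"
print(find_secrets(sample))
```

A regex pass over the staged diff runs in milliseconds, which is exactly the budget this stage allows.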

CI gate (pull request): Full SAST with dataflow analysis (CodeQL, Semgrep with custom rules), dependency scanning with CVSS filtering (flag HIGH and CRITICAL, skip informational), secret scanning with push protection. These tools run asynchronously in CI, so longer runtimes are acceptable, and they can tolerate higher false-positive rates because findings land in a review queue rather than blocking the developer's immediate workflow.

Merge gate (pre-production): Production-context filtering. Filter open vulnerability alerts by has:deployment to identify findings that are actually present in the deployment artifact going to production. This is the stage where the “is this finding in code that actually runs?” question can be answered.
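If alerts and deployment inventory live in different systems, the same filter can be approximated with a join. A sketch, assuming you can export open alerts and the set of artifacts your CD system actually ships to production - the field names are hypothetical:

```python
# Hypothetical exports: open scanner alerts and a production artifact inventory.
alerts = [
    {"id": 1, "artifact": "api-server:2.3.1", "severity": "critical"},
    {"id": 2, "artifact": "dev-tool:0.9.0", "severity": "critical"},
    {"id": 3, "artifact": "api-server:2.3.1", "severity": "high"},
]
deployed = {"api-server:2.3.1", "worker:1.1.0"}  # from your CD system

# Merge-gate filter: only critical alerts whose artifact ships to production.
gate_findings = [
    a for a in alerts
    if a["artifact"] in deployed and a["severity"] == "critical"
]
print([a["id"] for a in gate_findings])  # [1]
```

The critical alert against the local-only dev tool drops out; the one in the deployed artifact is what the merge gate blocks on.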

Continuous (production): Runtime vulnerability detection via AWS Inspector or equivalent for container workloads. GuardDuty Extended Threat Detection for multi-stage attack patterns. AWS Config rules for infrastructure misconfigurations. These catch what pre-deployment scanning misses: drift, runtime behaviour, and exploitation attempts against known vulnerabilities.
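As a sketch of what the continuous stage queries, the filter below narrows findings to active critical vulnerabilities on container images. The key names follow the shape boto3's inspector2 list_findings expects, but treat them as an assumption to verify against your SDK version:

```python
# Sketch of Inspector filter criteria for the continuous stage.
# Key names follow boto3's inspector2 list_findings FilterCriteria
# shape as an assumption - verify against your SDK version before use.
filter_criteria = {
    "findingStatus": [{"comparison": "EQUALS", "value": "ACTIVE"}],
    "severity": [{"comparison": "EQUALS", "value": "CRITICAL"}],
    "resourceType": [{"comparison": "EQUALS", "value": "AWS_ECR_CONTAINER_IMAGE"}],
}

# With AWS credentials configured, this would be passed as:
#   boto3.client("inspector2").list_findings(filterCriteria=filter_criteria)
print(sorted(filter_criteria))
```

The same narrowing logic as the earlier stages applies here: query for what is active, critical, and running, not for everything the scanner knows about.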

Policy-as-Code at the Right Gate

OPA/Rego admission control in Kubernetes is the pattern that places security policy at the highest-value enforcement point. Not pre-commit (too early - the developer is still exploring), not post-deploy (too late), but at the point of container admission to the cluster.

```rego
# OPA policy: reject containers that run as root
package kubernetes.admission

deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  not container.securityContext.runAsNonRoot
  msg := sprintf("Container '%v' must set runAsNonRoot: true", [container.name])
}
```

This policy runs against every pod admission, regardless of how the container was built or which pipeline deployed it. It catches misconfigurations introduced by any path - manual kubectl apply, Helm chart updates, Argo CD sync. It is the right level of enforcement for a security property that matters at runtime.
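A policy at this gate deserves its own tests before it can reject workloads. OPA's built-in runner (opa test) executes rules prefixed with test_; a sketch, assuming the test file sits alongside the policy in the same kubernetes.admission package (pre-1.0 Rego syntax to match the policy above; OPA 1.0 requires the if and contains keywords):

```rego
package kubernetes.admission

# A pod whose container omits runAsNonRoot must be denied.
test_missing_runasnonroot_denied {
  deny[_] with input as {
    "request": {
      "kind": {"kind": "Pod"},
      "object": {"spec": {"containers": [{"name": "app"}]}}
    }
  }
}

# A compliant pod produces no deny messages.
test_runasnonroot_allowed {
  count(deny) == 0 with input as {
    "request": {
      "kind": {"kind": "Pod"},
      "object": {"spec": {"containers": [{
        "name": "app",
        "securityContext": {"runAsNonRoot": true}
      }]}}
    }
  }
}
```

Run with opa test . in the policy directory; this catches regressions in the admission rule the same way unit tests catch them in application code.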

The Budget Implication

The DevSecOps market is projected at $37B by 2035, growing at 14% CAGR from $10B in 2025. The investment narrative is shifting from “we need more scanners” to “we need fewer, better-contextualized alerts and faster developer feedback loops.”

That shift has a practical implication for tooling evaluations: the question to ask of any new security tool is not “what does it scan?” but “what is its false-positive rate at our codebase size and deployment context, and how does it integrate with the triage workflow we already have?”

A tool that adds 50 alerts per week at a 20% true-positive rate, at roughly an hour to dismiss each false positive, requires 40 hours of triage to find 10 actionable findings. A tool that adds 10 alerts per week at an 80% true-positive rate requires 2 hours of triage to find 8 actionable findings. The second tool surfaces almost as many real problems at one-twentieth the triage burden.
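The arithmetic assumes, as the figures above imply, that triage cost is dominated by dismissing false positives at roughly an hour each. A sketch that makes the model explicit so you can plug in your own measured rates:

```python
def triage_burden(alerts_per_week: int, true_positive_rate: float,
                  hours_per_false_positive: float = 1.0):
    """Weekly hours spent dismissing false positives, and real findings surfaced.

    Assumes triage cost is dominated by false positives - true positives
    get acted on rather than dismissed. Adjust the rate to your team's data.
    """
    false_positives = alerts_per_week * (1 - true_positive_rate)
    actionable = alerts_per_week * true_positive_rate
    return false_positives * hours_per_false_positive, actionable

# Noisy tool: many alerts, mostly false positives.
print(triage_burden(50, 0.20))  # (40.0, 10.0)
# Quieter tool: fewer alerts, mostly real.
print(triage_burden(10, 0.80))  # (2.0, 8.0)
```

Running a candidate tool's pilot numbers through a model like this, before purchase, is the concrete form of the evaluation question above.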

The 278-day average dependency lag is not caused by developers who do not care about security. It is caused by organisations where the security signal is too noisy to act on efficiently. Shift smart addresses the noise. Shift left addresses the timing. Both are required.
