

Feb 26, 2026

Claude Code Security + Cogent

Why “cutting the PR” is just the beginning of autonomous security

Geng Sng, CTO

Anthropic’s Claude Code Security is a strong signal: frontier AI labs are now shipping end-to-end security workflows where models don’t just flag issues — they reason about code and propose patches inside the developer experience.

At Cogent, we see this as a natural complement to our mission, not a competitive threat.

Here’s why we’re so excited about this release:

AI labs entering AppSec always made sense

AppSec has always lived where developers live: repos, IDEs, CI, PRs. So it’s natural that the organizations that own the dev toolchain (or the models embedded into it) become key players. When the “place where work gets done” gains a security brain, security shifts from a ticket queue to a workflow.

Claude Code Security is explicitly designed this way: it scans codebases, validates findings to reduce false positives, and suggests targeted patches for human review—in a limited research preview within Claude Code.

This is a big deal. But Application Security (finding and fixing bugs in source code) is just one slice of vulnerability management.

In practice, security teams are also responsible for:

  • Infrastructure CVEs: unpatched operating systems, databases, and network devices across thousands of hosts

  • Cloud misconfigurations: overly permissive IAM roles, public storage buckets, insecure network policies

  • Endpoint vulnerabilities: outdated software on workstations, missing patches across a fleet of laptops

  • Identity exposures: stale service accounts, excessive privileges, dormant admin credentials
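One way to see why these categories belong in a single program is to normalize them into one finding shape. Here is a minimal sketch; every field name and value below is illustrative, not a real Cogent or scanner schema:

```python
from dataclasses import dataclass

# Hypothetical normalized record. All field names are illustrative
# assumptions, not an actual product or scanner schema.
@dataclass
class Finding:
    source: str            # e.g. "sast", "cspm", "endpoint", "identity"
    category: str          # "code" | "infra-cve" | "cloud-misconfig" | "endpoint" | "identity"
    asset: str             # repo, host, bucket, or account the finding applies to
    title: str
    severity: float        # scanner-reported base score, e.g. CVSS
    internet_facing: bool = False
    owner: str = ""        # owning team, when known

# Findings from very different scanners land in one unified queue:
queue = [
    Finding("sast", "code", "repo:payments", "SQL injection in checkout", 9.1, True, "team-payments"),
    Finding("cspm", "cloud-misconfig", "s3://public-logs", "Bucket world-readable", 7.5, True),
    Finding("endpoint", "endpoint", "laptop-4421", "Outdated browser version", 6.5),
]
print(len(queue))
```

The point of the shape is that code findings are one `category` among several, which is exactly why a code-only tool covers only a slice of the queue.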

Application Security is roughly a $20B market — meaningful, but still only a small fraction (~5%) of total global cybersecurity spend. Even if frontier AI dramatically compresses static code analysis, it doesn’t “replace cybersecurity.” It compresses one category within it.

What AI models are uniquely good at is reading code, configs, and structured definitions and reasoning about what's wrong. That makes today's code-scanning tools — which rely on rigid, rule-based pattern matching — especially susceptible to AI-driven cost compression.

But detection cost compression is not the same as remediation cost compression.

Finding a vulnerability faster (or even generating a patch) is not the same as actually fixing it. That patch still needs to survive code review, deploy across every affected environment, and be verified to have actually reduced exposure — a process that spans different teams, different systems, and different timelines. That gap between a proposed fix and confirmed closure is where most security programs stall.

The part that doesn’t end at the Pull Request: the execution gap

Once code leaves the PR and enters deployment, the problem only gets harder.

Most security programs don’t fail because they can’t find issues. They fail because they can’t close the loop across:

  • investigation (is this real? exploitable? reachable?)

  • prioritization (what matters now across thousands of findings?)

  • remediation orchestration (owners, SLAs, dependencies, change windows)

  • ticket triage + comms (Jira/ServiceNow, exceptions, risk acceptance)

  • validation (did the fix work? did we regress? did we reduce exposure?) — CI and test suites are great at code-level validation, but the unsolved part is cross-system: did the fix deploy everywhere it needed to, did exposure actually decrease, and did it stay fixed over time?
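The loop above can be sketched as a toy state machine where a finding only advances when the current stage’s exit check passes. Stage names and evidence keys here are assumptions for illustration, not a real product model:

```python
# Toy lifecycle for a single finding; stage names are illustrative only.
STAGES = ["investigate", "prioritize", "remediate", "triage_comms", "validate", "closed"]

def advance(stage: str, evidence: dict) -> str:
    """Move a finding forward only when the current stage's exit check passes."""
    checks = {
        "investigate": evidence.get("exploitable"),       # is it real and reachable?
        "prioritize": evidence.get("ranked"),             # scored against everything else
        "remediate": evidence.get("fix_deployed"),        # owners, change windows, rollout
        "triage_comms": evidence.get("tickets_updated"),  # Jira/ServiceNow in sync
        "validate": evidence.get("exposure_reduced"),     # did risk actually go down?
    }
    if stage == "closed" or not checks.get(stage):
        return stage                                      # blocked: loop not yet closed
    return STAGES[STAGES.index(stage) + 1]

# A patch can exist while the finding is still stuck short of "closed":
state = "remediate"
state = advance(state, {"fix_deployed": True})   # advances to triage/comms
state = advance(state, {})                       # blocked: tickets not updated
print(state)
```

The sketch makes the failure mode concrete: a generated patch satisfies only one check in one stage, and the finding stalls at whichever later stage lacks evidence.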

AI can reduce triage effort for code-fixable issues. But you still can’t merge 500 PRs at once. The real question becomes: which 50 ship first, when each carries deployment risk and needs human review?

And even that question only covers code. Vulnerabilities found and fixed in source are just the tip of the iceberg. The hardest prioritization question is: across the broader landscape of not only code vulnerabilities, but the tens of thousands of findings from cloud, infrastructure, and identity scanners — what reduces exposure right now?

That’s a context problem, not a code problem.

Key takeaway: detection is commoditizing; execution is the bottleneck

AI models can propose patches in minutes. Enterprises still spend weeks coordinating owners, change windows, exceptions, and verification. As AI makes finding issues cheaper and faster, the constraint shifts to the one thing most tools don’t solve: reliably driving remediation to verified closure across messy, real-world systems.

Even in pure AppSec, "here's a patch" is rarely the finish line — there's still CI, cross-repo tracking, deployment verification, and audit evidence to deal with. And outside AppSec, the work often can't be solved by a Pull Request at all.

That’s why Cogent exists: to be the AI execution layer that captures context and takes action across the entire security lifecycle, moving from discovery to verified closure.

How Claude Code Security plugs into Cogent’s Agent Taskforce

Here’s the future we’re building toward:

Claude Code Security = an upstream source of high-quality findings and fixes

It excels at deep code reasoning, finding subtle issues, and producing patch proposals that are ready for human review. Think of it as high-quality input into the remediation pipeline.

Cogent = the autonomous downstream execution layer that drives those fixes (and everything else) to closure

Cogent takes findings from Claude Code Security alongside signals from cloud, infrastructure, identity, and endpoint scanners — then contextualizes, prioritizes, and orchestrates the work:

  • Correlate across sources: connect a code finding to the actual server it runs on, who owns it, and whether it's internet-facing

  • Deduplicate: cluster related issues so one library upgrade closes 40 findings at once instead of generating 40 separate tickets

  • Prioritize with evidence: rank by reachability, internet exposure, blast radius, and SLA — not just CVSS score

  • Orchestrate end-to-end: route to the right team, track through assignment and follow-ups, and drive closure across Jira, ServiceNow, or whatever the org uses

  • Validate outcomes: confirm the patch was merged and deployed and exposure actually decreased, with audit-ready proof
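The deduplication and prioritization bullets can be made concrete with a small sketch: cluster findings by the single remediation action that closes them, then rank clusters by exposure evidence rather than raw CVSS. All field names and score weights below are invented for illustration:

```python
from collections import defaultdict

# Each finding names the one remediation that would close it.
# Field names and weights are illustrative assumptions, not Cogent's model.
findings = [
    {"id": f"f{i}", "fix": "upgrade log4j", "cvss": 9.0,
     "internet_facing": i < 3, "reachable": True}
    for i in range(40)
] + [
    {"id": "f40", "fix": "rotate stale admin key", "cvss": 6.5,
     "internet_facing": True, "reachable": True},
]

# Deduplicate: one fix action closes many findings at once.
clusters = defaultdict(list)
for f in findings:
    clusters[f["fix"]].append(f)

def evidence_score(group):
    """Rank by exposure evidence, not CVSS alone (weights are arbitrary)."""
    exposed = sum(f["internet_facing"] for f in group)
    reachable = sum(f["reachable"] for f in group)
    return 2.0 * exposed + 1.0 * reachable + 0.1 * max(f["cvss"] for f in group)

ranked = sorted(clusters.items(), key=lambda kv: evidence_score(kv[1]), reverse=True)
for fix, group in ranked:
    print(f"{fix}: closes {len(group)} findings")
```

Here the library upgrade outranks the credential rotation not because its CVSS is higher, but because one action closes 40 reachable findings at once — the “one upgrade instead of 40 tickets” effect.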

So instead of “AI found a bug and opened a PR,” you get:

AI ran the full security operation like a true staff security engineer, asking a human only where judgment is required.

What AI labs entering security really means

Since Claude Code Security launched, we’ve seen a lot of takes. Here’s the framing we think is right: frontier AI didn’t “kill cybersecurity.” Instead, it validated that reasoning + action is becoming the new baseline, and that value shifts toward systems that can ingest many sources, prioritize with evidence, and execute actions with guardrails.

AI will likely commoditize large parts of legacy, rule-based security—especially workflows built on pattern matching and static review. Platforms stitched together through years of narrow acquisitions will have a harder time competing with AI-native systems built from first principles to reason, adapt, and act.

One implication matters most:

Validation becomes the most important product surface

As agents take more actions, the differentiator won’t be “who can generate the most fixes.”

It will be:

  • Can you prove the finding is real?

  • Can you prove the fix reduces risk?

  • Can you prove the agent behaved safely and predictably?

  • Can humans supervise efficiently without becoming the bottleneck?

Claude Code Security emphasizes verification passes and human approval for patches.

Cogent extends that philosophy across the whole security stack: validation, guardrails, and an operator UX where security teams can train agents to behave like staff engineers: reliably, repeatedly, and auditably. Learn more about how we build safe agents here.

The emerging thesis: agents in the toolchain + agents in the operations layer

Security is splitting into two agent-native layers:

  1. Dev-layer agents (where code is written): find issues, propose patches, and speed up PR-based remediation.

  2. Ops-layer agents (where security is run): correlate signals, prioritize work, orchestrate remediation, validate outcomes, and drive closure across systems.

Dev-layer agents thrive on static reasoning. They sit inside the developer workflow and can generate strong fixes quickly.

Ops-layer agents live in the real world: unclear ownership, fragmented tools, deployment risk, change windows, and compliance constraints. This layer isn’t about spotting a misconfiguration. It’s about reducing exposure safely and verifiably across dynamic environments.

As AI expands into CSPM, infrastructure, and identity, the same pattern will hold — and the execution bottleneck is where durable value is created.

AI labs will lead the first layer because they’re already embedded in how software gets built. Cogent is building the second because security isn’t just code—enterprises need an execution fabric that can act across messy systems with guardrails and proof.

About Cogent

Cogent is an applied AI lab building agents to automate critical security tasks — closing the execution gap in cybersecurity between discovering vulnerabilities and actually fixing them.

As attackers increasingly automate the attack lifecycle, defenders remain constrained by fragmented tooling and manual coordination. Cogent’s mission is to become the AI line of defense for the world’s largest organizations — enabling autonomous, real-time security operations that move at machine speed.

Cogent helps Fortune 1000 enterprises automate vulnerability management across three core functions: investigation, prioritization, and remediation.

Join Us

If you're excited about building secure autonomous systems that can operate in mission-critical environments with provable, controlled autonomy—we're hiring.

We're looking for engineers who:

  • Care about both performance and correctness

  • Embrace incremental improvement over perfect designs

  • Want to work at the intersection of cybersecurity and agentic systems

Check out our careers page or reach out directly.