Mar 31, 2026

Cogent and Anthropic Partner to Bring Autonomous Cyber Defense to the Enterprise

Geng Sng, Co-founder & CTO

Security teams are drowning. Not in alerts (though that's true too), but in the time-consuming, tedious work that happens between discovering a vulnerability and actually fixing it. Scanners find thousands of issues. Humans investigate, prioritize, assign, and remediate them one at a time. The math doesn't work, especially when attackers are automating their side with AI.

Cogent was founded to close that gap. We build AI agents that automate the full vulnerability lifecycle, from investigation through remediation. Today, we're announcing a deepened engineering partnership with Anthropic, whose Claude models power the core reasoning layer inside our platform.


Why Claude

We evaluated every major foundation model against real security workflows. The deciding factor was sustained reasoning across long, sequential chains.

Our agents conduct multi-step investigations, chaining 10 to 15 sequential reasoning steps, querying vulnerability data, correlating asset context, mapping threat intelligence, and synthesizing findings. At each step, the agent decides what to examine next based on what it just learned. Claude maintained coherence and instruction-following across these chains where other models lost the thread.
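To make the shape of that workflow concrete, here is a minimal sketch of a sequential investigation loop where each step's output determines what the agent examines next. Everything here is hypothetical and for illustration only: the function names, the hard-coded `KNOWLEDGE` lookup, and the findings are stand-ins for what, in production, would be model calls and live security data.

```python
# Toy "world" the agent can query. In a real system each step would be
# a Claude call plus tool invocations; here a lookup table simulates
# "what the agent learned" and "what it decided to examine next".
KNOWLEDGE = {
    "scan_vuln": {"finding": "CVE-2024-0001 on host-a", "next": "asset_context"},
    "asset_context": {"finding": "host-a is internet-facing", "next": "threat_intel"},
    "threat_intel": {"finding": "active exploitation reported", "next": None},
}

def investigate(start_step, max_steps=15):
    """Run up to max_steps sequential reasoning steps, where each
    step is chosen based on the previous step's result."""
    findings, step = [], start_step
    while step is not None and len(findings) < max_steps:
        result = KNOWLEDGE[step]
        findings.append(result["finding"])
        step = result["next"]  # the agent decides what to look at next
    return findings

report = investigate("scan_vuln")
```

The point of the sketch is the control flow, not the data: coherence has to survive across every hop of the chain, because a wrong decision at step 3 poisons steps 4 through 15.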

That mattered more to us than any benchmark score. Security work demands precision and policy adherence across every step. A model that drifts mid-investigation isn't useful. 


What We've Built on Top

Claude is the reasoning engine. What surrounds it is decades of applied security engineering experience.

Claude operates across multiple layers of our platform. It powers the interactive investigation agent that security teams work with directly. It scores millions of vulnerability-asset combinations in batch. It enriches raw asset and vulnerability data with contextual analysis. And it generates remediation action plans tailored to specific systems and environments. Each of these is a distinct workload with its own engineering around it.

Cogent's platform wraps Claude inside a proprietary execution framework purpose-built for enterprise security environments. That includes guardrails and policy controls that constrain agent behavior within defined boundaries, auditability so every agent decision is explainable and traceable, and safe-action frameworks that let agents move toward autonomous remediation without creating new risk.

Early on, we invested in increasingly complex agent orchestration. As Claude's capabilities improved, we realized the better bet was the opposite direction: simpler agent loops operating inside a richer execution environment. Instead of more agents, we built a more capable world for one agent to operate in. That architectural bet paid off.

None of this works with a vanilla API call. The differentiation lives in how we've engineered the system around the model: the tool orchestration, the security-specific context management, the isolation controls, the integration depth. Another team dropping Claude into a basic workflow wouldn't get close to the same results.


A Partnership, Not Just a Vendor Relationship

We've worked closely with the Anthropic team throughout our development process. The collaboration has shaped how we architect agent workflows, how we think about safe autonomy in high-stakes environments, and how we push the boundaries of what these models can do inside production security operations.

Anthropic builds frontier reasoning models. We operationalize them inside the most demanding enterprise environments. The combination is where we think applied AI in security is actually headed.


What's Ahead

We're advancing toward a future where AI agents don't just recommend fixes. They execute them, safely, within policy, with human oversight where it matters. The foundation model capabilities are improving fast. Our job is to make sure the surrounding system is ready to let them do more, without introducing new risk.

If you’re battling an ever-growing vulnerability backlog, we’d love to show you Cogent in production.