May 7, 2026

Announcing Cogent Research

Geng Sng, Co-founder & CTO

We started Cogent because we saw a structural problem in how cybersecurity gets done. Teams are drowning in findings, stitching together context from dozens of systems, spending most of their time on work that should be automated. Attackers, meanwhile, are using AI to scale every step of compromise. The capability curve is accelerating. Defense cannot stay manual and reactive while offense automates.

We have passed the point where AI is needed to defend against AI. The step-function improvements required for defenders to keep pace need dedicated, sustained research.

Today we're launching Cogent Research, an applied AI research lab focused on the hardest unsolved problems in AI-driven security.



The gap we're closing

There is a structural gap between the teams advancing frontier AI and the environments where the richest security operations data actually exists.

Frontier AI talent is concentrated in big technology companies and AI labs, where world-class research infrastructure and deep model-building expertise are most abundant. What those organizations typically lack, however, is direct access to the complex, high-volume, longitudinal business context inside enterprises. That data is sensitive, permissioned, fragmented across many systems, and inseparable from the operational workflows that give it meaning. It cannot simply be exported, centralized, and studied in isolation.

Security companies are much closer to that operational reality, but they are usually optimized for building and shipping products rather than advancing the frontier of AI research. Cogent was created to bridge these worlds: an AI-native team with the technical depth to build sophisticated models, operating close enough to the execution layer to understand the data, context, and outcome signals that truly matter.


Why the execution layer matters

Most security data lacks the element that matters most: decision context. Alerts, scans, misconfigurations, and exposure data can show what happened. They rarely explain why a choice was made, what constraints shaped it, who owned the system, what alternatives were considered, or which actions ultimately worked.

Cogent operates where tickets get created, exceptions get granted, patches get rolled out, and outcomes get verified. Over time, that lets us build systems that learn an organization's security reasoning: how it trades off risk versus uptime, how it prioritizes across teams, what evidence it trusts. Agents can then emulate expert judgment within the organization's real operating constraints. Cogent's advantage is not just access to security data. It is the ability to turn operational context into intelligence that agents can reliably use, and to apply those insights in our research.


The team

Duc Hiep Chu leads the research effort as Head of Research. He spent years at Google DeepMind, has over 4,000 academic citations, and co-authored the CodeMender paper on AI agents that automatically generate fixes for code vulnerabilities. His background in formal verification maps directly to the hardest open problems in autonomous security.

Anirudh Ravula, our Head of AI, also comes from Google DeepMind, where he worked on transformer architectures and large language models during their foundational development. He brings hands-on experience building the core technologies behind today's most capable AI systems, now applied to security.

They left one of the most prestigious AI labs in the world because the work that matters most requires domain-specific data and feedback loops that pure research settings cannot provide.


What we're working on

The research agenda spans six areas, each tied to a specific gap in what AI can do for security today.

Autonomous security agents. Building agents that take reliable, verified action in mission-critical environments. The model is controlled autonomy: agents act within guardrails, with auditable steps and approvals where needed. Autonomy increases as trust and accuracy compound.
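
The controlled-autonomy pattern above can be sketched as a simple gated execution loop. All names here (Action, AuditLog, run_with_guardrails, the risk labels) are hypothetical illustrations, not Cogent's actual design:

```python
# Minimal sketch of controlled autonomy: low-risk actions execute directly,
# high-risk actions require approval, and every decision is audited.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    risk: str  # "low" | "high" — placeholder for a real risk model

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, event):
        self.entries.append(event)

def run_with_guardrails(actions, approve, log):
    """Execute low-risk actions directly; route high-risk ones to an approver."""
    executed = []
    for action in actions:
        if action.risk == "high" and not approve(action):
            log.record(("blocked", action.name))
            continue
        log.record(("executed", action.name))
        executed.append(action.name)
    return executed

log = AuditLog()
done = run_with_guardrails(
    [Action("rotate-key", "low"), Action("patch-prod-db", "high")],
    approve=lambda a: False,  # stand-in: the human reviewer denies this one
    log=log,
)
```

As trust and accuracy compound, the `approve` callback would auto-grant more categories of action, widening the autonomy envelope without losing the audit trail.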

Formal security proof systems. Applying formal verification to AI-driven actions so that before an agent executes a remediation, there is mathematical certainty it fixes the vulnerability, does not introduce new ones, and preserves intended system behavior.
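
The three obligations above can be made concrete with a toy stand-in. Real formal verification would discharge these symbolically over all inputs; here, exhaustive checking over a tiny domain plays that role, and every function is invented for illustration:

```python
# Toy illustration of pre-execution proof obligations for a patch:
# (1) the patch fixes the vulnerability, (2) it introduces no new failures,
# (3) it preserves intended behavior wherever the old code worked.
DOMAIN = range(-8, 8)  # small enough to check every input exhaustively

def vulnerable(x):
    # Bug: crashes on x == 0.
    return 10 // x

def patched(x):
    return 10 // x if x != 0 else 0

def verify_patch(old, new):
    for x in DOMAIN:
        # Obligations 1 and 2: the new code never fails on the domain.
        try:
            new(x)
        except Exception:
            return False
        # Obligation 3: behavior matches wherever the old code succeeded.
        try:
            expected = old(x)
        except Exception:
            continue  # old code failed here; any safe result is acceptable
        if new(x) != expected:
            return False
    return True

ok = verify_patch(vulnerable, patched)
```

An agent would only execute a remediation for which the corresponding check (in the real system, a machine-checked proof) succeeds.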

Security benchmarking. There is no high-fidelity benchmark for defensive security tasks. Existing ones lack real-world data, cover too narrow a range of domains, or suffer from data contamination. We are building a representative dataset and evaluation framework the AI community can use as a standard reference.
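
The shape of such an evaluation framework can be sketched in a few lines. The tasks, labels, and scoring rule below are illustrative inventions, not the actual benchmark:

```python
# Hypothetical sketch of a defensive-security benchmark harness: each task
# pairs an input artifact with a ground-truth label, and an agent is scored
# on exact matches, with per-task results kept for error analysis.
TASKS = [
    {"id": "triage-001",
     "input": "failed logins from one IP across 40 accounts",
     "label": "credential-stuffing"},
    {"id": "triage-002",
     "input": "outbound DNS queries with high-entropy subdomains",
     "label": "dns-tunneling"},
]

def evaluate(agent, tasks):
    """Return overall accuracy plus per-task pass/fail results."""
    results = [(t["id"], agent(t["input"]) == t["label"]) for t in tasks]
    accuracy = sum(ok for _, ok in results) / len(results)
    return accuracy, results

# A trivial keyword matcher standing in for a real model under evaluation.
def keyword_agent(text):
    return "dns-tunneling" if "DNS" in text else "credential-stuffing"

acc, per_task = evaluate(keyword_agent, TASKS)
```

The hard part is not the harness but the dataset: tasks drawn from real-world operations, spanning many security domains, and held out so models cannot memorize them.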

AI reasoning for security. Using reinforcement learning environments that simulate enterprise security with real-world data patterns, training agents that develop genuine instincts about what matters, what is noise, and what to do next.
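
A reinforcement learning setup of this kind can be sketched as a gym-style environment. The alert stream, action set, and reward shaping below are invented for illustration:

```python
# Toy RL environment for alert triage: the agent observes a severity score
# and chooses to escalate or dismiss; rewards penalize missed threats more
# heavily than wasted analyst time, mirroring real triage incentives.
import random

class TriageEnv:
    ACTIONS = ("escalate", "dismiss")

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # Whether the alert is a true positive is hidden from the agent;
        # true positives skew the observed severity upward.
        self._is_threat = self.rng.random() < 0.3
        severity = self.rng.random() + (0.5 if self._is_threat else 0.0)
        return min(severity, 1.0)

    def step(self, action):
        if action == "escalate":
            return 1.0 if self._is_threat else -0.2
        return -1.0 if self._is_threat else 0.1

env = TriageEnv(seed=42)
obs = env.reset()
reward = env.step("escalate" if obs > 0.5 else "dismiss")
```

Training against environments seeded with real-world data patterns, rather than hand-written rules, is what lets an agent develop instincts about what matters and what is noise.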

Post-trained open-source security models. Fine-tuned open-source models optimized for security tasks, released as open weights for the community.

AI threat research. The AI ecosystem itself is becoming a new attack surface. Agent skills and tool registries are effectively executable code marketplaces, and they are already being weaponized. In one recent wave, the most downloaded agent skill in a popular registry turned out to be malware. Existing security frameworks were not designed for this category of risk.


Contributing back

We are committed to sharing our work: open-source benchmarks, published research, and post-trained models for anyone working at the intersection of AI and security. When new models are released, we want Cogent's security benchmark to be a standard reference for evaluating defensive AI capabilities.

Learn more about the team and our research priorities at cogent.com/research.