
COGENT RESEARCH
Applied AI research for cybersecurity
Cogent Research works on the hardest problems in AI-driven security, with access to real enterprise data that no pure research setting can provide.
8,000+
Combined academic citations
2
Former Google DeepMind researchers
50%
Of Fortune 500 companies use the team's prior work
WHY COGENT
A problem neither side can solve alone
Frontier AI talent clusters in big tech labs. Real security operations data lives inside enterprises. Nobody has had both. Cogent does.
AI Labs
World-class research talent and compute. No access to real enterprise security data.
Cogent
Frontier AI researchers with direct access to real operational data and outcome signals.
Security Vendors
Deep domain expertise and massive data. No frontier AI research capability.
THE TEAM
The researchers behind Cogent

Duc Hiep Chu
Head of Research
Former Google DeepMind. 4,000+ citations. Co-author of CodeMender, an AI agent that fixes code vulnerabilities before they reach downstream systems.

Anirudh Ravula
Head of AI
Former Google DeepMind. 4,000+ citations. Led teams building transformer architectures and large language models. Now applying that foundation to security.

Geng Sng
Co-founder & CTO
Built one of the best ML fraud detection systems in the world at Abnormal Security. Holds patents for the work. That technology is used by half the Fortune 500.
RESEARCH PRIORITIES
What we're working on
Autonomous Security Agents
Controlled autonomy: agents act within guardrails, auditable steps, and continuous verification. Autonomy increases as trust compounds.
Formal Security Proofs
Mathematical certainty that an action is correct before an agent executes it. Fix the vulnerability, don't introduce new ones, preserve intended behavior.
Security Benchmarking
A standard evaluation framework for defensive AI. Designed to guard against data contamination, support automatic evaluation, and capture the cost-quality trade-offs that existing benchmarks miss.
AI Reasoning for Security
Reinforcement learning on real-world data. Agents develop security instincts through experience: what matters, what’s noise, what to do next.
Open-Source Models
Post-trained models for security tasks, released as open weights. Embedding models and agentic models for defensive workflows.
AI Threat Research
Agent skills and tool registries are becoming weaponized code marketplaces. We publish threat research on AI-specific attack vectors.
COGENT AI FELLOWSHIP
Do the work that matters most
A residency for researchers in formal verification, reinforcement learning, or AI safety who want to apply their work to real enterprise security data, with the kind of domain-specific feedback loops that pure research settings cannot provide.

