Autonomy levels in vulnerability management

Most teams don't jump straight to full automation. The safer path is earned autonomy: start with recommendations and review, then expand what the system can do as trust grows.

A practical autonomy model for vulnerability management

This framework maps how security teams typically progress from manual processes to self-healing infrastructure. Each level builds on the previous one.

Level 0: Manual vulnerability management

Humans do everything. Vulnerability scanners report findings, but all investigation and coordination is manual.

What this looks like:

  • Security analysts manually review scanner outputs

  • Ownership determined by asking around or checking stale CMDBs

  • Tickets created one-by-one with generic descriptions

  • No systematic verification that fixes worked

Level 1: AI-assisted investigation

The platform validates which findings are real and relevant in your environment, then autonomously investigates and enriches them with supporting data (e.g., asset ownership).

What AI does autonomously:

  • Validates which findings are truly critical vulnerabilities vs. false positives

  • Gathers asset context

  • Assesses exploitability 

  • Enriches each finding with business context (criticality, sensitivity, compliance scope)

What humans do:

  • Review AI-enriched data

  • Make all prioritization and routing decisions manually

  • Create and assign tickets themselves

What gets logged:

  • Every inference with confidence score and data sources

  • Activity timeline showing which agents investigated what
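
To make that concrete, here's a minimal sketch of what a confidence-scored inference record could look like. The field names below (finding_id, agent, confidence, sources) are illustrative, not Cogent's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InferenceRecord:
    """One logged inference: what an agent concluded, how sure it was, and why."""
    finding_id: str          # scanner finding this inference relates to
    agent: str               # which investigating agent produced it
    claim: str               # the enriched fact, e.g. resolved asset owner
    confidence: float        # 0.0-1.0 score attached to the claim
    sources: list[str] = field(default_factory=list)  # data sources consulted
    logged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: an ownership inference backed by two data sources (all values made up)
record = InferenceRecord(
    finding_id="CVE-2024-1234/web-frontend-07",
    agent="ownership-agent",
    claim="Asset owned by payments-platform team",
    confidence=0.92,
    sources=["github-codeowners", "deploy-pipeline-metadata"],
)
print(record)
```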

Level 2: Supervised workflow automation

The platform drafts and routes tickets with full context (ownership, why it matters, steps to fix), but humans approve dispatch and actually perform the remediation steps manually.

What AI does autonomously:

  • Everything from Level 1, plus:

  • Bundles related vulnerabilities into actionable remediation tasks

  • Generates context-rich tickets explaining: what's at risk, why it matters to the business, step-by-step remediation guidance, and expected impact

  • Routes tickets to correct teams based on asset ownership and remediation type

  • Proposes SLA deadlines based on risk level and policy
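
As a rough illustration, SLA proposal at this level can be as simple as a policy lookup keyed by risk level. The risk tiers and day counts below are hypothetical, not a recommended policy:

```python
from datetime import date, timedelta

# Hypothetical policy: remediation SLA in days, keyed by risk level.
SLA_POLICY_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def propose_sla(risk_level: str, opened: date) -> date:
    """Propose an SLA deadline from the policy table; humans can still edit it."""
    days = SLA_POLICY_DAYS.get(risk_level, 90)  # default to the loosest tier
    return opened + timedelta(days=days)

print(propose_sla("critical", date(2025, 1, 6)))  # -> 2025-01-13
```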

What humans do:

  • Review and approve each ticket before it's sent

  • Can edit ticket content, routing, or SLA

  • Manually track remediation progress

  • Approve exceptions and policy changes

What gets logged:

  • All ticket drafts with reasoning chains

  • Human edits and approval timestamps

  • Which agent generated which recommendation

Level 3: Autonomous orchestration, manual remediation

The platform generates fix artifacts (e.g., PRs, IaC patches, config diffs) plus a clear explanation of impact and rollback considerations. Humans review and merge or run them.

What AI does autonomously:

  • Everything from Level 2, plus:

  • Automatically creates and routes work without human approval (for pre-approved workflows)

  • Tracks remediation progress and sends escalations for SLA breaches

  • Updates remediation plans when new information arrives (new scan results, ownership changes)

  • Closes tickets automatically when verification confirms fix
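
For example, the SLA-breach escalation above can reduce to a periodic sweep over open tickets. The ticket shape here is an assumption for the sketch:

```python
from datetime import date

def escalate_sla_breaches(tickets: list[dict], today: date) -> list[str]:
    """Return escalation notices for open tickets past their SLA deadline."""
    return [
        f"SLA BREACH: {t['id']} due {t['due']}, owner {t['owner']}"
        for t in tickets
        if t["status"] == "open" and t["due"] < today
    ]

# Hypothetical tickets to show the sweep in action
tickets = [
    {"id": "VULN-101", "due": date(2025, 1, 2), "owner": "platform", "status": "open"},
    {"id": "VULN-102", "due": date(2025, 2, 1), "owner": "payments", "status": "open"},
]
for notice in escalate_sla_breaches(tickets, today=date(2025, 1, 10)):
    print(notice)
```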

What humans still approve:

  • Exception requests from remediation teams

  • Policy changes or SLA adjustments

  • Actions below the confidence threshold (configurable; e.g., ≥95% confidence proceeds automatically, anything lower escalates for approval)
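
Here's a minimal sketch of that threshold check, assuming a single configurable cutoff; in practice the threshold would likely vary by action type and environment:

```python
AUTO_PROCEED_THRESHOLD = 0.95  # configurable cutoff; illustrative value

def route_action(action: str, confidence: float) -> str:
    """Proceed automatically above the threshold; escalate to a human below it."""
    if confidence >= AUTO_PROCEED_THRESHOLD:
        return f"AUTO: {action} (confidence {confidence:.2f})"
    return f"ESCALATE: {action} needs human approval (confidence {confidence:.2f})"

print(route_action("close ticket VULN-812", 0.97))
print(route_action("reassign owner of db-cluster-3", 0.71))
```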

What humans do manually:

  • Actually perform the remediation (apply patches, update configs, deploy fixes)

What gets logged:

  • Ticket creation with full reasoning

  • Escalation triggers and notifications sent

  • Verification checks performed and results

Level 4: Autonomous remediation (supervised)

The platform can apply fixes automatically to lower environments within pre-approved guardrails and escalates exceptions or uncertainty to humans.

What AI does autonomously:

  • Everything from Level 3, plus:

  • Provides clear explanation of impact and rollback steps

  • Applies fixes automatically in lower environments (dev, staging, test)

  • Validates fixes worked through post-deployment verification
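
One way to picture the guardrail is an environment allowlist checked before any automatic apply. The environment names and the verify hook below are placeholders, not the platform's actual interface:

```python
# Hypothetical guardrail: environments where automatic apply is pre-approved.
AUTO_APPLY_ENVIRONMENTS = {"dev", "staging", "test"}

def apply_fix(fix_id: str, environment: str, verify) -> str:
    """Apply automatically only in pre-approved environments, then verify."""
    if environment not in AUTO_APPLY_ENVIRONMENTS:
        return f"HOLD {fix_id}: {environment} requires human review and merge"
    if not verify(fix_id):                      # post-deployment verification
        return f"ROLLBACK {fix_id}: verification failed in {environment}"
    return f"APPLIED {fix_id} to {environment} and verified"

# Stubbed verification for the sketch; a real check would probe the deployed service.
print(apply_fix("fix-2041", "staging", verify=lambda _: True))
print(apply_fix("fix-2041", "production", verify=lambda _: True))
```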

What humans still do:

  • Review and merge AI-generated fixes for production

  • Approve deployment to production environments

  • Validate fixes that failed automated verification

What gets logged:

  • Fix generation with impact analysis

  • Automated deployment results in lower environments

  • Verification checks (passed/failed) with evidence

Level 5: Self-healing apps and infrastructure

Fully autonomous remediation across all environments, including production. This brings you to the full “self-healing technology” vision.

What AI does autonomously:

  • Everything from Level 4, plus:

  • Fully autonomous remediation across all environments including production

  • Proactive hardening of systems based on threat intelligence

  • Adaptive learning from remediation outcomes to optimize future fixes

  • Continuous verification and automatic re-remediation if vulnerabilities reappear
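
A toy version of that re-verification cycle: rescan, and if a previously fixed finding reappears, reopen remediation automatically. The finding identifiers are made up for the sketch:

```python
def reverify(fixed_findings: set[str], rescan_results: set[str]) -> set[str]:
    """Return findings that were marked fixed but reappeared in the latest scan."""
    return fixed_findings & rescan_results

fixed = {"CVE-2024-1234/web-frontend-07", "CVE-2023-9876/api-gw-02"}
latest_scan = {"CVE-2023-9876/api-gw-02"}        # one fix regressed
for finding in reverify(fixed, latest_scan):
    print(f"REOPEN: {finding} reappeared; triggering re-remediation")
```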

What humans do:

  • Monitor dashboards showing autonomous actions taken

  • Investigate anomalies flagged by AI

  • Adjust policies and guardrails as needed

  • Override when necessary (human authority always available)

What gets logged:

  • Every autonomous action with complete audit trail

  • Verification and re-verification cycles

  • Policy adjustments made by system based on outcomes

Configuring autonomy by context

Autonomy doesn’t have to be binary across your entire environment. Organizations often configure different levels for different contexts.

Example autonomy application by environment and context:

| Environment/context | Autonomy level | Rationale |
|---|---|---|
| Development environment | Level 4-5 | Low business impact, fast feedback loops valuable |
| Staging/Test | Level 3-4 | Moderate risk, good for piloting autonomous fixes |
| Production (internal tools) | Level 3 | Higher risk, but lower impact to customers |
| Production (customer-facing) | Level 2-3 | High risk, requires approval |
| Configuration changes | Level 3-4 | Medium risk, reversible action |
| Standard patches | Level 2-3 | Medium risk, sometimes irreversible |
| Architectural changes | Level 2 | High impact, requires review |
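
Expressed as configuration, the table above might look something like the mapping below. The keys and level ranges are illustrative, and the stricter of environment and change type wins:

```python
# Hypothetical per-context autonomy policy mirroring the table above.
# Values are (min_level, max_level) the platform may operate at.
AUTONOMY_POLICY = {
    "env:development":          (4, 5),
    "env:staging-test":         (3, 4),
    "env:production-internal":  (3, 3),
    "env:production-customer":  (2, 3),
    "change:configuration":     (3, 4),
    "change:standard-patch":    (2, 3),
    "change:architectural":     (2, 2),
}

def allowed_level(env_key: str, change_key: str) -> int:
    """Effective autonomy is capped by the stricter of environment and change type."""
    env_max = AUTONOMY_POLICY[env_key][1]
    change_max = AUTONOMY_POLICY[change_key][1]
    return min(env_max, change_max)

print(allowed_level("env:production-customer", "change:standard-patch"))  # -> 3
```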

Common mistakes to avoid

Mistake: Jumping to Level 3+ without validating Level 1-2 accuracy

Why it fails: If enrichment and routing are wrong, autonomous actions amplify those errors. Build trust in data quality first.

Mistake: Same autonomy level for all contexts

Why it fails: Development environments can handle more autonomy than production. Compliance-scoped assets need stricter controls. Context matters.

Mistake: No confidence thresholds

Why it fails: Low-confidence recommendations should escalate to humans. Without thresholds, incorrect actions get taken automatically.

Mistake: Insufficient audit logging

Why it fails: If something goes wrong, you need to understand what happened and why. Audit trails are non-negotiable.
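
As a final sketch, an append-only audit entry that captures the action, the reasoning behind it, and the outcome. The fields are illustrative, not a prescribed schema:

```python
import json
from datetime import datetime, timezone

def audit_entry(action: str, reasoning: str, outcome: str) -> str:
    """Serialize one immutable audit record; append-only storage is assumed."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "reasoning": reasoning,
        "outcome": outcome,
    })

# Example: a record explaining why a ticket was auto-closed (values made up)
print(audit_entry(
    action="auto-closed VULN-101",
    reasoning="rescan showed finding no longer present; confidence 0.98",
    outcome="closed",
))
```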