Autonomy levels in vulnerability management
Why “full autopilot” isn’t always the right starting point
AI can move faster than your organization's ability to trust it. The teams that succeed with AI-driven vulnerability management start small and expand autonomy deliberately.
The reality of adopting AI autonomy in vulnerability management:
Security teams need to validate outputs before routing work to engineering
Engineering teams need confidence AI recommendations won't introduce new problems
Compliance requires audit trails showing human approval for high-impact changes
Different environments (dev vs. production) warrant different levels of automation
Many organizations start with read-only assistance and then add supervised automation where it's safe. Higher autonomy is earned through demonstrated accuracy and verified outcomes.
A practical autonomy model for vulnerability management
This framework maps how security teams typically progress from manual processes to self-healing infrastructure. Each level builds on the previous one.
Level 0: Manual vulnerability management
Humans do everything. Vulnerability scanners report findings, but all investigation and coordination is manual.
What this looks like:
Security analysts manually review scanner outputs
Ownership determined by asking around or checking stale CMDBs
Tickets created one-by-one with generic descriptions
No systematic verification that fixes worked
Level 1: AI-assisted investigation
The platform validates which findings are real and relevant in your environment, and it autonomously investigates and enriches each finding with supporting data (e.g., asset ownership).
What AI does autonomously:
Validates which findings are truly critical vulnerabilities vs. false positives
Gathers asset context
Assesses exploitability
Enriches each finding with business context (criticality, sensitivity, compliance scope)
What humans do:
Review AI-enriched data
Make all prioritization and routing decisions manually
Create and assign tickets themselves
What gets logged:
Every inference with confidence score and data sources
Activity timeline showing which agents investigated what
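For illustration only, a minimal sketch of what a Level 1 enriched finding might look like as a record an analyst reviews. The field names (confidence, data_sources, business_context, and so on) and values are assumptions for this sketch, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class EnrichedFinding:
    """Hypothetical record of a Level 1 AI-enriched finding (illustrative fields only)."""
    finding_id: str
    cve_id: str
    asset: str                       # e.g., hostname or service name from inventory
    false_positive: bool             # validation verdict
    exploitable: bool                # exploitability assessment
    business_context: dict           # criticality, data sensitivity, compliance scope
    confidence: float                # 0.0-1.0, logged with every inference
    data_sources: list = field(default_factory=list)  # evidence behind the inference

# What an analyst might review before making prioritization and routing decisions manually
finding = EnrichedFinding(
    finding_id="VULN-1042",
    cve_id="CVE-2024-0001",          # placeholder identifier
    asset="payments-api",
    false_positive=False,
    exploitable=True,
    business_context={"criticality": "high", "compliance": ["PCI DSS"]},
    confidence=0.92,
    data_sources=["scanner", "CMDB", "cloud inventory"],
)
print(finding)
```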
Level 2: Supervised workflow automation
The platform drafts and routes tickets with full context (ownership, why it matters, steps to fix), but humans approve each ticket before dispatch and perform the remediation steps themselves.
What AI does autonomously:
Everything from Level 1, plus:
Bundles related vulnerabilities into actionable remediation tasks
Generates context-rich tickets explaining: what's at risk, why it matters to the business, step-by-step remediation guidance, and expected impact
Routes tickets to correct teams based on asset ownership and remediation type
Proposes SLA deadlines based on risk level and policy
What humans do:
Review and approve each ticket before it's sent
Can edit ticket content, routing, or SLA
Manually track remediation progress
Approve exceptions and policy changes
What gets logged:
All ticket drafts with reasoning chains
Human edits and approval timestamps
Which agent generated which recommendation
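A minimal sketch of the Level 2 approval gate, assuming a hypothetical ticket structure and dispatch function (none of these names come from the product): the AI drafts a context-rich ticket, and nothing is sent until a human approves it, with edits and the approval timestamp logged.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TicketDraft:
    """Hypothetical Level 2 ticket draft carrying the context described above."""
    title: str
    owner_team: str           # routed from asset ownership
    whats_at_risk: str
    why_it_matters: str
    remediation_steps: list
    proposed_sla_days: int
    reasoning_chain: list = field(default_factory=list)   # logged with the draft

def dispatch_with_approval(draft: TicketDraft, approver: str, approved: bool,
                           edits: dict | None = None) -> dict:
    """Level 2 gate: nothing is dispatched until a human approves; edits are applied
    and logged along with the approval timestamp."""
    for field_name, value in (edits or {}).items():
        setattr(draft, field_name, value)
    audit = {
        "approver": approver,
        "approved": approved,
        "edits": edits or {},
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return {"status": "dispatched" if approved else "held", "audit": audit}

# Example: a human tightens the SLA before approving dispatch
draft = TicketDraft("Patch OpenSSL on payments-api", "platform-team",
                    "Remote code execution exposure", "PCI-scoped service",
                    ["Upgrade OpenSSL to the patched release", "Restart the service"], 14)
print(dispatch_with_approval(draft, approver="j.doe", approved=True,
                             edits={"proposed_sla_days": 7}))
```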
Level 3: Autonomous workflow with manual remediation
The platform generates fix artifacts (e.g., PRs, IaC patches, config diffs) along with a clear explanation of impact and rollback considerations. Humans review the artifacts and merge or run them.
What AI does autonomously:
Everything from Level 2, plus:
Automatically creates and routes work without human approval (for pre-approved workflows)
Tracks remediation progress and sends escalations for SLA breaches
Updates remediation plans when new information arrives (new scan results, ownership changes)
Closes tickets automatically when verification confirms the fix
What humans still approve:
Exception requests from remediation teams
Policy changes or SLA adjustments
Actions below the configurable confidence threshold (e.g., >95% confidence proceeds automatically, anything lower escalates for approval; see the sketch at the end of this level)
What humans do manually:
Actually perform the remediation (apply patches, update configs, deploy fixes)
What gets logged:
Ticket creation with full reasoning
Escalation triggers and notifications sent
Verification checks performed and results
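A small sketch of the confidence gate described above, assuming a hypothetical route_action function and an illustrative 95% threshold; the threshold value and action strings are examples, not product defaults.

```python
def route_action(action: str, confidence: float, threshold: float = 0.95) -> str:
    """Hypothetical Level 3 confidence gate: pre-approved workflow actions proceed
    automatically above the threshold; anything below escalates to a human."""
    if confidence >= threshold:
        return f"auto-execute: {action}"        # performed and logged with full reasoning
    return f"escalate: {action}"                # queued for human review and approval

# Example
print(route_action("create and route remediation ticket", 0.97))      # auto-execute
print(route_action("close ticket after partial verification", 0.80))  # escalate
```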
Level 4: Autonomous remediation (supervised)
The platform can apply fixes automatically to lower environments within pre-approved guardrails and escalates exceptions or uncertainty to humans.
What AI does autonomously:
Everything from Level 3, plus:
Provides clear explanation of impact and rollback steps
Applies fixes automatically in lower environments (dev, staging, test)
Validates fixes worked through post-deployment verification
What humans still approve:
Review and merge AI-generated fixes for production
Approve deployment to production environments
Validate fixes that failed automated verification
What gets logged:
Fix generation with impact analysis
Automated deployment results in lower environments
Verification checks (passed/failed) with evidence
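A minimal sketch of the Level 4 guardrail, assuming a hypothetical apply_fix function and environment names: fixes auto-apply only in lower environments and must pass post-deployment verification, while production is always handed back to a human.

```python
AUTO_APPLY_ENVIRONMENTS = {"dev", "staging", "test"}   # pre-approved guardrail

def apply_fix(fix_id: str, environment: str, run_verification) -> str:
    """Hypothetical Level 4 flow: auto-apply in lower environments, then verify;
    production deployments go to a human for review, merge, and approval."""
    if environment not in AUTO_APPLY_ENVIRONMENTS:
        return "needs-human-approval"            # e.g., production
    # (deployment to the lower environment would happen here)
    if not run_verification(fix_id):             # post-deployment verification
        return "escalate-failed-verification"    # humans validate failed checks
    return "applied-and-verified"                # logged with evidence

# Example with a stubbed verification check
print(apply_fix("FIX-77", "staging", run_verification=lambda _: True))
print(apply_fix("FIX-77", "production", run_verification=lambda _: True))
```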
Level 5: Self-healing apps and infrastructure
Fully autonomous remediation for all environments, including production. This is the full “self-healing technology” vision realized.
What AI does autonomously:
Everything from Level 4, plus:
Fully autonomous remediation across all environments including production
Proactive hardening of systems based on threat intelligence
Adaptive learning from remediation outcomes to optimize future fixes
Continuous verification and automatic re-remediation if vulnerabilities reappear
What humans do:
Monitor dashboards showing autonomous actions taken
Investigate anomalies flagged by AI
Adjust policies and guardrails as needed
Override when necessary (human authority always available)
What gets logged:
Every autonomous action with complete audit trail
Verification and re-verification cycles
Policy adjustments made by system based on outcomes
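For illustration, a sketch of one continuous verification and re-remediation pass, assuming hypothetical scan, verify, and remediate callables; the structure of the audit entries is an assumption for this example.

```python
def self_healing_cycle(scan, verify, remediate) -> list:
    """One hypothetical Level 5 pass: continuously verify, automatically
    re-remediate anything that has reappeared, and record every action."""
    audit_trail = []
    for finding in scan():                        # continuous verification sweep
        if not verify(finding):                   # vulnerability has reappeared
            outcome = remediate(finding)          # automatic re-remediation
            audit_trail.append({"finding": finding,
                                "action": "re-remediate",
                                "outcome": outcome})
    return audit_trail

# Example with stubbed scan/verify/remediate callables
trail = self_healing_cycle(
    scan=lambda: ["outdated TLS config on web-frontend"],
    verify=lambda f: False,            # pretend the earlier fix did not hold
    remediate=lambda f: "re-patched",
)
print(trail)
```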
Configuring autonomy by context
Autonomy doesn’t have to be all-or-nothing across your entire environment. Organizations often configure different levels for different contexts.
Example autonomy application by environment and context:
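For illustration only, one possible mapping of environment and asset class to a maximum autonomy level. The environments, asset classes, and level numbers below are assumptions drawn from the guidance in this article (lower environments tolerate more autonomy; production and compliance-scoped assets warrant stricter controls), not recommendations or product defaults.

```python
# Illustrative only: a possible context-to-autonomy-level policy.
MAX_AUTONOMY_BY_CONTEXT = {
    ("dev", "standard"): 5,
    ("staging", "standard"): 4,
    ("production", "standard"): 3,
    ("production", "compliance-scoped"): 2,
}

def max_autonomy(environment: str, asset_class: str) -> int:
    # Default to the most conservative automated level when a context is not configured.
    return MAX_AUTONOMY_BY_CONTEXT.get((environment, asset_class), 1)

print(max_autonomy("dev", "standard"))                   # 5
print(max_autonomy("production", "compliance-scoped"))   # 2
```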
Common mistakes to avoid
Mistake: Jumping to Level 3+ without validating Level 1-2 accuracy
Why it fails: If enrichment and routing are wrong, autonomous actions amplify those errors. Build trust in data quality first.
Mistake: Same autonomy level for all contexts
Why it fails: Development environments can handle more autonomy than production. Compliance-scoped assets need stricter controls. Context matters.
Mistake: No confidence thresholds
Why it fails: Low-confidence recommendations should escalate to humans. Without thresholds, incorrect actions get taken automatically.
Mistake: Insufficient audit logging
Why it fails: If something goes wrong, you need to understand what happened and why. Audit trails are non-negotiable.
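As a sketch of the kind of record that makes this possible, here is a hypothetical audit entry; the function name and fields are assumptions, chosen to capture what happened, why, with what confidence, and who approved it.

```python
from datetime import datetime, timezone

def audit_record(action: str, reasoning: str, confidence: float,
                 approver: str | None = None) -> dict:
    """Hypothetical audit entry: enough detail to reconstruct what happened and why."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "reasoning": reasoning,          # the chain behind the recommendation
        "confidence": confidence,
        "approver": approver,            # None for fully autonomous actions
    }

print(audit_record("dispatched ticket VULN-1042",
                   "critical asset with a known exploited vulnerability",
                   confidence=0.97, approver="j.doe"))
```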