Synthetic intelligence (AI) holds large promise for enhancing cyber protection and making the lives of safety practitioners simpler. It will possibly assist groups lower by way of alert fatigue, spot patterns sooner, and convey a stage of scale that human analysts alone cannot match. However realizing that potential depends upon securing the programs that make it doable.
Each group experimenting with AI in safety operations is, knowingly or not, increasing its assault floor. With out clear governance, robust identification controls, and visibility into how AI makes its choices, even well-intentioned deployments can create threat sooner than they cut back it. To really profit from AI, defenders must method securing it with the identical rigor they apply to another important system. Meaning establishing belief within the information it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured appropriately, AI can amplify human functionality as a substitute of changing it to assist practitioners work smarter, reply sooner, and defend extra successfully.
Establishing Belief for Agentic AI Techniques
As organizations start to combine AI into defensive workflows, identification safety turns into the muse for belief. Each mannequin, script, or autonomous agent working in a manufacturing atmosphere now represents a brand new identification — one able to accessing information, issuing instructions, and influencing defensive outcomes. If these identities aren’t correctly ruled, the instruments meant to strengthen safety can quietly turn into sources of threat.
The emergence of Agentic AI programs make this particularly necessary. These programs do not simply analyze; they might act with out human intervention. They triage alerts, enrich context, or set off response playbooks beneath delegated authority from human operators. Every motion is, in impact, a transaction of belief. That belief should be sure to identification, authenticated by way of coverage, and auditable finish to finish.
The identical ideas that safe folks and companies should now apply to AI brokers:
- Scoped credentials and least privilege to make sure each mannequin or agent can entry solely the information and capabilities required for its process.
 - Sturdy authentication and key rotation to stop impersonation or credential leakage.
 - Exercise provenance and audit logging so each AI-initiated motion may be traced, validated, and reversed if obligatory.
 - Segmentation and isolation to stop cross-agent entry, making certain that one compromised course of can not affect others.
 
In follow, this implies treating each agentic AI system as a first-class identification inside your IAM framework. Every ought to have an outlined proprietor, lifecycle coverage, and monitoring scope identical to any person or service account. Defensive groups ought to repeatedly confirm what these brokers can do, not simply what they have been meant to do, as a result of functionality typically drifts sooner than design. With identification established as the muse, defenders can then flip their consideration to securing the broader system.
Securing AI: Finest Practices for Success
Securing AI begins with defending the programs that make it doable — the fashions, information pipelines, and integrations now woven into on a regular basis safety operations. Simply as
we safe networks and endpoints, AI programs should be handled as mission-critical infrastructure that requires layered and steady protection.
The SANS Safe AI Blueprint outlines a Shield AI monitor that gives a transparent start line. Constructed on the SANS Important AI Safety Pointers, the blueprint defines six management domains that translate immediately into follow:
- Entry Controls: Apply least privilege and robust authentication to each mannequin, dataset, and API. Log and overview entry repeatedly to stop unauthorized use.
 - Information Controls: Validate, sanitize, and classify all information used for coaching, augmentation, or inference. Safe storage and lineage monitoring cut back the chance of mannequin poisoning or information leakage.
 - Deployment Methods: Harden AI pipelines and environments with sandboxing, CI/CD gating, and red-teaming earlier than launch. Deal with deployment as a managed, auditable occasion, not an experiment.
 - Inference Safety: Shield fashions from immediate injection and misuse by implementing enter/output validation, guardrails, and escalation paths for high-impact actions.
 - Monitoring: Repeatedly observe mannequin habits and output for drift, anomalies, and indicators of compromise. Efficient telemetry permits defenders to detect manipulation earlier than it spreads.
 - Mannequin Safety: Model, signal, and integrity-check fashions all through their lifecycle to make sure authenticity and stop unauthorized swaps or retraining.
 
These controls align immediately NIST’s AI Threat Administration Framework and the OWASP High 10 for LLMs, which highlights the most typical and consequential vulnerabilities in AI programs — from immediate injection and insecure plugin integrations to mannequin poisoning and information publicity. Making use of mitigations from these frameworks inside these six domains helps translate steering into operational protection. As soon as these foundations are in place, groups can concentrate on utilizing AI responsibly by figuring out when to belief automation and when to maintain people within the loop.
Balancing Augmentation and Automation
AI programs are able to helping human practitioners like an intern that by no means sleeps. Nevertheless, it’s important for safety groups to distinguish what to automate from what to enhance. Some duties profit from full automation, particularly these which might be repeatable, measurable, and low-risk if an error happens. Nevertheless, others demand direct human oversight as a result of context, instinct, or ethics matter greater than velocity.
Menace enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes the place consistency outperforms creativity. In contrast, incident scoping, attribution, and response choices depend on context that AI can not absolutely grasp. Right here, AI ought to help by surfacing indicators, suggesting subsequent steps, or summarizing findings whereas practitioners retain determination authority.
Discovering that stability requires maturity in course of design. Safety groups ought to categorize workflows by their tolerance for error and the price of automation failure. Wherever the chance of false positives or missed nuance is excessive, hold people within the loop. Wherever precision may be objectively measured, let AI speed up the work.
Be part of us at SANS Surge 2026!
I am going to dive deeper into this matter throughout my keynote at SANS Surge 2026 (Feb. 23-28, 2026), the place we’ll discover how safety groups can guarantee AI programs are protected to rely on. In case your group is shifting quick on AI adoption, this occasion will aid you transfer extra securely. Be part of us to attach with friends, be taught from specialists, and see what safe AI in follow actually appears like.
Register for SANS Surge 2026 right here.
Word: This text was contributed by Frank Kim, SANS Institute Fellow.
