As we enter one other yr outlined by the adoption of AI, CISOs face extra cyberthreats and elevated demand to defend their organizations. InformationWeek spoke with 5 CISOs to get a way of what they anticipate from the expertise in 2026: how it is going to be used within the palms of menace actors, its capabilities as a defensive software, and what safety leaders need and want from the expertise because it turns into more and more ingrained within the cloth of their tech stacks.
The menace panorama
In 2025, menace actors used AI to hone their campaigns and develop the scale of assaults. Phishing assaults obtained tougher to identify; AI simply removes the outdated inform of poor grammar. And AI makes it simpler to solid hyperpersonalized lures for extra victims.
“Proper now, we’re seeing about 90% of social engineering phishing kits have AI deepfake expertise accessible in them,” stated Roger Grimes, CISO advisor at safety consciousness coaching firm KnowBe4.
To this point, AI has sharpened outdated techniques. That pattern is simply going to ramp up as time goes on. Wendi Whitmore, chief safety intelligence officer at cybersecurity firm Palo Alto Networks, described many of the assaults fueled by AI as “evolutionary and never revolutionary,” however that would change as menace actors transfer via their very own studying curves.Â
The cyberattack executed by suspected Chinese language state actors who manipulated Anthropic presaged the way forward for cyberattacks: large-scale and largely autonomous.
“The way forward for cybercrime is dangerous guys’ AI bots in opposition to good guys’ AI bots, and one of the best algorithms will win,” Grimes stated. “That is the way forward for all cybersecurity from right here on out.”
As that future approaches, hackers will search for methods to make use of AI to execute assaults and seek for vulnerabilities within the AI techniques and instruments that enterprises use.
“The factor that’s most regarding to a CISO is that the LLMs are going to be the honeypots. That is going to be the place that any hacker’s going to wish to assault as a result of that is the place all the info’s at,” stated Jill Knesek, CISO at BlackLine, a cloud-based monetary operations administration platform.
Grimes additionally anticipated a rise in hacks of the mannequin context protocol (MCP), Anthropic’s open supply customary that allows AI techniques to speak with exterior techniques. Risk actors can leverage strategies corresponding to immediate injection to use MCP servers.
As extra widespread assaults perpetrated by AI happen, CISOs will grapple with questions round identification, in keeping with Whitmore. “I do not suppose that the trade collectively has a superb understanding but of who’s accountable when it is really an artificial identification that has created a large, widespread assault,” she stated. That duty might relaxation on the enterprise unit that deployed it, the CISO who permitted using the instruments or the precise group that is leveraging it.
AI as a cyberdefense software
As menace actors beef up their AI capabilities, so should defenders. In 2025, CISOs discovered simply what AI can do for his or her cybersecurity methods. AI’s potential to sift via mountains of knowledge and discern patterns proved to be certainly one of its foremost boons for cybersecurity groups.
“Inside for my group, that has been a recreation changer as a result of we’re seeing that now my menace analysts can take 10 minutes to analysis one thing as a substitute of an hour going to separate instruments,” stated Don Pecha, CISO at FNTS, a managed cloud and mainframe providers firm for regulated industries.Â
AI can discover the proverbial needle within the haystack of threats, each actual and false positives, and allow analysts to make quicker selections. It could automate a lot of the digging and assessment that beforehand represented numerous tedious, handbook work for analysts.
Whereas cybersecurity groups seize these advantages, AI as a cyberdefense software has loads of room to develop. “We’re not seeing … actually purpose-built AI safety for essentially the most half. What you are seeing is legacy safety with some AI functionality and performance,” Knesek stated.
As 2026 begins, extra AI safety options will emerge, notably within the realm of agentic AI. Grimes stated he expects that patching bots can be one sort of AI agent granted extra autonomy inside organizations.
“You are not going to have the ability to battle AI that is making an attempt to compromise you with conventional software program. You are going to want a patching bot,” he stated.
The phrase “human within the loop” is held up by AI adopters because the gold customary of accountable use, however as agentic AI takes off, CISOs and their groups should grapple with questions on how a lot autonomy these brokers are granted. What occurs when there’s much less and fewer human involvement?
“I believe some persons are going to say, ‘Oh, that is nice. I’ll consider all the pieces the seller stated. I’ll give it full autonomy,'” Grimes stated. That would result in operational interruption.
Moreover, as AI brokers grow to be extra autonomous, they are going to be frequent targets of malicious actors. “With a view to shield the human, you are going to have to guard the AI brokers that the human is utilizing,” Grimes stated.
The CISO’s AI want listingÂ
For all of the fevered predictions round AI, uncertainties stay for the longer term. CISOs should sustain with quickly altering expertise. As they press forward, what do they want and wish from the expertise?
-
Operational efficiencies. It is time for AI to ship. CISOs and CIOs need AI-driven instruments which have a measurable impression. “As we transfer ahead, they’ll anticipate to see increasingly purpose-built capabilities [where] actually you may simply measure operational efficiencies,” Knesek stated.That can be true of AI throughout enterprise features.
-
Quicker safety opinions. Many enterprises could have a look at rolling out a number of new applied sciences in 2026, a frightening prospect from a safety perspective. A successful resolution for automating important safety opinions has but to emerge. “At a tactical degree, that is one thing that CISOs and CIOs actually need to determine,” Whitmore stated. “That is all the pieces from the method piece of it to the precise expertise that is going to assist them speed up that.”
-
Belief. Prospects will ask their distributors harder questions to be able to keep belief. Corporations are reluctant to drag the veil again on their AI fashions, lest they offer up aggressive benefit, however that usually leaves prospects with little greater than assurances quite than concrete solutions.
“I perceive there’s numerous IP concerned, the way in which that these fashions are educated … nevertheless it’s very troublesome to onboard these and have a really fulsome understanding and assure that what has been offered in our conversations, even privateness insurance policies, et cetera, is actually occurring behind the scenes,” stated Chris Henderson, CISO at cybersecurity firm Huntress.
-
Higher governance. AI governance goes to be high of thoughts for CISOs within the new yr. “That is actually the place we’re struggling right now as a result of there’s solely actually two or three merchandise on the market that actually present governance in an enterprise to your AI,” Pecha stated.
There are many frameworks accessible for accountable AI use, however merely checking off gadgets on an inventory is not sufficient, he added.
“You may’t simply go depend on a NIST framework that stated an AI must be doing these items. That guidelines is important, however then how do you place in an operational software that validates what knowledge the AI was educated [on]?” Pecha requested.Â
CISOs will want methods to indicate that AI instruments their groups constructed internally and instruments they bring about in from third events are safe and used responsibly, whether or not via ongoing audits or some form of exterior certification.
-
Extra menace modeling. Grimes stated he needs to see extra menace modeling of AI, notably as using agentic AI ramps up. The place are the vulnerabilities? What has been performed to mitigate them? “The distributors that menace mannequin are going to have safer instruments and extra reliable instruments,” Grimes contended.
-
Extra nuance. AI techniques excel at gathering info and simplifying selections for people, however they’ve but to achieve a spot the place they’ll stand in for human decision-making.
“On the finish of the day, it is nonetheless not as correct as a human at figuring out ought to any person be woken up at 2 within the morning due to an occasion that has triggered?” Henderson stated. He added that he wish to see AI instruments transfer past giving binary responses to providing extra nuanced solutions that embrace how sure they’re of a choice or advice.
-
Midmarket options. Pecha stated he hopes AI makes safety extra accessible for small- to medium-sized companies. These companies do not have the safety budgets that enormous enterprises do, however they continue to be a part of the provision chain. “The most important danger now we have right now is small, medium companies should not served by the safety neighborhood properly. They do not have assets. They do not have information, however AI might be the stopgap for that,” he stated.
Whereas the controversy about AI and the way forward for jobs rages, there appears to be some expectation amongst CISOs that AI can be a software to enhance the capabilities of human cybersecurity groups quite than a expertise to interchange them solely.
“I believe they want extensions of their groups quite than replacements of their groups,” stated Henderson. “In case you have a look at AI as one thing that may allow your group to proceed to scale with out including extra our bodies versus changing our bodies, it is going to be the trail to success.”
