Tuesday, September 16, 2025

Risk Actors Weaponize HexStrike AI to Exploit Citrix Flaws Inside a Week of Disclosure


Sep 03, 2025Ravie LakshmananSynthetic Intelligence / Vulnerability

Risk actors try to leverage a newly launched synthetic intelligence (AI) offensive safety device known as HexStrike AI to use not too long ago disclosed safety flaws.

HexStrike AI, in keeping with its web site, is pitched as an AI‑pushed safety platform to automate reconnaissance and vulnerability discovery with an goal to speed up approved purple teaming operations, bug bounty looking, and seize the flag (CTF) challenges.

Per info shared on its GitHub repository, the open-source platform integrates with over 150 safety instruments to facilitate community reconnaissance, net software safety testing, reverse engineering, and cloud safety. It additionally helps dozens of specialised AI brokers which might be fine-tuned for vulnerability intelligence, exploit growth, assault chain discovery, and error dealing with.

Audit and Beyond

However in keeping with a report from Examine Level, risk actors try their palms on the device to achieve an adversarial benefit, trying to weaponize the device to use not too long ago disclosed safety vulnerabilities.

“This marks a pivotal second: a device designed to strengthen defenses has been claimed to be quickly repurposed into an engine for exploitation, crystallizing earlier ideas right into a broadly out there platform driving real-world assaults,” the cybersecurity firm mentioned.

Discussions on darknet cybercrime boards present that risk actors declare to have efficiently exploited the three safety flaws that Citrix disclosed final week utilizing HexStrike AI, and, in some instances, even flag seemingly susceptible NetScaler situations which might be then supplied to different criminals on the market.

Examine Level mentioned the malicious use of such instruments has main implications for cybersecurity, not solely shrinking the window between public disclosure and mass exploitation, but in addition serving to parallelize the automation of exploitation efforts.

What’s extra, it cuts down the human effort and permits for robotically retrying failed exploitation makes an attempt till they turn into profitable, which the cybersecurity firm mentioned will increase the “general exploitation yield.”

“The fast precedence is obvious: patch and harden affected methods,” it added. “Hexstrike AI represents a broader paradigm shift, the place AI orchestration will more and more be used to weaponize vulnerabilities shortly and at scale.”

CIS Build Kits

The disclosure comes as two researchers from Alias Robotics and Oracle Company mentioned in a newly revealed examine that AI-powered cybersecurity brokers like PentestGPT carry heightened immediate injection dangers, successfully turning safety instruments into cyber weapons through hidden directions.

“The hunter turns into the hunted, the safety device turns into an assault vector, and what began as a penetration take a look at ends with the attacker gaining shell entry to the tester’s infrastructure,” researchers Víctor Mayoral-Vilches and Per Mannermaa Rynning mentioned.

“Present LLM-based safety brokers are basically unsafe for deployment in adversarial environments with out complete defensive measures.”

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles

PHP Code Snippets Powered By : XYZScripts.com