Friday, March 14, 2025

Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks


A high-severity security flaw has been disclosed in Meta's Llama large language model (LLM) framework that, if successfully exploited, could allow an attacker to execute arbitrary code on the llama-stack inference server.

The vulnerability, tracked as CVE-2024-50050, has been assigned a CVSS score of 6.3 out of 10.0. Supply chain security firm Snyk, however, has assigned it a critical severity rating of 9.3.

"Affected versions of meta-llama are vulnerable to deserialization of untrusted data, meaning that an attacker can execute arbitrary code by sending malicious data that is deserialized," Oligo Security researcher Avi Lumelsky said in an analysis earlier this week.

The shortcoming, per the cloud security company, resides in a component called Llama Stack, which defines a set of API interfaces for artificial intelligence (AI) application development, including using Meta's own Llama models.

Specifically, it has to do with a remote code execution flaw in the reference Python Inference API implementation, which was found to automatically deserialize Python objects using pickle, a format that has been deemed risky due to the possibility of arbitrary code execution when untrusted or malicious data is loaded using the library.


"In scenarios where the ZeroMQ socket is exposed over the network, attackers could exploit this vulnerability by sending crafted malicious objects to the socket," Lumelsky said. "Since recv_pyobj will unpickle these objects, an attacker could achieve arbitrary code execution (RCE) on the host machine."
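The risk stems from pickle itself rather than anything specific to Llama Stack: whatever callable an object's __reduce__ method returns is executed during unpickling. The sketch below illustrates the unsafe pattern with an assumed local socket address and a deliberately harmless payload standing in for real attacker code; it is not taken from the llama-stack codebase.

```python
import pickle
import zmq

class Payload:
    def __reduce__(self):
        # Runs on the receiving side during unpickling; a real attacker would
        # return something like (os.system, ("<command>",)) here instead.
        return (print, ("code executed during unpickling",))

ctx = zmq.Context()

# "Server" side: a socket that unpickles whatever arrives, as recv_pyobj does.
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://127.0.0.1:5555")

# "Attacker" side: connect to the exposed socket and send a pickled object.
push = ctx.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:5555")
push.send(pickle.dumps(Payload()))

# recv_pyobj calls pickle.loads on the raw bytes, triggering Payload.__reduce__.
pull.recv_pyobj()
```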

Following responsible disclosure on September 24, 2024, the issue was addressed by Meta on October 10 in version 0.0.41. It has also been remediated in pyzmq, a Python library that provides access to the ZeroMQ messaging library.

In an advisory issued by Meta, the company said it fixed the remote code execution risk associated with using pickle as a serialization format for socket communication by switching to the JSON format.
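Meta has not published the patch itself here, but the change the advisory describes, JSON in place of pickle for socket traffic, maps onto pyzmq's built-in JSON helpers. A rough sketch of the safer pattern, with an illustrative endpoint and message fields:

```python
import zmq

ctx = zmq.Context()

# Receiver: only ever reconstructs plain data types (dicts, lists, strings,
# numbers), never executable objects.
receiver = ctx.socket(zmq.PULL)
receiver.bind("tcp://127.0.0.1:5556")

sender = ctx.socket(zmq.PUSH)
sender.connect("tcp://127.0.0.1:5556")

# send_json / recv_json round-trip JSON-serializable data only.
sender.send_json({"task_id": 42, "prompt": "hello"})
msg = receiver.recv_json()  # a plain dict; malformed input raises an error
```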

This is not the first time such deserialization vulnerabilities have been discovered in AI frameworks. In August 2024, Oligo detailed a "shadow vulnerability" in TensorFlow's Keras framework, a bypass for CVE-2024-3660 (CVSS score: 9.8) that could result in arbitrary code execution due to the use of the unsafe marshal module.
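The Keras issue hinged on the same general principle: marshal, unlike JSON, can round-trip Python code objects, so a loader that decodes untrusted bytes with it can end up executing them. A deliberately simplified illustration, not the actual Keras loading path:

```python
import marshal

# Attacker-controlled bytes encoding a Python code object.
payload = marshal.dumps(compile("print('code executed on load')", "<payload>", "exec"))

# A loader that marshal-decodes untrusted bytes and rebuilds the object
# ends up executing attacker code when that object is run.
exec(marshal.loads(payload))
```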

The development comes as security researcher Benjamin Flesch disclosed a high-severity flaw in OpenAI's ChatGPT crawler that could be weaponized to initiate a distributed denial-of-service (DDoS) attack against arbitrary websites.

The issue is the result of incorrect handling of HTTP POST requests to the "chatgpt[.]com/backend-api/attributions" API, which is designed to accept a list of URLs as input, but neither checks whether the same URL appears multiple times in the list nor enforces a limit on the number of links that can be passed as input.


This opens up a scenario where a bad actor could transmit thousands of links within a single HTTP request, causing OpenAI to send all of those requests to the victim site without attempting to limit the number of connections or prevent issuing duplicate requests.

Depending on the number of links transmitted to OpenAI, this provides a significant amplification factor for potential DDoS attacks, effectively overwhelming the target site's resources. The AI company has since patched the problem.
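OpenAI has not detailed its fix, but the two checks Flesch found missing, deduplication of the URL list and a cap on its length, would look roughly like the following server-side validation (the function name and the limit of 10 are assumed values for illustration):

```python
MAX_URLS = 10  # assumed limit, chosen only for illustration

def sanitize_attribution_urls(urls: list[str]) -> list[str]:
    # Drop duplicates while preserving order, then cap the list length,
    # so one request can no longer fan out into thousands of fetches.
    deduped = list(dict.fromkeys(urls))
    if len(deduped) > MAX_URLS:
        raise ValueError(f"too many URLs: {len(deduped)} > {MAX_URLS}")
    return deduped
```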

"The ChatGPT crawler can be triggered to DDoS a victim website via an HTTP request to an unrelated ChatGPT API," Flesch said. "This defect in OpenAI software will spawn a DDoS attack on an unsuspecting victim website, utilizing multiple Microsoft Azure IP address ranges on which the ChatGPT crawler is running."

The disclosure also follows a report from Truffle Security that popular AI-powered coding assistants "recommend" hard-coding API keys and passwords, a risky piece of advice that could mislead inexperienced programmers into introducing security weaknesses in their projects.

"LLMs are helping perpetuate it, likely because they were trained on all the insecure coding practices," security researcher Joe Leon said.
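For context, the anti-pattern the report describes and its conventional alternative look like this (the environment variable name is illustrative, not taken from the report):

```python
import os

# Anti-pattern reportedly suggested by assistants: the secret lives in source control.
# API_KEY = "sk-example-not-a-real-key"

# Conventional alternative: read the secret from the environment (or a secrets
# manager) at runtime so it never appears in the codebase.
API_KEY = os.environ.get("SERVICE_API_KEY")  # assumed variable name
if API_KEY is None:
    raise RuntimeError("SERVICE_API_KEY is not set")
```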

News of vulnerabilities in LLM frameworks also follows research into how the models could be abused to empower the cyber attack lifecycle, including installing the final-stage stealer payload and command-and-control.


"The cyber threats posed by LLMs are not a revolution, but an evolution," Deep Instinct researcher Mark Vaitzman said. "There is nothing new there, LLMs are just making cyber threats better, faster, and more accurate on a larger scale. LLMs can be effectively integrated into every phase of the attack lifecycle with the guidance of an experienced driver. These abilities are likely to grow in autonomy as the underlying technology advances."

Recent research has also demonstrated a new method called ShadowGenes that can be used for identifying model genealogy, including its architecture, type, and family, by leveraging its computational graph. The approach builds on a previously disclosed attack technique dubbed ShadowLogic.

"The signatures used to detect malicious attacks within a computational graph could be adapted to track and identify recurring patterns, called recurring subgraphs, allowing them to determine a model's architectural genealogy," AI security firm HiddenLayer said in a statement shared with The Hacker News.

"Understanding the model families in use within your organization increases your overall awareness of your AI infrastructure, allowing for better security posture management."
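HiddenLayer has not released code alongside this description, but the general idea of treating recurring operator subgraphs as architectural signatures can be sketched as a simple n-gram count over an ONNX model's node list (the helper name and n-gram length below are illustrative assumptions):

```python
from collections import Counter

import onnx

def op_ngrams(model_path: str, n: int = 3) -> Counter:
    # Walk the graph's nodes and count length-n runs of operator types;
    # repeated runs such as ("MatMul", "Add", "Softmax") hint at attention
    # blocks and, in aggregate, at a model family's architecture.
    nodes = onnx.load(model_path).graph.node
    ops = [node.op_type for node in nodes]
    return Counter(tuple(ops[i:i + n]) for i in range(len(ops) - n + 1))

# Example: op_ngrams("model.onnx").most_common(5)
```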
