
Rafay Launches Serverless Inference Offering


Sunnyvale, CA – May 8, 2025 – Rafay Systems, a cloud-native and AI infrastructure orchestration and management company, announced general availability of the company’s Serverless Inference offering, a token-metered API for running open-source and privately trained or tuned LLMs.

The company said many NVIDIA Cloud Providers (NCPs) and GPU Clouds are already leveraging the Rafay Platform to deliver a multi-tenant, Platform-as-a-Service experience to their customers, complete with self-service consumption of compute and AI applications. These NCPs and GPU Clouds can now deliver Serverless Inference as a turnkey service at no additional cost, enabling their customers to build and scale AI applications quickly, without having to deal with the cost and complexity of building automation, governance, and controls for GPU-based infrastructure.

The global AI inference market is expected to grow to $106 billion in 2025, and to $254 billion by 2030. Rafay’s Serverless Inference empowers GPU Cloud Providers (GPU Clouds) and NCPs to tap into the booming GenAI market by eliminating key adoption barriers: automated provisioning and segmentation of complex infrastructure, developer self-service, rapidly launching new GenAI models as a service, generating billing data for on-demand usage, and more.

“Having spent the last year experimenting with GenAI, many enterprises are now focused on building agentic AI applications that augment and enhance their business offerings. The ability to rapidly consume GenAI models via inference endpoints is key to faster development of GenAI capabilities. This is where Rafay’s NCP and GPU Cloud partners have a material advantage,” said Haseeb Budhani, CEO and co-founder of Rafay Systems.

“With our new Serverless Inference offering, available for free to NCPs and GPU Clouds, our customers and partners can now deliver an Amazon Bedrock-like service to their customers, enabling access to the latest GenAI models in a scalable, secure, and cost-effective manner. Developers and enterprises can now integrate GenAI workflows into their applications in minutes, not months, without the pain of infrastructure management. This offering advances our company’s vision to help NCPs and GPU Clouds evolve from operating GPU-as-a-Service businesses to AI-as-a-Service businesses.”
By offering Serverless Inference as an on-demand capability to downstream customers, Rafay helps NCPs and GPU Clouds address a key gap in the market. Rafay’s Serverless Inference offering provides the following key capabilities to NCPs and GPU Clouds:

  • Seamless developer integration: OpenAI-compatible APIs require zero code migration for existing applications, with secure RESTful and streaming-ready endpoints that dramatically accelerate time-to-value for end customers (see the sketch after this list).

  • Intelligent infrastructure management: Auto-scaling GPU nodes with right-sized model allocation capabilities dynamically optimize resources across multi-tenant and dedicated isolation options, eliminating over-provisioning while maintaining strict performance SLAs.

  • Built-in metering and billing: Token-based and time-based usage tracking for both input and output provides granular consumption analytics, while integrating with existing billing platforms via comprehensive metering APIs and enabling transparent, consumption-based pricing models.

  • Enterprise-grade security and governance: Comprehensive security via HTTPS-only API endpoints, rotating bearer token authentication, detailed access logging, and configurable token quotas per team, business unit, or application satisfies enterprise compliance requirements.

  • Observability, storage, and performance monitoring: End-to-end visibility, with logs and metrics archived in the provider’s own storage namespace, support for backends such as MinIO (a high-performance, AWS S3-compatible object storage system) and Weka (a high-performance, AI-native data platform), and centralized credential management, ensures full infrastructure and model performance transparency.
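To illustrate the OpenAI-compatible integration described in the first bullet, the sketch below shows how a developer might call a provider’s Serverless Inference endpoint using the standard OpenAI Python client. The base URL, model name, and bearer token are placeholders for illustration only, not actual Rafay or provider values.

```python
# Minimal sketch: calling a hypothetical OpenAI-compatible Serverless Inference
# endpoint with the standard OpenAI Python client. The base_url, api_key, and
# model name below are placeholders, not Rafay-specific values.
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example-gpu-cloud.com/v1",  # provider endpoint (placeholder)
    api_key="YOUR_ROTATING_BEARER_TOKEN",                   # token issued by the provider
)

# Streaming chat completion against an open-source model hosted by the provider.
stream = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # example open-source model name
    messages=[{"role": "user", "content": "Summarize serverless inference in one sentence."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Because the endpoint follows the OpenAI API shape, an existing application would only need to point its base URL and credential at the provider’s endpoint, which is what “zero code migration” refers to.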

Rafay’s Serverless Inference offering is available today to all customers and partners using the Rafay Platform to deliver multi-tenant, GPU- and CPU-based infrastructure. The company is also set to roll out fine-tuning capabilities shortly. These new additions are designed to help NCPs and GPU Clouds rapidly deliver high-margin, production-ready AI services while removing complexity.


