WEKApod Nitro and WEKApod Prime Offer Customers Flexible, Affordable, Scalable Solutions to Fast-Track AI Innovation
WekaIO (WEKA), the AI-native data platform company, unveiled two new WEKApod™ data platform appliances: the WEKApod Nitro for large-scale enterprise AI deployments and the WEKApod Prime for smaller-scale AI deployments and multi-purpose high-performance data use cases. WEKApod data platform appliances provide turnkey solutions combining WEKA® Data Platform software with best-in-class high-performance hardware to deliver a powerful data foundation for accelerated AI and modern performance-intensive workloads.
The WEKA Data Platform delivers scalable AI-native data infrastructure purpose-built for even the most demanding AI workloads, accelerating GPU utilization and retrieval-augmented generation (RAG) data pipelines efficiently and sustainably while providing efficient write performance for AI model checkpointing. Its advanced cloud-native architecture enables ultimate deployment flexibility, seamless data portability, and robust hybrid cloud capability.
WEKApod delivers all the capabilities and benefits of WEKA Data Platform software in an easy-to-deploy appliance ideal for organizations leveraging generative AI and other performance-intensive workloads across a broad spectrum of industries. Key benefits include:
WEKApod Nitro: Delivers exceptional performance density at scale, providing over 18 million IOPS in a single cluster, making it ideal for large-scale enterprise AI deployments and AI solution providers training, tuning, and inferencing LLM foundation models. WEKApod Nitro is certified for NVIDIA DGX SuperPOD™. Capacity starts at half a petabyte of usable data and is expandable in half-petabyte increments.
WEKApod Prime: Seamlessly handles high-performance data throughput for HPC, AI training, and inference, making it ideal for organizations that want to scale their AI infrastructure while maintaining cost efficiency and balanced price-performance. WEKApod Prime offers flexible configurations that scale up to 320 GB/s read bandwidth, 96 GB/s write bandwidth, and up to 12 million IOPS for customers with less extreme data processing requirements. This allows organizations to customize configurations with optional add-ons, so they only pay for what they need and avoid overprovisioning unnecessary components. Capacity starts at 0.4PB of usable data, with options extending up to 1.4PB.
“Accelerated adoption of generative AI applications and multi-modal retrieval-augmented generation has permeated the enterprise faster than anyone could have predicted, driving the need for affordable, highly performant, and flexible data infrastructure solutions that deliver extremely low latency, drastically reduce the cost per tokens generated, and can scale to meet the current and future needs of organizations as their AI initiatives evolve,” said Nilesh Patel, chief product officer at WEKA. “WEKApod Nitro and WEKApod Prime offer unparalleled flexibility and choice while delivering exceptional performance, energy efficiency, and value to accelerate their AI projects wherever and everywhere they need them to run.”