CoreWeave rents out Nvidia A100 and H100 GPUs to its customers through CoreWeave Cloud, which is built for large-scale GPU workloads: more than 3,500 H100 Tensor Core GPUs are available in one of the largest HGX clusters installed anywhere. The GPU infrastructure is backed by x86 compute, storage, and networking resources, and is actively used to run generative AI workloads.
CEO and co-founder Michael Intrator said in a statement that CoreWeave will “partner with VAST Data to deliver a multi-tenant and zero-trust environment purpose-built for accelerated compute use cases like machine learning, VFX and rendering, Pixel Streaming and batch processing, that’s up to 35 times faster and 80 percent less expensive than legacy cloud providers. This partnership is rooted in a deep technical collaboration that will push the boundaries of data-driven GPU computing to deliver the world’s most optimized AI cloud platform.”
VAST Data co-founder Jeff Denworth said: “The deep learning gold rush is on, and CoreWeave is at the epicenter of the action.”
Until now, CoreWeave Cloud Storage Volumes have been built on Ceph with triple replication, distributed across multiple servers and datacenter racks. The service offers both disk drive and all-flash NVMe tiers, and supports file and S3 object access protocols.
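For context, triple replication with rack-aware placement is a standard Ceph configuration choice rather than anything CoreWeave-specific. A minimal sketch of such a setup (the values shown are illustrative defaults, not CoreWeave's actual settings):

```ini
# ceph.conf fragment: store three copies of every object,
# and keep serving I/O as long as at least two copies remain
[global]
osd_pool_default_size = 3
osd_pool_default_min_size = 2
```

A CRUSH placement rule with a rack-level failure domain (`step chooseleaf firstn 0 type rack`) then puts each of the three replicas in a different datacenter rack, so a volume survives the loss of an entire rack.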
This is a significant order for VAST: CoreWeave will deploy the VAST arrays to store, manage, and secure hundreds of petabytes of data for generative AI, high performance computing (HPC), and visual effects (VFX) workloads.
CoreWeave was founded in 2017 as an Ethereum mining company and pivoted to AI in 2019. It raised $421 million in a B-round last April and another $200 million in May, plus $2.3 billion in debt financing in August to expand its infrastructure, with the debt secured against its GPUs as collateral. It reckons it will have 14 datacenters in place across the USA by the end of the year and earn $500 million in revenue, with $2 billion already contracted for 2024.
Nvidia is an investor in CoreWeave and also in VAST, which is Nvidia SuperPOD-certified.
A statement from Renen Hallak, founder and CEO of VAST Data, was complimentary to CoreWeave: “Since our earliest days, VAST Data has had a single vision of building an architecture that could power the needs of the most demanding cloud-scale AI applications. We could not imagine a better cloud platform to realize this vision than what we’re creating with CoreWeave. We are humbled and honored to partner with the CoreWeave team to push the boundaries of modern AI computing and to build the infrastructure that will serve as the foundation of tomorrow’s AI-powered discoveries.”
CoreWeave said its supercomputing-class infrastructure trained the new MLPerf GPT-3 175B large language model (LLM) in under 11 minutes – more than 29x faster and 4x larger than the next best competitor. A VAST Data blog, soon to be live on its site, provides more background info.