WEKA unwrapped version 3.14 of its eponymous parallel file system yesterday, appropriately enough on Pi Day.
In a blog post announcing the update, marketing veep Colin Gallagher waxed lyrical about how Pi represents “infinite possibilities and challenges us to find meaning in seemingly random numbers.”
How does this relate to the latest update of the platform? Well, there are obvious parallels to AI and data science. Gallagher said the 3.x series is about extending interoperability and choice, and better resource sharing as customers build out their AI deployments.
But what’s really holding up AI deployments right now is the supply chain crisis. Gallagher said some customers were left facing “extended” waits for servers last year because of NIC shortages.
So Gallagher said WEKA was adding support for three new cards: Mellanox ConnectX-6 Dx, Intel’s E810 series, and Broadcom 57810s. This should give customers more variety and flexibility when it comes to choosing servers, and help reduce lead times, the company said.
The WEKA Data Platform has also been updated “to take full advantage of more and more CPU cores by using multiple containers running our WekaFS software to take full advantage of our partners’ increasingly powerful servers.”
“It also provides the means to increase scalability to 8000 nodes, improving on how a large cluster can be created,” Gallagher wrote.
Gallagher said the update adds more granular controls for customers running multiple AI workloads on a WEKA system, in the shape of client-side QoS to throttle bandwidth from any client into the WEKA system.
“When you combine this QoS functionality with granular capacity quotas and organizational role restrictions, you have a powerful set of tools to help manage resources on the Weka system,” he said.
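Client-side bandwidth throttling of this sort is classically built on a token bucket: each client accrues “tokens” at its allowed rate and spends them on I/O, with a burst allowance on top. The sketch below is a generic illustration of that idea, not WEKA’s actual implementation or API; the class and parameter names are hypothetical.

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter, a common way to throttle
    per-client bandwidth (hypothetical sketch, not WEKA's API)."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # sustained throughput cap
        self.capacity = burst_bytes    # maximum burst size
        self.tokens = burst_bytes      # start with a full bucket
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if an nbytes I/O may proceed now, else False."""
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# A client capped at 100 MB/s with a 10 MB burst allowance.
bucket = TokenBucket(rate_bytes_per_s=100_000_000, burst_bytes=10_000_000)
print(bucket.try_send(8_000_000))   # within the burst: allowed
print(bucket.try_send(8_000_000))   # burst exhausted: throttled
```

A real system would layer the capacity quotas and role restrictions Gallagher mentions on top of a limiter like this, applying a separate bucket per client or per organization.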
The update also extends WEKA’s multiprotocol support, with interoperability for S3 workloads, alongside the ability to share the same filesystem across NFS, SMB, and POSIX.