
Frore goes insane in the membrane with cooling tech

Frore Systems showed off a 64TB U.2 SSD at Flash Memory Summit 2023, cooled by its AirJet Mini modules, which remove 40W of heat without using bulky heat sinks, fans, or liquid cooling.

AirJet is a self-contained, solid-state, active heat sink module that’s silent, thin, and light. It measures 2.8mm x 27.5mm x 41.5mm and weighs 11g. It removes 5.2W of heat at a 21dBA noise level, while consuming a maximum 1W of power. AirJet Mini generates 1,750 Pascals of back pressure, claimed to be 10x higher than a fan, enabling thin and dust-proof devices that can run faster because excess heat is taken away.

Think of the AirJet Mini as a thin wafer or slab that sits on top of a processor or SSD and cools it by drawing in air, passing it over a heat spreader physically touching the device, then ejecting it through a separate outlet. Alternatively, AirJet Mini can be connected to its target via a copper heat exchanger.

Inside the AirJet Mini are tiny membranes that vibrate ultrasonically and generate the necessary airflow without needing fans. Air enters through top surface vents and is moved as pulsating jets through the device and out through a side-mounted spout.

Cross-sectional AirJet Mini diagram from Frore Systems

AirJet is scalable, with additional heat removed by adding more wafers or chips. Each chip removes 5W of heat, two chips can remove 10W, three chips 15W, and so on. A more powerful AirJet Pro removes more heat – 10.5W of heat at 24dBA, while consuming a maximum 1.75W of power.
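Because cooling scales linearly per module, sizing a deployment is simple arithmetic. A minimal sketch using the per-module figures quoted above (the function name is ours, not Frore's):

```python
import math

# Heat removed per module, from Frore's published figures (watts)
AIRJET_MINI_W = 5.0
AIRJET_PRO_W = 10.5

def modules_needed(heat_load_w: float, per_module_w: float = AIRJET_MINI_W) -> int:
    """Number of cooling modules needed to remove a given heat load."""
    return math.ceil(heat_load_w / per_module_w)

# The 64TB U.2 SSD demo dissipates 40W: eight AirJet Minis, or four Pros
print(modules_needed(40))                # 8
print(modules_needed(40, AIRJET_PRO_W))  # 4
```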

AirJet can be used to cool thin and light notebook processors or SSDs, and enable them to run faster without damage. Running faster produces more heat, which the AirJet Mini removes.

OWC Mercury Pro in its 3.5-inch carrier, with 8 x M.2 SSDs inside, 4 on top, 4 below, each fitted with an AirJet Mini

OWC built a demonstration portable SSD-based storage device in a 3.5-inch carrier, which it exhibited at FMS 2023. Inside the Mercury Pro casing are 8 x 8TB M.2 2280 format SSDs, each with an attached AirJet Mini. Its bandwidth is between 2.2GBps and 2.6GBps for sequential writes. We don’t know how it would perform without the Frore cooling slabs, though.

Speed increases may not be the only benefit, however, as a similar-sized Mercury Pro U.2 Dual has an audible fan. Frore’s cooling does away with the noise and needs less electricity.

We could see notebook, gaming system and portable SSD device builders using Frore membrane cooling technology so their devices can be more powerful without needing noisy fans, bulky heatsinks or liquid cooling. 

OWC has not committed to making this a product. Get a Frore AirJet Mini datasheet here.

NetApp revenue drops for yet another quarter

NetApp’s financial performance reflected the challenging economic climate for a third consecutive quarter, with revenues down year-over-year despite surpassing the company’s own guidance.

In its first fiscal quarter of 2024, ended July 28, revenues were down 10 percent year-over-year to $1.43 billion. This was, however, above its forecast midpoint. NetApp reported profit of $149 million, a 30 percent fall from the previous year. The hybrid cloud segment generated revenues of $1.28 billion, a 12.3 percent decrease, while public cloud revenues stood at $154 million, a 16.7 percent increase. The public cloud annual run rate (ARR) saw a modest 6 percent rise to $619 million, but that growth has some way to go before it offsets hybrid cloud revenue declines – $180 million this quarter.

CEO George Kurian said: “We delivered a solid start to fiscal year 2024 in what continues to be a challenging macroeconomic environment. We are managing the elements within our control, driving better performance in our storage business, and building a more focused approach to cloud.”

The market was affected by the challenging economic situation with muted demand and lengthened sales cycles. Billings dropped 17 percent annually to $1.3 billion. All-flash array sales were presented in ARR terms as $2.8 billion, a 7 percent drop on a year ago. Positive momentum surrounds NetApp’s recently introduced AFF C-Series array, which uses more affordable QLC flash. The product is pacing to be the quickest-growing all-flash system in the company’s history and NetApp expects AFA sales to rise. NetApp launched its SAN-specific ASA A-Series ONTAP systems in May and Kurian hopes they will “drive share gains in the $18 billion SAN market.”

Financial summary

  • Operating cash flow: $453 million, up 61 percent year-over-year
  • EPS: $0.69 vs $0.96 a year ago
  • Share repurchases and dividends: $506 million
  • Cash, cash equivalents, and investments: $2.98 billion

A concerning note is that NetApp’s product sales have been on a downward trend for the past five quarters. At present, they stand at $590 million, marking a 25 percent year-on-year decline. Service revenues witnessed a 5 percent rise to $842 million but fell short of offsetting the decline in product sales.

Addressing the performance of NetApp’s all-flash and public cloud revenues, Kurian mentioned “focusing our enterprise sellers on the flash opportunity and building a dedicated model for cloud … The changes have been well received, are already showing up in pipeline expansion, and should help drive top line growth in the second half.” 

Kurian said flash revenues were particularly good last year because NetApp benefited “from elevated levels of backlog that we shipped in the comparable quarter last year. If you remove that backlog, flash actually grew year-on-year this quarter.” NetApp is second in the AFA market behind Dell, according to IDC. The CEO expects NetApp’s “overall flash portfolio to grow as a percentage of our business through the course of the year,” with AFA sales growing faster than hybrid flash/disk array sales.

Public cloud is a particular problem, with Kurian saying: “I want to acknowledge our cloud results have not been where we want them to be and assure you we are taking definitive action to hone our approach and get back on track … First party storage services, branded and sold by our cloud partners, position us uniquely and represent our biggest opportunity.” That means AWS, Azure, and Google, with news coming about the NetApp-Google offerings.

He added in the earnings call that “subscription is where we saw a challenge, both a small part of cloud storage subscription as well as CloudOps and we are conducting a review” to find out more clearly where things went wrong.

NetApp acknowledged the surge in interest in generative AI and said it was well represented in customers’ projects.

NetApp and Pure Storage

NetApp has been announcing its all-flash array storage ARR numbers for the past seven quarters. That gives us a means of comparing them to Pure Storage’s quarterly revenues, either by dividing NetApp’s AFA numbers by four to get a quarterly number, or multiplying Pure’s quarterly revenues by four to get an ARR. We chose the former, normalized it for Pure’s financial quarters, and charted the result: 
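The normalization itself is straightforward arithmetic. A sketch of the two conversions described above (function names are ours):

```python
def arr_to_quarterly(arr: float) -> float:
    """Convert an annual run rate into an implied quarterly revenue figure."""
    return arr / 4

def quarterly_to_arr(quarterly_revenue: float) -> float:
    """Annualize a single quarter's revenue into a run rate."""
    return quarterly_revenue * 4

# NetApp's $2.8bn AFA ARR implies roughly $700m of AFA revenue per quarter,
# which can then be compared against Pure's reported quarterly revenues
print(arr_to_quarterly(2800))  # 700.0 ($m)
```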

By our reckoning Pure’s revenues, based on all-flash sales, passed NetApp’s last quarter, but NetApp has regained the lead with its latest quarter’s revenues.

NetApp was asked in the earnings call about Pure’s assertion “that there won’t be any new spinning disk drives manufactured in five years.” Kurian disputed this, saying: “When you cannot support a type of technology, like our competitors cannot, then you have to throw grenades and say that that technology doesn’t exist because you frankly can’t support it.”

Next quarter’s revenues are expected to be $1.53 billion, with a possible deviation of $75 million. This represents an estimated 8 percent annual decline. Kurian said NetApp expects to see “top line growth in the back half of FY’24,” meaning the second and third quarters should see a revenue uptick. That should be mostly due to increased AFA sales with some contribution from the public cloud business.

Infinidat leads in GigaOm ransomware protection reports

Analyst GigaOm has released two Sonar reports looking at the state of ransomware protection in block and file-based primary storage, with Infinidat first in block and fifth in file, the highest-placed vendor overall.

Sonar reports provide overviews of suppliers in an emerging product technology class, placing them in a semicircular space divided into feature play and platform play quarter circles, overlaid by two concentric half rings. The outer ring is for challengers and the inner one for leaders. The closer suppliers are placed to the central point of the semicircle, the more innovative they are. Suppliers are further classed, in terms of their product and market development, as either fast movers (black arrow) or forward movers. It all becomes clear when you look at a Sonar diagram, so here is the one for primary block storage suppliers:

Block Sonar featuring Infinidat

There are 10 block storage suppliers, the top three being Infinidat, Dell, and Pure Storage with platform plays. IBM and Hitachi Vantara, together with challengers Nutanix and StorONE, form a more dispersed group of platform players, with HPE, NetApp, and DDN forming a feature play threesome.

File-based ransomware is the most pervasive and uses a combination of techniques to remain unnoticed and spread silently. Block-based ransomware is less common, but encrypts entire data volumes, making recovery much harder than file-based attacks. The suppliers are ranked in terms of their timely identification, alerting, and mitigation of attacks.

The report authors, Max Mortillaro and Arjan Timmerman, say:

  • Infinidat offers a complete and balanced ransomware protection solution with InfiniSafe technology included, which has been enhanced this year with InfiniSafe Cyber Detection, an ML-based, petabyte-class, proactive ransomware detection solution. 
  • Dell Technologies’ PowerMax solution offers solid ransomware protection capabilities, while PowerStore and PowerFlex are gradually being improved by inheriting some of PowerMax’s features. These are augmented by Dell CloudIQ’s AIOps platform’s proactive detection capabilities, propelling the company forward in the Sonar.
  • Pure Storage provides ransomware protection through its SafeMode snapshots (now enabled by default) and strong multifactor authentication (MFA), while it has added security posture features to its solution. It also delivers a ransomware recovery SLA, an add-on service that helps customers recover faster in case of an attack.

The file report focuses on NAS offerings and it looks at 16 suppliers:

File Sonar featuring Infinidat

There are four closely placed fast-moving leaders – RackTop, Cohesity, NetApp, and Infinidat – with a platform-focused approach, and then CTERA with a more feature-focused product. It’s in front of slower-moving Nasuni, also a leader, and challenger Panzura, which is crossing into the leaders’ area. Nutanix is the trailing platform play leader.

Dell, Pure, and StorONE are platform-oriented challengers while Hitachi Vantara and WEKA trail them with less of a platform focus. We see DDN, IBM, and Hammerspace as feature play-oriented challengers.

The report authors note:

  • RackTop’s system intertwines file storage and data security within its BrickStor SP software-defined solution, embedding a comprehensive set of ransomware protection capabilities delivered through a zero-trust approach. 
  • Infinidat offers a complete and balanced ransomware protection solution with InfiniGuard CyberRecovery, which has been enhanced this year with InfiniSafe Cyber Detection, an ML-based, petabyte-class proactive ransomware detection solution. 
  • Cohesity continues to deliver on the cyber resiliency promise with proactive ML-based detection, on-premises and cloud-based immutability capabilities, and strong zero-trust and anti-tampering features, augmented with data protection and data management features inherent in the platform. 
  • NetApp delivers a strong set of ransomware protection capabilities at multiple layers, while also working on rationalizing and better integrating its portfolio of cyber resiliency features. 
  • Nutanix Data Lens offers a commendable SaaS-based data security solution for ransomware resilience and global data visibility with features such as proactive threat containment and alerting capabilities.
  • CTERA recently released Ransom Protect, a homegrown AI-based proactive ransomware detection solution that completes the company’s cyber resiliency feature set. 
  • Nasuni’s ransomware protection includes fast recovery, mitigation with plans for automated recovery orchestration, and a proactive ransomware detection solution that is being actively improved.

There is much more detail on each supplier and its offerings in the two GigaOm reports than we have highlighted here. You can download the two reports from Infinidat’s website – block Sonar here and file Sonar here – to pore through the details at your leisure.

Project Monterey: No word yet on Broadcom backing

Project Monterey, the joint venture between Dell, VMware, and Nvidia to put vSphere on a DPU (data processing unit) to offload storage and networking tasks from a host x86 CPU, appears to be in the R&D waiting room, with wannabe VMware acquirer Broadcom not publicly confirming support for it and VMware itself not offering even half-hearted backing.

Started in 2020 by VMware and Nvidia, the Monterey initiative was introduced as a new security model that offloads hypervisor, networking, security, and storage tasks from the CPU to the DPU.

Yet since Broadcom confirmed its intent to buy VMware, the fate of Project Monterey has been called into question. The EU Commission previously looked into the competitive implications of the acquisition and, among its concerns, said in December: “In 2020, VMware launched Project Monterey with three SmartNICs sellers (NVIDIA, Intel and AMD Pensando). Broadcom may decrease VMware’s involvement in Project Monterey to protect its own NICs revenues. This could hamper innovation to the detriment of customers.”

Fast-forward to July and the EC conditionally approved the takeover of VMware, saying: ”Broadcom would have no economic incentive to hinder the development of SmartNICs by other providers by decreasing VMware’s involvement in Project Monterey, an ongoing cooperation with three other SmartNICs sellers (NVIDIA, Intel and AMD Pensando), as such foreclosure would not be profitable.”

However, B&F asked Broadcom for its view on Project Monterey this month and a company spokesperson said: “I’m afraid Broadcom can’t answer this one,” which isn’t exactly a glowing confirmation of its involvement.

Others were more definitive. Dell told us: “We plan to continue our VMware collaboration to support vSphere Distributed Services Engine (previously known as Project Monterey) with our latest generation VxRail systems, which provides the most flexible configurations of any generation to date. We have enabled configurations for a variety of use cases including GPUs, DPUs and increased storage density with rear drives.”

VMware has a microsite for the vSphere Distributed Services Engine (vDSE) with text, a video, and a downloadable ebook. The intent is to provide supercharged workload performance with the DPU’s host server infrastructure offload functions. The structure behind this is to have a vSphere ESXi hypervisor on the DPU run VMware infrastructure app services: vSphere Distributed switch, NSX Networking, and NSX Distributed Firewall, storage services and infrastructure management, for example.

A diagram shows how these are related:

The future of Project Monterey

The text says that vDSE’s NSX Distributed Firewall is available as a tech preview, and as a beta feature in NSX 4. The ebook also notes that bare metal Windows and Linux OSes on the DPU, plus Storage Services and Infrastructure Management, are not available in vSphere 8. Presumably a future vSphere release will be needed to bring these functionalities to the DPU.

The ebook discusses a vendor ecosystem comprising Dell, Intel, HPE, Lenovo, Nvidia, Pensando, and VMware.


Substantial involvement. But recent comments from VMware at VMware Explore in Las Vegas cast a shadow over this: the port of ESXi to Arm for the BlueField system was experimental. VMware CTO Kit Colbert said: “As we look forward we are continuing to evaluate supporting the mainboard processor.” 

All CEO Raghu Raghuram would say is that VMware is considering taking its virtual storage and networking tech to Arm. In other words, there is no current commitment to keep supporting the vDSE software stack on BlueField DPUs or other Arm-powered DPUs and SmartNICs.

Unlocking the potential of multicloud

SPONSORED POST: Imagine a stable, unified infrastructure platform and future-proofed storage capacity that shrinks the configuration overhead, minimizes downtime, and gives the IT team more time to anticipate and proactively address potential problems before they occur, rather than being dragged into out-of-hours support issues.

Sound too good to be true? Well it’s not, according to this Modern Multicloud webinar from Dell – Achieve Greater Cost Control and IT Agility by Leveraging Dell APEX. In it you’ll hear Chris Wenzel, senior Presales manager at Dell Technologies, talk about how organizations across the world are finding ways to address their rising IT costs using unified infrastructure solutions.

Proving that point are Neil Florence, Head of Systems and Infrastructure for UK construction and engineering company NG Bailey, and his colleague Stephen Firth, NG Bailey’s Infrastructure Manager. They outline how the company has worked hard over the last few years to transform its infrastructure and get its business onto a stable platform, having updated all of its core processes to help simplify its operations.

NG Bailey was accelerating the rate of SaaS adoption but worried that its traditional three tier approach to datacenter infrastructure wouldn’t fit the new model or help the business grow. To that end, and having taken the decision not to move everything into the public cloud due to associated cost risks, a natural evolution towards a multicloud environment was already underway.

At that point NG Bailey sought help and advice from Dell Technologies and Constor Solutions, a Dell Technologies IT solutions provider, including a strategic session at its Innovation Centre in Limerick which helped it understand how different Dell solutions could help the organization.

You can hear Neil and Stephen explain where Dell APEX Flex on Demand fits in helping to bring cost-effective unified storage and data protection controls across multiple clouds – whether Microsoft Azure, AWS, Google Cloud or others – and whatever the applications and data they happen to host.

Watch the video here.

Learn more about the business value of Dell APEX here.

Sponsored by Dell.

50TB IBM tape drive more than doubles LTO-9 capacity

IBM has announced its latest TS1170 tape drive with 50TB cartridges, more than 2.5 times larger than LTO-9 cartridges.

IBM, which manufactures the LTO tape drives sold by itself, HPE, and Quantum, has its own proprietary tape formats used in its TS1100 series drives and Jx format media. The latest TS1170 drive supports 50TB JF media cartridges with 150TB compressed capacity through 3:1 compression. The prior TS1160, announced in 2018, supported JE media with capacities of 20TB raw and 60TB compressed.

IBM TS1170 tape drive
IBM TS1170 tape drive

The TS1170 drive operates at 400MBps raw throughput and supports both 12Gbps SAS and 16Gbps Fibre Channel connectivity. It is also LTFS-ready. However, JF media is not backwards-compatible with the earlier JE format.

The new tape cartridge media, also called 3592 70F, uses Strontium Ferrite particle technology. Fujifilm has demonstrated a 580TB raw capacity tape using Strontium Ferrite particles so there is a lot of runway for future capacity increases.

LTO-9 tapes hold 18TB raw and 45TB compressed, less than IBM’s JE format tapes. The LTO tape roadmap has a coming LTO-10 specification with 36TB raw/90TB compressed capacity. Where IBM uses a 3:1 compression ratio, the LTO group uses 2.5:1. An IBM TS4500 tape library using JF cartridges will be able to hold the same amount of data as an LTO-9-based library with less than a third as many tapes, thanks to IBM’s larger cartridge capacity and greater compression. Even when LTO-10 tape technology arrives, the TS4500 will still have better density in terms of TB per cartridge.
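The cartridge-count arithmetic works out as follows, using the raw capacities and compression ratios quoted above (a sketch; the 9PB example dataset and variable names are ours):

```python
import math

# Compressed capacity per cartridge (TB), from the figures above
JF_TB = 50 * 3        # IBM JF: 50TB raw at 3:1 -> 150TB
LTO9_TB = 18 * 2.5    # LTO-9: 18TB raw at 2.5:1 -> 45TB
LTO10_TB = 36 * 2.5   # LTO-10 (planned): 36TB raw at 2.5:1 -> 90TB

def cartridges_needed(dataset_tb: float, per_cartridge_tb: float) -> int:
    """Whole cartridges required to hold a compressed dataset."""
    return math.ceil(dataset_tb / per_cartridge_tb)

# A 9PB (9,000TB) compressed dataset:
print(cartridges_needed(9000, JF_TB))    # 60 JF cartridges
print(cartridges_needed(9000, LTO9_TB))  # 200 LTO-9 cartridges
print(cartridges_needed(9000, LTO10_TB)) # 100 LTO-10 cartridges
```

The 200-vs-60 comparison is where the "less than a third as many tapes" figure comes from.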

The LTO (Linear Tape-Open) organization was founded to provide an open industry standard format to replace various proprietary tape formats such as Quantum’s S-DLT and DLT. But since only IBM makes LTO tape drives and these drives use media from Sony and Fujifilm, IBM has a lock on the LTO tape format.

LTO group members HPE and Quantum are dependent upon IBM and can do nothing to get LTO tape cartridge capacity up to the IBM TS1170/JF level unless they develop their own drives. If they do this, they may have to lose some backwards-compatibility. With IBM’s dominant presence in the LTO space, competitors would need significant investment to establish a foothold.

Graid enhances RAID card speed with software update

Graid, the GPU-powered RAID card startup, says it has accelerated its RAID card speed following a software update.

The company’s SupremeRAID card uses on-board GPUs for RAID parity calculations and interfaces with a host server through the PCIe bus. The SR1010 model employs a PCIe gen 4 connection to a host. As the PCIe per-lane rate doubles each generation – from gen 3’s 8GT/s to gen 4’s 16GT/s and gen 5’s 32GT/s – the RAID card’s IO speed rises correspondingly. The latest v1.5 software update from Graid introduces support for PCIe gen 5.
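The generation-to-generation doubling is per lane, and a full x16 link scales accordingly. A rough sketch of the arithmetic (the per-lane rates are the standard PCIe figures; gen 3 onward uses 128b/130b encoding, so roughly 98.5 percent of the raw rate carries payload):

```python
# Per-lane transfer rate in GT/s for each PCIe generation
GT_PER_LANE = {3: 8, 4: 16, 5: 32}

def link_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Approximate usable one-way link bandwidth in Gbps.

    Applies the 128b/130b encoding overhead used by PCIe gen 3 and later.
    """
    return GT_PER_LANE[gen] * lanes * (128 / 130)

for gen in (3, 4, 5):
    print(f"PCIe gen {gen} x16: {link_bandwidth_gbps(gen):.0f} Gbps")
# PCIe gen 3 x16: 126 Gbps
# PCIe gen 4 x16: 252 Gbps
# PCIe gen 5 x16: 504 Gbps
```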

Leander Yu.

Leander Yu, Graid CEO and president, said: “Gone are the days of IO bottlenecks … Customers investing in NVMe along with PCIe gen 5 infrastructure will experience unparalleled performance when deploying SupremeRAID to protect their data, giving customers the perfect platform for AI/ML, IoT, video processing, and other performance-hungry business applications.”

The software supports RAID levels 0/1/10/5/6 plus JBOD, while the core software license supports up to 32 native NVMe drives. v1.5 adds support for eight drive groups, up from four, better random IO on AMD servers, and support for Oracle Linux.
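At its core, the RAID 5 parity the GPU computes is a stripe-wide XOR, which is also what lets any single lost drive be rebuilt from the survivors. A minimal illustration in ordinary Python rather than GPU code:

```python
from functools import reduce

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks together byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks in a stripe, plus their computed parity block
data = [b"\x01\x02", b"\x10\x20", b"\x04\x08"]
parity = xor_blocks(data)  # b'\x15\x2a'

# If the drive holding data[1] fails, XOR of the survivors
# and the parity block recovers its contents
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```

RAID 6 adds a second, independently computed parity block so two simultaneous drive failures can be survived; the XOR principle is the same.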

The updated software enables the card to deliver up to 28 million random read IOPS and a two-fold increase in sequential read bandwidth when attached to a host with dual 96-core AMD EPYC 9654 CPUs, 384GB of memory, and 24 x Kioxia CM7 NVMe SSDs, in RAID 5 and 6 configurations. A full performance spec table shows this and previous speed results:

Graid specs
B&F Table

Alternative RAID hardware cards have lower performance. Broadcom’s MegaRAID 9600 achieves up to 6.4 million random read IOPS in comparison.

At present, Graid offers support for both Linux and Windows hosts. However, current data indicates Windows performance lags behind Linux when deploying the SupremeRAID card. This disparity is expected to decrease as Graid promises enhanced functionality for Linux and improved performance for Windows server hosts in an upcoming software release.

Graid performance on Windows/Linux
Graid table

SupremeRAID software v1.5 supports both the SR1010 and SR1000 cards. This update will be accessible to all Graid customers as a complimentary upgrade, irrespective of their procurement channel.

TD SYNNEX gets into data migration transport biz

Distributor TD SYNNEX has a physical data migration offering using a Western Digital flash server chassis and MinIO object storage.

This is quite different from the software-defined data migration services of suppliers such as Cirrus Data, Datadobi, Data Dynamics, and Komprise, which rely on network data transmission. It is akin to Seagate’s Lyve Drive Mobile Array offering, with its six drive bays for physically transporting data on the drives.

Matt Dyenson, SVP, Product Management at TD SYNNEX said: “Speed and efficiency are crucial to avoiding system downtime and, consequently, lost revenue during data migration, which can be a costly, frustrating and risky process for any organization.”

The SYNNEX service is based on a rental deal to physically migrate data using Western Digital’s Ultrastar Edge transportable server, which comes in a wheeled transport case. This has a 2RU chassis containing 40 CPU cores (2 x 20-core Xeon Gold 6230T 2.1GHz processors), a Tesla T4 GPU, and 512GiB of memory fronting 8 x Ultrastar DC SN640 NVMe 7.68TB SSDs, plus 100GbitE networking.

Western Digital Ultrastar Edge transportable server.

That totals 61.44TB – not an especially large dataset capacity, particularly when Solidigm has announced a single 61.44TB SSD. Data is stored using MinIO object storage with its erasure coding, encryption, and object locking.

Kris Inapurapu, Chief Business Officer at MinIO, played the repatriation card, saying: “Our high-performance, cloud-agnostic object storage perfectly complements TD SYNNEX’s suite of services. As customers migrate data and repatriate from the cloud they need a combination of resilience, security and logistical support – this solution delivers just that.”

Businesses can schedule windows to take delivery of the Western Digital hardware and pay for what they need during the rental period.

TD SYNNEX’s website has a Data Migration Service microsite that says the “offering delivers an integrated, tested solution that lets you safely and securely ship all the required components you need overnight to provide a quick, easy and cost-effective path for physically migrating data.”

The software-defined services above provide a framework within which data sources are scanned to identify a migration dataset, data set files (or blocks with Cirrus) are transmitted to a target system, the data movement is verified and, when migration is complete, a cutover process can be instituted. These framework elements are not included in the SYNNEX offering.
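The verification step in that framework typically amounts to comparing checksums of source and target files. A minimal sketch of that one element (the helper names are hypothetical, not part of any vendor's tooling):

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in 1MiB chunks to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_migration(source_dir: Path, target_dir: Path) -> list[Path]:
    """Return relative paths whose source and target digests differ or are missing."""
    mismatches = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            dst = target_dir / rel
            if not dst.is_file() or file_digest(src) != file_digest(dst):
                mismatches.append(rel)
    return mismatches
```

An empty return list means every source file has an identical copy on the target; anything else flags files needing a re-copy before cutover.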

Instead SYNNEX says customers “select the date(s) for your data migration and we will ship the transportable migration platform to you directly. Rentals begin at a ten-day minimum.” There is nothing about how the data is extracted from the Ultrastar/MinIO chassis and moved to the destination system. This will be a manual process.

Tintri launches software-only VMstore and managed infrastructure service

DDN’s Tintri unit has unveiled a software-only rendition of its VMstore product and now offers VMstore through a managed infrastructure service framework.

Historically, Tintri’s VM-aware VMstore ran exclusively on its T7000 series hardware, using VMware storage abstractions. The separation of the VMstore software from Tintri’s hardware was announced as a Virtual Series project a year ago. Now the VMstore software, independent of the hardware, is termed the Tintri Cloud Engine (TCE), while the Tintri Cloud Platform (TCP) delivers Tintri’s VMstore as managed infrastructure.

Phil Trickovic, Tintri SVP of Revenue, said: “The new Tintri Cloud Platform and Tintri Cloud Engine offerings are a testament to our commitment to providing customers with all of the tools they need to manage their infrastructure, no matter the size of their workloads or where they are on their hybrid cloud transformation journey.” 

Customers now have three avenues to access VMstore: provisioned on purchased Tintri hardware; consumed as managed infrastructure via TCP; or as software running in the public cloud through TCE.

TCE is containerized and runs on the AWS public cloud, serving as an AWS VM with EBS storage. Its primary purpose is to provide public cloud storage for on-premises Tintri workload-based snapshots, facilitating recovery from interruptions and ransomware incursions. TCE features real-time deduplication and compression, copy data management, and both real-time and predictive analytics.

TCP is marketed as a turnkey offering that promises “host-in-cloud and process-in-cloud capabilities.” Tintri says TCP’s potential uses include virtual datacenter (VDC), Infrastructure-as-a-Service (IaaS), and Disaster Recovery-as-a-Service (DRaaS). The VDC service is a private virtual resource pool, based on VMstore, with a self-service portal, unlimited internet traffic, and firewall protection. The resource pool consists of CPU, RAM, all-flash storage, and network delivered through an enterprise-grade managed cloud platform co-located in two carrier-neutral datacenters in the US – one in Reno, Nevada, and the other in Grand Rapids, Michigan.

Tintri datacenter locations

TCP IaaS includes compute, network, storage, security, backup, recovery, and disaster recovery with the flexibility to scale as needed.

TCP DRaaS, tailored for customers using VMware on-site or within the TCP VDC, facilitates the replication of virtual workloads either from on-premises sources to TCP or between TCP regions. Additionally, users can integrate various applications directly into their VDC from a dedicated marketplace.

All of a customer’s VMstore systems can be managed from a single console with Tintri Global Center. 

TCP is available today for new and existing customers. TCE is available for new and existing Tintri VMstore T7000 customers running on AWS. There’s a TCE datasheet here and TCP-VDC, TCP-IaaS, and TCP-DRaaS datasheets here.

Kioxia has killed its KumoScale networked flash system

Quietly and with no fanfare, Kioxia has killed its KumoScale networked flash array, or JBOF. The deed was done three months ago, in May, with a note for partners: “Thank you for interest in KumoScale software (“Product”). There is no plan for enhancement beyond Version 3.22 as the Product has transitioned to maintenance only, and no new evaluation or production licenses will be granted. If you have any questions, please contact us.”

Joel Dedrick

“Kumo” is Japanese for cloud, and Toshiba Memory set up its cloud-scale SSD business unit seven years ago.

Kioxia America, or Toshiba Memory America as it then was, recruited Joel Dedrick – previously a consultant at Intel and before that at SanDisk – to become VP and GM of its networked storage software business unit in September 2016. He says on LinkedIn that he was “recruited to build and drive a new product line,” which became the KumoScale networked block storage software, and that he “defined [a] new product category,” the “networked block-storage node,” to distinguish KumoScale from all-flash arrays and JBOFs.

But KumoScale was, to all intents and purposes, a JBOF. The networked block storage node concept signalled it was equivalent to a flash storage chassis in an external block storage array with some controller software functionality – an inferior external and scale-out all-flash SAN, in other words.

Dedrick’s team built up a software stack to run the hardware, with much use of OpenStack. There was no reinventing of existing software wheels. Instead the software stack was consistently enhanced. For example:

  • Nov 2020 – integrated KumoScale flash storage array into the Kubernetes world.
  • June 2021 – added up-to-date OpenStack access control and open source integrations, and increased its network access availability and bandwidth with a preview of multi-path networking support for NVMe-oF storage over TCP/IP networks.
  • Dec 2021 – added admin tools and support for the latest version of OpenStack.
  • April 2022 – v2.0 included additional bare metal deployment options, seamless support for OpenID Connect 1.0, and support for NVIDIA Magnum IO GPUDirect Storage (GDS).
  • July 2023 – a cluster-wide Command Line Interface (Cluster CLI), compatibility with OpenStack Yoga multipathing, and interoperability with Microsoft Azure Active Directory.

The big issue was that disk and SSD supplier Toshiba did not want to make a full-scale SAN storage array, because doing so would pit it against its own OEM customers, who built their SANs with Toshiba SSDs. Rule number one: suppliers shouldn’t compete with customers. So Dedrick attempted to define a middle-ground product category, between drives and full SAN arrays, which sidestepped that trap – but it did not have enough substance. Seven years after it was founded, the business unit has had its main product put into maintenance.

We have asked Kioxia and it reiterated its earlier statement: “While KIOXIA is still supporting existing customers and KumoScale deployments there is no plan for enhancement beyond Version 3.22 as the product has transitioned to maintenance only, and no new evaluation or production licenses will be granted. We cannot comment on any additional details.”

Western Digital example

Western Digital faced the same kind of problem when it tried to build a datacenter storage business. It produced IntelliFlash, based on acquired Tegile disk and NVMe SSD array technology, and the ActiveScale archival array products. ActiveScale was based on HGST’s 2015 acquisition of object storage supplier Amplidata; WD had bought HGST in 2012.

The WD datacenter products business was killed off in September 2019, with the IntelliFlash product sold to DDN and the ActiveScale archival array to Quantum. WD still has its Ultrastar Edge Server rack chassis, containing 8 x 7.68TB SN640 NVMe SSDs, for edge data collection and physical transport to a datacenter.

Storage drive manufacturers should not generally compete with their channel of storage hardware/software system builders. WD learnt that lesson in 2019 and now, four years later, Kioxia has too.

Seagate still has product lines built on and around its disk drives, including the Lyve Drive Mobile, Lyve Cloud, and Exos RAID array products.

Pure Storage extends Azure Cloud Block Store to lower-cost SSD instances

Pure Storage’s Azure Cloud Block Store, its FlashArray Purity operating environment in Microsoft’s cloud, now supports Premium SSD v2 Disk Storage instances. This feature separates storage from compute to reduce costs and is in preview for the Azure VMware Solution (AVS).

Update: Pure Storage table entries added and cost saving quote updated. Lightbits table entries updated, 23 Aug 2023. Silk table entries updated, 24 Aug 2023. Volumez table entries updated, 25 Aug 2023.

Introduced in 2021, Cloud Block Store (CBS) for Azure enhances Microsoft’s fully managed VMware-as-a-Service offering – the Azure VMware Solution (AVS). Purchased by customers based on host nodes, AVS integrates compute, memory, network, and vSAN storage. However, this integration can escalate costs for customers simply seeking more storage, since it necessitates the concurrent purchase of added compute and networking resources.

Pure chief product officer Ajay Singh said: “This expanded partnership between Pure Storage and Microsoft creates a significant milestone, ushering in a new age of cloud migration, and ultimately driving faster, more cost effective adoption of cloud services.”

Upon its debut, CBS for Azure employed Azure Ultra Disk Storage instances, block-level storage volumes paired with Azure Virtual Machines. The Azure alternatives encompass premium SSDs, Premium SSD v2, standard SSDs, standard HDDs, and the Elastic SAN (currently in preview), which consolidates a customer’s Azure storage needs. A table summarizes their characteristics:

Pure now offers CBS on Premium SSD v2

As the table shows, Premium SSD v2 offers 80,000 IOPS – half of Ultra Disk’s 160,000, yet quadruple that of a regular Premium SSD instance (20,000 IOPS). Its throughput stands at 1,200MBps, below Ultra Disk’s 4,000MBps but above the Premium SSD instance’s 900MBps.
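Those per-tier maximums can be sketched as a small lookup table. The figures are the ones quoted above; the `TIERS` dict and `ratio` helper are ours for illustration, not an Azure API:

```python
# Per-disk maximums for the Azure managed disk tiers quoted in the article.
# The dictionary and helper below are illustrative only, not an Azure API.
TIERS = {
    "Ultra Disk":     {"iops": 160_000, "mbps": 4_000},
    "Premium SSD v2": {"iops": 80_000,  "mbps": 1_200},
    "Premium SSD":    {"iops": 20_000,  "mbps": 900},
}

def ratio(tier_a: str, tier_b: str, metric: str) -> float:
    """Return how many times tier_a exceeds tier_b on the given metric."""
    return TIERS[tier_a][metric] / TIERS[tier_b][metric]

# Premium SSD v2 offers 4x the IOPS of Premium SSD, half those of Ultra Disk.
```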

Pure says it has built a new version of CBS that makes its use of Premium SSD v2 instances as fast as before. Cody Hosterman, Pure’s senior director of product management for cloud, told us: “Based on Premium SSD v2 inside of Azure, we were able to take this less expensive tier versus Ultra without any performance change on our side.”

Further benefits of CBS on Premium SSD v2 include immutable SafeMode snapshots, compression, deduplication, thin provisioning, multi-tenancy, encryption, disaster recovery, and high availability. Importantly, it allows storage to scale flexibly and independently of compute resources – a limitation of the Ultra Disk instances. Hosterman said: “Our new premium model is 1/3 the cost of the previous model, … but the overall savings to their cloud storage bill is usually more like 40 percent.”

He says this enables customers to migrate on-premises workloads more cost-effectively, with potential drivers including business continuity and DR, datacenter expansion into the cloud (or the reverse – datacenter reduction), and VDI.

Microsoft and Pure’s collaborative efforts have paved the way for CBS on Azure to serve as an external block storage option for AVS. Pure says this is the first external block storage for VMware Cloud. Microsoft built a framework with its PowerShell to enable vSphere VMFS (Virtual Machine File System) support. Pure integrated this AVS PowerShell with its own PowerShell SDK to produce a new version of its plugin for VMware Cloud Manager, enabling Azure AVS customers to use Cloud Block Store.

Hosterman said a 100TB AVS configuration using vSAN could need 10 AV36 host nodes. Moving to Pure’s CBS reduces that number to three and provides cost savings, he claimed – 56 percent on hourly commit, 50 percent on one-year reserved and 42 percent on three-year reserved rates.
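The arithmetic behind that claim can be sketched as follows. Only the node counts (ten AV36 hosts down to three) come from the article; the per-node and CBS rates below are made-up placeholders, chosen so the sketch reproduces the quoted 56 percent hourly saving:

```python
def saving_pct(nodes_before: int, nodes_after: int,
               node_rate: float, cbs_rate: float) -> float:
    """Percentage cost saving from replacing vSAN-only hosts with
    fewer hosts plus Cloud Block Store. All rates are hypothetical."""
    before = nodes_before * node_rate          # all-vSAN configuration
    after = nodes_after * node_rate + cbs_rate  # fewer hosts + CBS capacity
    return round(100 * (before - after) / before, 1)

# Hypothetical hourly rates: $10 per host node, $14 for the CBS capacity.
print(saving_pct(10, 3, 10.0, 14.0))  # 56.0 – matching the quoted hourly figure
```

The real savings obviously depend on Azure's actual AV36 and storage pricing; the point is simply that shrinking the host count dominates the added storage cost.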

Singh said: “Pure Cloud Block Store for Azure VMware Solution is just the beginning. By optimizing performance and cost at scale, we look forward to unlocking the number of mission-critical use cases that we can serve in the coming years.”

In simpler terms, with these enhancements, migrating on-premises, storage-intensive database workloads to Azure becomes more economical.

Competition

Azure’s block storage space also hosts other third-party suppliers: Lightbits, Silk, and Volumez. A preliminary comparison, based on publicly available documentation, outlines their offerings relative to Pure on the Azure platform. However, this should be regarded as a basic overview rather than a comprehensive analysis:

For comparison, AWS block storage competitors include the likes of Infinidat and Dell’s PowerFlex (formerly ScaleIO).

Databricks reportedly seeking VC funding

AI-focused analytics lakehouse supplier Databricks wants to raise more funding to continue its breakneck expansion and aims to overtake Snowflake as the largest data analytics company in the world.

Databricks supplies a data lakehouse, a combination of a data warehouse and data lake, and was founded in 2013 by the original creators of the Apache Spark in-memory big data processing platform. It has raised $3.5 billion in funding through nine funding events and is heavily focussed on AI/ML analytics workloads. Databricks had a $38 billion valuation in 2021. In August 2022 Databricks said it had achieved $1 billion in annual recurring revenues, up from $350 million ARR two years prior, but it did not say it had a positive cash flow.

Both SiliconANGLE and The Information report that Databricks wants to raise hundreds of millions of dollars. Both cite sources close to the company saying Databricks made an operating loss of $380 million in its fiscal 2023, which ended in January, and has lost around $900 million across fiscal 2023 and fiscal 2022 combined.

Databricks pulled in a massive $2.6 billion in funding in 2021 and embarked on an acquisition spree to buy in AI/ML-related software technology:

  • October 2021 – 8080 Labs – a no-code data analysis tool built for citizen data scientists.
  • April 2022 – Cortex Labs – an open source platform for deploying and managing ML models in production.
  • October 2022 – DataJoy – which had raised a $6 million seed round for its ML-based revenue intelligence software.
  • May 2023 – Okera – definitive agreement to buy the AI-centric data governance platform.
  • June 2023 – Rubicon – storage infrastructure for AI.
  • June 2023 – Mosaic – definitive agreement for $1.3 billion in what Databricks tells us was a mostly stock deal. The buy completed on 19 July 2023.

Charting these alongside its funding rounds gives an indication of Databricks’ fundraising and acquisition spending:

Blocks & Files chart.

The cost of the Mosaic acquisition was $1.3 billion, while the other five acquisitions were for undisclosed amounts. In essence, Databricks pulled in a lot of cash in 2021 and then spent a lot of it in 2021 (buying 8080 Labs), 2022 (buying Cortex Labs and DataJoy), and so far this year (buying Okera, Rubicon, and Mosaic). That was in addition to its normal cash burn growing the company, with marketing spend on trade shows and so on.

Databricks sees the generative AI market as a huge opportunity to grow its business substantially. For that it needs more cash. We’ve asked the company for comment and were told: “Databricks won’t comment on this occasion.”